Our work "Explainable neural image recommendation using Network Dissection visual concepts", by Antonio Ossa-Guerra, Denis Parra, and Hans Löbel from the Computer Science Department at PUC Chile, has been accepted to the LatinX in Computer Vision workshop at ICCV 2021, where it will be presented in the poster session.
In recommendation systems (RS), explanations are valuable due to their known benefits for user satisfaction, trustworthiness, and scrutability. However, most state-of-the-art RS learn user and item representations via matrix factorization or neural networks, which yields accurate but latent, non-interpretable suggestions. Visually-aware recommendation systems, for instance, rely on latent visual features obtained from pre-trained deep convolutional neural networks (CNNs), so they suffer from this problem.
In this article, we introduce a method for visually-aware recommendation that is both accurate and interpretable. To do this, we leverage Network Dissection to extract interpretable visual features from pre-trained neural networks; we then train a model on these interpretable representations and adopt a feature attribution method to explain the recommendations.
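To make the idea concrete, here is a minimal sketch of how a recommendation over an interpretable concept space can be explained. All concept names, activation values, and the user vector below are hypothetical, and the simple dot-product scorer with elementwise-product attribution stands in for the actual model and attribution method used in the paper:

```python
import numpy as np

# Hypothetical interpretable item representation: each dimension is a
# named visual concept (the kind Network Dissection associates with
# CNN units). Names and values here are illustrative only.
concepts = ["grass", "dog", "sky", "water", "building"]

# Toy concept-activation vectors for two items.
item_a = np.array([0.9, 0.8, 0.3, 0.0, 0.1])
item_b = np.array([0.1, 0.0, 0.6, 0.9, 0.7])

# A user preference vector learned over the SAME concept space, so the
# dot-product score decomposes into one contribution per concept.
user = np.array([0.7, 0.9, 0.1, 0.0, 0.2])

def score(user, item):
    """Recommendation score as a dot product in the concept space."""
    return float(user @ item)

def explain(user, item, top_k=2):
    """Attribute the score to concepts via the elementwise product,
    returning the top-k (concept, contribution) pairs."""
    contrib = user * item
    order = np.argsort(contrib)[::-1][:top_k]
    return [(concepts[i], float(contrib[i])) for i in order]

print(score(user, item_a))   # item_a scores higher for this user
print(score(user, item_b))
print(explain(user, item_a))  # e.g. "recommended because of: dog, grass"
```

Because every dimension carries a human-readable concept name, the same arithmetic that produces the score also produces a personalized, per-item explanation.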
Our results show that models trained with our approach perform comparably to a conventional latent-factor approach, with the additional ability to deliver personalized explanations. The proposed method also shows that it is possible to take advantage of existing "black box" systems and transform them into explainable systems by using appropriate item representations.