In a recommendation context, explanations are valuable due to their known benefits for satisfaction, trustworthiness, and scrutability. Nowadays, the training of visually aware recommender systems relies on latent visual features obtained from pre-trained deep neural networks (DNNs). This approach has proven performant but lacks interpretability, since it cannot deliver explanations about user preferences or about the model itself. In this paper, we propose a novel framework for developing explainable versions of existing model architectures. The main component of the framework is a concept extractor that delivers interpretable representations of images based on the visual concepts present in them (from colors to objects and scenes). We then train model architectures on these interpretable representations and adopt a feature attribution method to explain their outputs. Our results show that models trained with the proposed approach perform similarly to conventionally trained ones, while gaining the ability to deliver personalized explanations. The proposed framework also shows that it is possible to take advantage of existing "black box" systems and transform them into explainable systems by using an appropriate item representation.
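The pipeline sketched in the abstract can be illustrated with a toy example. This is purely a hypothetical sketch, not the thesis' actual method: `CONCEPTS`, `extract_concepts`, and the toy interactions are all invented here, and the real framework would use a learned concept extractor and a more elaborate attribution method.

```python
# Hypothetical sketch: an item image is mapped to an interpretable concept
# vector, a simple linear preference model is trained on those vectors, and
# per-concept attributions explain each recommendation score.

CONCEPTS = ["red", "blue", "wood", "chair", "outdoor"]  # illustrative vocabulary

def extract_concepts(image_tags):
    # Stand-in for a real concept extractor (e.g. detectors for colors,
    # objects, and scenes): here, a binary presence vector over CONCEPTS.
    return [1.0 if c in image_tags else 0.0 for c in CONCEPTS]

def train_user_weights(interactions, lr=0.1, epochs=200):
    # Toy linear preference model: score(item) = w . concepts(item),
    # fitted with plain stochastic gradient descent on squared error.
    w = [0.0] * len(CONCEPTS)
    for _ in range(epochs):
        for x, y in interactions:  # y = 1.0 liked, 0.0 not liked
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

def explain(w, x):
    # Feature attribution for a linear model: contribution = weight * feature.
    contrib = {c: wi * xi for c, wi, xi in zip(CONCEPTS, w, x) if xi != 0.0}
    return sorted(contrib.items(), key=lambda kv: -kv[1])

liked = extract_concepts({"red", "chair"})
disliked = extract_concepts({"blue", "outdoor"})
w = train_user_weights([(liked, 1.0), (disliked, 0.0)])
candidate = extract_concepts({"red", "wood", "chair"})
print(explain(w, candidate))  # concepts ranked by contribution to the score
```

Because every feature is a named visual concept, the sorted contributions read directly as a personalized explanation ("recommended because it is red and a chair"), which is what an opaque DNN embedding cannot offer.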
How to join?
The defense will be in Spanish. If you want to attend, please send an email to Antonio (firstname.lastname@example.org) before the defense.