Explainable Artificial Intelligence

The main focus of the XAI subgroup is to create explanations for artificial intelligence systems that generally work as “black boxes.” We look for solutions that deliver explainable predictions without significantly affecting the model’s performance, and that adapt to the requirements of each application. This line of work is motivated by the GDPR, under which users of systems that make automated decisions have the right to an explanation of those decisions.
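As an illustration of the kind of post-hoc explanation this area studies (not a method from the publications below), the sketch computes permutation feature importance: a model-agnostic technique that explains a black-box classifier by measuring how much its accuracy drops when each feature is shuffled. The data and model here are purely synthetic placeholders.

```python
# Minimal sketch of a model-agnostic, post-hoc explanation:
# permutation feature importance on a synthetic task where
# only feature 0 carries the signal.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)  # label depends only on feature 0

model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

importances = []
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    # Importance of feature j = accuracy lost when it is shuffled.
    importances.append(baseline - model.score(X_perm, y))

print(importances)  # feature 0 should dominate
```

Because the explanation only queries the trained model, it leaves the model (and thus its performance) untouched, which matches the goal stated above.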

Students

Andrés Carvallo, Ivania Donoso, Hernán Valdivieso

Collaborators

Katrien Verbert (KU Leuven, Belgium), Chaoli Wang, Tobias Schreck

Latest publications

  1. Algorithmic and HCI aspects for explaining recommendations of artistic images
    Dominguez, Vicente, Donoso-Guzmán, Ivania, Messina, Pablo, and Parra, Denis
    ACM Transactions on Interactive Intelligent Systems (TiiS) 2020
  2. Interpretable Contextual Team-aware Item Recommendation: Application in Multiplayer Online Battle Arena Games
    Villa, Andrés, Araujo, Vladimir, Cattan, Francisca, and Parra, Denis
    In Proceedings of the 14th ACM Conference on Recommender Systems 2020
  3. Analyzing the Design Space for Visualizing Neural Attention in Text Classification
    Parra, Denis, Valdivieso, Hernán, Carvallo, Andrés, Rada, Gabriel, Verbert, Katrien, and Schreck, Tobias
    In Proc. IEEE VIS 2nd Workshop on Visualization for AI Explainability (VISxAI) 2019
  4. The Effect of Explanations and Algorithmic Accuracy on Visual Recommender Systems of Artistic Images
    Dominguez, Vicente, Messina, Pablo, Donoso-Guzmán, Ivania, and Parra, Denis
    In 24th International Conference on Intelligent User Interfaces 2019
  5. Towards Explanations for Visual Recommender Systems of Artistic Images
    Dominguez, Vicente, Messina, Pablo, Trattner, Christoph, and Parra, Denis
    In Joint Workshop on Interfaces and Human Decision Making for Recommender Systems 2018