In recent years, we have witnessed the rapid adoption of AI to automate and solve a wide range of tasks. However, AI-based systems often lack explainability, that is, the ability to let users understand the rationale behind a model's predictions. This problem has driven the development of explainable AI (XAI), and a variety of methods has been proposed to construct explanations. Several of these methods generate visualizations to support the explanation; XAI therefore involves not only AI knowledge, such as the model to be used or the algorithms to be explained, but also the knowledge needed to design and implement appropriate visualizations. Although workflows exist for designing interactive machine learning (IML) or XAI applications, they focus on the stages of the machine learning (ML) model-building process and do not provide guidelines or strategies for designing or analyzing the visualizations intended for XAI applications. To bridge this gap, in this thesis we propose starting from the XAI task space and connecting it to Munzner's widely adopted nested model for visualization design. On this basis, we propose the VD4XAI (Visualization Design for XAI) framework to guide the analysis and design of XAI visualizations for local explanations. Our goal is to bring together AI/ML experts, designers, domain experts, and end users in building effective XAI visualizations. This will also foster the development and application of visual XAI approaches.
How to join?
The defense will be held in Spanish. If you would like to attend, please send an email to Hernán (firstname.lastname@example.org) before the defense.