Artificial Intelligence systems play an increasingly important role in our daily lives. As their importance grows, it is fundamental that the internal mechanisms guiding these algorithms be as clear as possible. It is not by chance that the recent General Data Protection Regulation (GDPR) emphasized the users' right to explanation when people face artificial intelligence-based technologies. Unfortunately, current research tends to go in the opposite direction, since most approaches try to maximize the effectiveness of the models (e.g., recommendation accuracy) at the expense of explainability and transparency. The main research question arising from this scenario is straightforward: how can we deal with such a dichotomy between the need for effective adaptive systems and the right to transparency and interpretability? Several research lines are triggered by this question: building transparent intelligent systems, analyzing the impact of opaque algorithms on end users, studying the role of explanation strategies, and investigating how to provide users with more control over the behavior of intelligent systems. The workshop addresses these research lines and aims to provide a forum for the Italian community to discuss problems, challenges, and innovative approaches in the various fields of AI.
Topics of interest include, but are not limited to:
- Explainable Artificial Intelligence
- Trustable and Transparent LLMs
- Justification Models in Generative AI
- Interpretable Machine Learning Models
- Strategies to Explain Black Box Decision Systems
- Designing new Explanation Styles and Approaches
- Evaluating Transparency and Interpretability of AI Systems
- Technical Aspects of Algorithms for Explanation
- Theoretical Aspects of Explanation and Interpretability
- Ethics in Explainable AI
- Argumentation Theory for Explainable AI
- Natural Language Generation for Explainable AI
- Human-Machine Interaction for Explainable AI
- Fairness and Bias Auditing
- Privacy-Preserving Explanations
- Privacy by Design Approaches for Human Data
- Monitoring and Understanding System Behavior
- Successful Applications of Interpretable AI Systems
- Demo and Proof of Concepts with Explainable Results