Motivation

Background and Topic Relevance

Nowadays we are witnessing a new summer of Artificial Intelligence, since AI-based algorithms are being adopted in a growing number of contexts and application domains, ranging from media and entertainment to medical, financial, and legal decision-making. While the very first AI systems were easily interpretable, the current trend shows the rise of opaque methodologies such as those based on Deep Neural Networks (DNNs), whose (very good) effectiveness is contrasted by the enormous complexity of the models, due to the huge number of layers and parameters that characterize them. As intelligent systems become more and more widely applied (especially in very "sensitive" domains), it is no longer acceptable to adopt opaque or inscrutable black-box models, or to ignore the general rationale that guides the algorithms in the tasks they carry out. Moreover, the metrics usually adopted to evaluate the effectiveness of these algorithms reward very opaque methodologies that maximize the accuracy of the suggestions at the expense of the transparency and explainability of the model. This issue is felt even more keenly in light of recent developments, such as the General Data Protection Regulation (GDPR) and DARPA's Explainable AI Project, which further emphasize the need, and the right, for scrutable and transparent methodologies that can guide the user to a complete comprehension of the information held and managed by AI-based systems.

The workshop addresses these research questions and aims to provide a forum for the Italian community to discuss problems, challenges, and innovative approaches in the area.