Workshop Program

Program and Proceedings

The workshop proceedings are, as every year, available at:

CEUR-WS.org

https://ceur-ws.org/Vol-3839

The articles will be indexed by Scopus, Google Scholar, and other major indexing platforms.



Tentative Schedule

November 26th, 2024 - Room D0.01

16:00 - 16:15 Opening

16:15 - 17:00 Invited Talk - Mirko Marras - Faithful Generative Modeling over Knowledge Graphs for Explainable Personalization

17:00 - 17:40 Session 1

  • 17:00 - 17:20 - Silvia D'Amicantonio, Mishal Kizhakkam Kulangara, Het Darshan Mehta, Shalini Pal, Marco Levantesi, Marco Polignano, Erasmo Purificato and Ernesto William De Luca. A Comprehensive Strategy to Bias and Mitigation in Human Resource Decision Systems
  • 17:20 - 17:40 - Leonardo Dal Ronco and Erasmo Purificato. ExplainBattery: Enhancing Battery Capacity Estimation with an Efficient LSTM Model and Explainability Features

17:40 - 17:45 Closing Day 1




November 27th, 2024 - Room Kolp 2

10:30 - 10:45 Opening Day 2

10:45 - 12:05 Session 2

  • 10:45 - 11:05 - Ejdis Gjinika, Nicola Arici, Luca Putelli, Alfonso Emilio Gerevini and Ivan Serina. An Analysis on How Pre-Trained Language Models Learn Different Aspects
  • 11:05 - 11:25 - Giovanna Castellano, Maria Grazia Miccoli, Raffaele Scaringi, Gennaro Vessio and Gianluca Zaza. Using LLMs to explain AI-generated art classification via Grad-CAM heatmaps
  • 11:25 - 11:45 - Zhuofan Zhang and Herbert Wiklicky. Probabilistic Abstract Interpretation on Neural Networks via Grids Approximation
  • 11:45 - 12:05 - Daehyun Yoo and Caterina Giannetti. Ethical AI Systems and Shared Accountability: The Role of Economic Incentives in Fairness and Explainability

12:05 - 12:10 Closing Day 2




Invited Talk

  • Speaker: Dr. Mirko Marras, Università degli Studi di Cagliari

  • Title: Faithful Generative Modeling over Knowledge Graphs for Explainable Personalization
  • Abstract: As personalized systems become an integral part of our daily lives, ensuring their transparency and relevance is of critical importance. To address this challenge, we will explore the use of generative models to create explainable suggestions rooted in the structure of knowledge graphs, ensuring that the generated outputs are both relevant and consistent with the underlying data. We will highlight the significance of faithfulness in generative modeling, presenting methods and applications that effectively balance personalization with explainability while preserving faithful generation. Examples from domains such as entertainment, education, and nutrition will illustrate how these approaches can enhance user trust and improve the impact of personalized systems. Furthermore, we will envision how these methods promise to generalize to a wide range of tasks beyond personalization, expanding their applicability across diverse fields and communities.

  • Bio: Mirko Marras is a Tenure-Track Assistant Professor of Deep Learning in the Department of Mathematics and Computer Science at the University of Cagliari (Italy). Prior to this, he was a postdoctoral researcher at EPFL (Switzerland) and a visiting scholar at Eurecat (Spain) and New York University (USA). His interdisciplinary research focuses on developing trustworthy machine learning methods, with applications spanning entertainment, education, nutrition, smart cities, and biometrics. He has co-authored over 90 papers in leading venues, including AAAI, ECIR, RecSys, SIGIR, and UMAP, and has delivered tutorials on responsible artificial intelligence, with a special emphasis on explainability, at prominent conferences such as RecSys 2022, ECML-PKDD 2023, and ECIR 2024. A regular member of the program committees of major conferences increasingly shaped by trustworthy artificial intelligence, he has received several outstanding reviewer awards, including at ECIR 2024 and at RecSys 2023 and 2024. He is a member of both ACM and IEEE.