PhD thesis already defended

  1. Juliette Faille (ESR6). Explainable models for text production, CNRS, 2023. https://zenodo.org/records/14231559

  2. Ettore Mariotti (ESR1). A holistic perspective on designing and evaluating explainable AI models: from white-box additive models to post-hoc explanations for black-box models, USC, 2024. https://minerva.usc.es/xmlui/handle/10347/34497

  3. Michele Cafagna (ESR5). Visually Grounded Language Generation: Data, Models and Explanations beyond Descriptive Captions, UOM, 2024. https://zenodo.org/records/14052376

  4. Alisa Rieger (ESR10). Interactions to mitigate human biases, TUDelft, 2024. https://repository.tudelft.nl/record/uuid:703a1aad-d585-459a-b0b3-ac55d9e98fcd

  5. Luca Nannini (ESR11). Explainability in Process Mining: A Framework for Improved Decision-Making, USC, 2024. https://zenodo.org/records/14162735

Publications in Journals

  1. E. Mariotti (ESR1-CiTIUS-USC), A. Arias Duart, M. Cafagna (ESR5-UOM), A. Gatt, D. Garcia-Gasulla, M. J. Alonso, “TextFocus: Assessing the Faithfulness of Feature Attribution Methods Explanations in Natural Language Processing,” IEEE Access, 2023, https://doi.org/10.1109/ACCESS.2023.10543008
  2. A. Rieger (ESR10-TUDelft), T. Draws, M. Theune, N. Tintarev, "Nudges to Mitigate Confirmation Bias during Web Search on Debated Topics: Support vs. Manipulation," ACM Transactions on the Web, Volume 18, Issue 2, Article No. 27, pp. 1–27, 2024, https://doi.org/10.1145/3635034
  3. L. Nannini (ESR11-INDRA), E. Bonel, D. Bassi, M. Joshua, "Beyond phase-in: assessing impacts on disinformation of the EU Digital Services Act," AI and Ethics, 2024, https://doi.org/10.1007/s43681-024-00467-w
  4. E. Mariotti (ESR1-CiTIUS-USC), J. Alonso Moral, A. Gatt, "Exploring the balance between interpretability and performance with carefully designed constrainable Neural Additive Models," Information Fusion, 2023, http://doi.org/10.1016/j.inffus.2023.101882
  5. M. Cafagna (ESR5-UOM), L. M. Rojas-Barahona, K. van Deemter, A. Gatt, "Interpreting vision and language generative models with semantic visual priors," Frontiers in Artificial Intelligence, 2023, http://doi.org/10.3389/frai.2023.1220476
  6. T. Di Noia, N. Tintarev, P. Fatourou, M. Schedl, "Recommender systems under European AI regulation," Communications of the ACM, Volume 65, Issue 4, 2022, http://doi.org/10.1145/3512728
  7. L. Nannini (ESR11-INDRA), M. Marchiori Manerba, I. Beretta, "Mapping the landscape of ethical considerations in explainable AI research," Ethics and Information Technology, vol. 26, article 44, 2024, https://doi.org/10.1007/s10676-024-09773-7.

  8. Q. Shaheen (ESR7a-WUT), K. Budzynska, C. Sierra, “An explanation-oriented inquiry dialogue game for expert collaborative recommendations,” Argument & Computation, 2023, https://doi.org/10.3233/aac-230010.

  9. A. Landowska, K. Budzynska, H. Zhang (ESR8b-WUT), “Quantitative and Qualitative Analysis of Moral Foundations in Argumentation,” Argumentation, vol. 38, 2024, https://doi.org/10.1007/s10503-024-09636-x.

  10. I. Kesen, A. Pedrotti, M. Dogan, M. Cafagna (ESR5-UOM), E. C. Acikgoz, L. Parcalabescu, I. Calixto, A. Frank, A. Gatt, A. Erdem, E. Erdem, "ViLMA: A Zero-Shot Benchmark for Linguistic and Temporal Grounding in Video-Language Models," The Twelfth International Conference on Learning Representations (ICLR 2024), 2024, https://doi.org/10.48550/arxiv.2311.07022.

  11. S. Srivastava (ESR9-UTWENTE), S. Wentzel, A. Catala, M. Theune, “Measuring and implementing lexical alignment: A systematic literature review,” Computer Speech & Language, 2024, https://doi.org/10.1016/j.csl.2024.101731.

Publications in Conferences

  1. A. Sivaprasad (ESR3b-UNIABDN), E. Reiter, "Linguistically Communicating Uncertainty in Patient-Facing Risk Prediction Models," EACL 2024: Proceedings of the Conference of the European Chapter of the Association for Computational Linguistics, 2024, pp. 87–99, https://doi.org/10.48550/arXiv.2401.17511
  2. E. Mariotti (ESR1-CiTIUS-USC), A. Sivaprasad, J. Alonso, "Beyond Prediction Similarity: ShapGAP for Evaluating Faithful Surrogate Models in XAI," World Conference on Explainable Artificial Intelligence, 2023, https://link.springer.com/chapter/10.1007/978-3-031-44064-9_10
  3. E. Calò (ESR4-UTRECHT), J. Levy, "General Boolean Formula Minimization with QBF Solvers," 25th International Conference of the Catalan Association for Artificial Intelligence (CCIA 2023), 2023, https://doi.org/10.48550/arXiv.2303.06643
  4. A. Rieger (ESR10-TUDelft), F. Bredius, N. Tintarev, M. Pera, "Searching for the Whole Truth: Harnessing the Power of Intellectual Humility to Boost Better Search on Debated Topics," Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, 2023, http://doi.org/10.1145/3544549.3585693
  5. E. Calò (ESR4-UTRECHT), J. Levy, A. Gatt, K. van Deemter, "Is Shortest Always Best? The Role of Brevity in Logic-to-Text Generation," Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023), 2023, http://doi.org/10.5281/zenodo.8246480
  6. T. Mickus, E. Calò (ESR4-UTRECHT), L. Jacqmin, D. Paperno, M. Constant, "„Mann“ is to “Donna” as「国王」is to « Reine » – Adapting the Analogy Task for Multilingual and Contextual Embeddings," Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023), 2023, http://dx.doi.org/10.18653/v1/2023.starsem-1.25
  7. L. Nannini (ESR11-CiTIUS-USC), A. Balayn, A. Smith, "Explainability in AI Policies: A Critical Review of Communications, Reports, Regulations, and Standards in the EU, US, and UK," Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23), 2023, http://doi.org/10.1145/3593013.3594074
  8. S. Srivastava (ESR9-UTWENTE), M. Theune, A. Catala, "The Role of Lexical Alignment in Human Understanding of Explanations by Conversational Agents," Proceedings of the 28th International Conference on Intelligent User Interfaces, 2023, https://doi.org/10.1145/3581641.3584086
  9. A. Arias-Duart, E. Mariotti (ESR1-CiTIUS-USC), D. Garcia-Gasulla, J. Alonso-Moral, "A Confusion Matrix for Evaluating Feature Attribution Methods," 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2023, https://doi.org/10.1109/CVPRW59228.2023.00380
  10. M. Cafagna (ESR5-UOM), K. van Deemter, A. Gatt, "HL Dataset: Visually-Grounded Description of Scenes, Actions and Rationales," 16th International Natural Language Generation Conference, 2023, https://doi.org/10.18653/v1/2023.inlg-main.21
  11. M. Cafagna (ESR5-UOM), K. van Deemter, A. Gatt, "Understanding Cross-modal Interactions in V&L Models that Generate Scene Descriptions," The First Unimodal and Multimodal Induction of Linguistic Structures Workshop (UM-IoS) at the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP 2022), 2022, http://doi.org/10.5281/zenodo.7669908
  12. E. Calò (ESR4-UTRECHT), E. van der Werf, A. Gatt, K. van Deemter, "Enhancing and Evaluating the Grammatical Framework Approach to Logic-to-Text Generation," Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM), 2022, http://dx.doi.org/10.18653/v1/2022.gem-1.13
  13. A. Rieger (ESR10-TUDelft), "Interactive Interventions to Mitigate Cognitive Bias," Doctoral Consortium at the Conference on User Modeling, Adaptation and Personalization 2022, July 4–7, 2022, https://doi.org/10.1145/3503252.3534362
  14. A. Bringas Colmenarejo, L. Nannini (ESR11-INDRA), A. Rieger (ESR10-TUDelft), X. Zhao, G. Patro, G. Kasneci, K. Kinder-Kurlanda, "Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation," Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, 2022, http://doi.org/10.1145/3514094.3534158
  15. A. Rieger (ESR10-TUDelft), Q. Shaheen, C. Sierra, M. Theune, N. Tintarev, "Towards Healthy Engagement with Online Debates: An Investigation of Debate Summaries and Personalized Persuasive Suggestions," UMAP '22 Adjunct: Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, July 2022, https://doi.org/10.1145/3511047.3537692
  16. J. Faille (ESR6-CNRS), A. Gatt, C. Gardent, "Entity-Based Semantic Adequacy for Data-to-Text Generation," Findings of the Association for Computational Linguistics: EMNLP 2021, 2021, https://doi.org/10.18653/v1/2021.findings-emnlp.132
  17. A. Rieger (ESR10-TUDelft), T. Draws, M. Theune, N. Tintarev, "This Item Might Reinforce Your Opinion: Obfuscation and Labeling of Search Results to Mitigate Confirmation Bias," Proceedings of the 32nd ACM Conference on Hypertext and Social Media, 2021, https://doi.org/10.1145/3465336.3475101
  18. E. Mariotti (ESR1-CiTIUS-USC), J. Alonso, A. Gatt, "Prometheus: Harnessing Natural Language for Human-centric Explainable Artificial Intelligence," Proceedings of CAEPIA'21, Actas del XX Congreso Español sobre Tecnologías y Lógica Fuzzy, pp. 274–279, Málaga, 2021, https://doi.org/10.5281/zenodo.5878570
  19. E. Mariotti (ESR1-CiTIUS-USC), J. Alonso, R. Confalonieri, "A Framework for Analyzing Fairness, Accountability, Transparency and Ethics: A Use-case in Banking Services," Proceedings of the IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), Luxembourg, 2021, https://doi.org/10.1109/fuzz45933.2021.9494481
  20. J. Sevilla (ESR3-UNIABDN), "Explaining data using causal Bayesian networks," Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI), co-located with the 12th International Conference on Natural Language Generation (INLG), Dublin, Ireland, 2020, ACL Anthology, https://doi.org/10.5281/zenodo.5897993
  21. M. Demollin (ESR8-WUT), Q. Shaheen, K. Budzynska, C. Sierra, "Argumentation Theoretical Frameworks for Explainable Artificial Intelligence," 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2020), 2020, https://doi.org/10.5281/zenodo.5886641
  22. J. Faille (ESR6-CNRS), A. Gatt, C. Gardent, "The Natural Language Pipeline, Neural Text Generation and Explainability," 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, 2020, https://doi.org/10.5281/zenodo.5887676
  23. C. Hennessy (ESR2-CiTIUS-USC), A. Bugarín-Diz, E. Reiter, "Explaining Bayesian Networks in Natural Language: State of the Art and Challenges," Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI), co-located with the 12th International Conference on Natural Language Generation (INLG), Dublin, Ireland, 2020, ACL Anthology, https://doi.org/10.5281/zenodo.5882297
  24. A. Mayn (ESR4-UU), K. van Deemter, "Towards Generating Effective Explanations of Logical Formulas: Challenges and Strategies," Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI), co-located with the 12th International Conference on Natural Language Generation (INLG), Dublin, Ireland, 2020, ACL Anthology, https://doi.org/10.5281/zenodo.5906859
  25. E. Mariotti (ESR1-CiTIUS-USC), J. Alonso, A. Gatt, "Towards Harnessing Natural Language Generation to Explain Black-box Models," Proceedings of the 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, co-located with the 12th International Conference on Natural Language Generation (INLG), Dublin, Ireland, 2020, ACL Anthology, https://doi.org/10.5281/zenodo.5876893
  26. A. Rieger (ESR10-TUDelft), M. Theune, N. Tintarev, "Toward Natural Language Mitigation Strategies for Cognitive Biases in Recommender Systems," 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2020), 2020, http://doi.org/10.5281/zenodo.5883476.

  27. A. Rieger (ESR10-TUDelft), S. Kulane, U. Gadiraju, M. S. Pera, "Disentangling Web Search on Debated Topics: A User-Centered Exploration," Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 2024, http://doi.org/10.1145/3627043.3659559.

  28. A. Rieger (ESR10-TUDelft), F. Bredius, M. Theune, M. S. Pera, "From Potential to Practice: Intellectual Humility During Search on Debated Topics," Proceedings of the 2024 ACM SIGIR Conference on Human Information Interaction and Retrieval, 2024, http://doi.org/10.1145/3627508.3638306.

  29. Z. Zhao, M. Theune, S. Srivastava (ESR9-UTWENTE), D. Braun, "Exploring Lexical Alignment in a Price Bargain Chatbot," ACM Conference on Conversational User Interfaces (CUI 2024), 2024, http://doi.org/10.1145/3640794.3665576.

  30. S. Srivastava (ESR9-UTWENTE), M. Theune, A. Catala, C. Reed, “Trust in a Human-Computer Collaborative Task With or Without Lexical Alignment,” Adjunct Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization, 2024, http://doi.org/10.1145/3631700.3664868.

  31. H. Zhang (ESR8b-WUT), A. Landowska, K. Budzynska, "Detection and Analysis of Moral Values in Argumentation," Pre-Proceedings of the Value Engineering in AI Workshop, affiliated with the 26th European Conference on Artificial Intelligence (ECAI 2023), 2023, http://doi.org/10.13140/rg.2.2.13098.59849.

  32. L. Nannini (ESR11-INDRA), “Habemus a Right to an Explanation: so What? – A Framework on Transparency-Explainability Functionality and Tensions in the EU AI Act,” Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2023, http://doi.org/10.5281/zenodo.14041249.

  33. L. Nannini (ESR11-INDRA), M. M. Joshua, D. Bassi, E. Bonel, “Position: Machine Learning-powered Assessments of the EU Digital Services Act Aid Quantify Policy Impacts on Online Harms,” Proceedings of Machine Learning Research, 2024, http://doi.org/10.5281/zenodo.14041316.

  34. J. Sevilla (ESR3a-UNIABDN), N. Babakov (ESR2b-CiTIUS-USC), E. Reiter, A. Bugarín, "Explaining Bayesian Networks in Natural Language using Factor Arguments: Evaluation in the Medical Domain," First Workshop on Explainable Artificial Intelligence for the Medical Domain (EXPLIMED), 2024, http://doi.org/10.5281/zenodo.14040523.

  35. J. A. Lefebre Lobaina (ESR7b-WUT), C. Sierra, A. Georgara, “Exploiting Peer Trust and Semantic Similarities in the Assignment Assessment Process,” 2023, http://doi.org/10.5281/zenodo.14066237.

  36. Z. Wu, T. Draws, F. Cau, F. Barile, A. Rieger (ESR10-TUDelft), N. Tintarev, "Explaining Search Result Stances to Opinionated People," Communications in Computer and Information Science, ISBN: 9783031440663, 2023, https://doi.org/10.48550/arXiv.2309.08460.

Chapter in a Book

  1. A. Sivaprasad (ESR3b-UNIABDN), E. Reiter, N. Tintarev, N. Oren, "Evaluation of Human-Understandability of Global Model Explanations using Decision Tree," ECAI 2023: Proceedings of the European Conference on Artificial Intelligence, 2023, pp. 45–58, https://doi.org/10.1007/978-3-031-50396-2_3
  2. A. Rieger (ESR10-TUDelft), T. Draws, N. Mattis, D. Maxwell, D. Elsweiler, U. Gadiraju, D. McKay, A. Bozzon, M. Pera, "Responsible Opinion Formation on Debated Topics in Web Search," European Conference on Information Retrieval, 2024, https://doi.org/10.1007/978-3-031-56066-8_32
  3. L. Parcalabescu, M. Cafagna (ESR5-UOM), L. Muradjan, A. Frank, I. Calixto, A. Gatt, "VALSE: A Task-Independent Benchmark for Vision and Language Models Centered on Linguistic Phenomena," Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022, http://doi.org/10.18653/v1/2022.acl-long.567
  4. B. Wang, M. Theune, S. Srivastava (ESR9-UTWENTE), “Examining Lexical Alignment in Human-Agent Conversations with GPT-3.5 and GPT-4 Models,” Lecture Notes in Computer Science, Chatbot Research and Design, 2024, Pages [TBD], http://doi.org/10.1007/978-3-031-54975-5_6.

Other publications

  1.  Ewelina Gajewska, Katarzyna Budzynska, Barbara Konat, Marcin Koszowy, Eds. (2023). Linguistically Analysing Polarisation on Social Media. The New Ethos Reports, vol. 1. http://doi.org/10.17388/wut.2023.0001.ains
  2. Gajewska, E., Budzynska, K., Konat, B., Koszowy, M., Kiljan, K., Uberna, M., & Zhang, H. (ESR8b-WUT). Ethos and Pathos in Online Group Discussions: Corpora for Polarisation Issues in Social Media. http://doi.org/10.48550/arxiv.2404.04889
  3. Brand, Joshua L. M.; Nannini, Luca (ESR11). Does Explainable AI Have Moral Value? http://doi.org/10.48550/arxiv.2311.14687
  4. Nikolay Babakov (ESR2b-CiTIUS-USC), Ehud Reiter, Alberto Bugarín. Scalability of Bayesian Network Structure Elicitation with Large Language Models: a Novel Methodology and Comparative Analysis. http://doi.org/10.5281/zenodo.14046261

Posters

  1. S. Wentzel, M. Theune, S. Srivastava, D. Bucur, "Interplay between linguistic alignment and sentiment in online discussions," The 33rd Meeting of Computational Linguistics in The Netherlands, Antwerp, Belgium, 22 September 2023.
  2. L. Nannini, "Explainability in Process Mining: A Framework for Improved Decision-Making," AAAI/ACM AIES 2023, student track presentation and workshop attendance, Montreal, 8–10 August 2023.
  3. S. Srivastava, "Personalized Explanations by Conversational Agents using Lexical Alignment," Doctoral Consortium and poster sessions of the Conversational User Interfaces 2023 conference, July 19–21, 2023, Eindhoven, The Netherlands. https://doi.org/10.5281/zenodo.8381519
  4. A. Rieger, F. Bredius, N. Tintarev, M. Pera, "Searching for the Whole Truth: Harnessing the Power of Intellectual Humility to Boost Better Search on Debated Topics," Late Breaking Poster at the CHI Conference on Human Factors in Computing Systems, April 26–27, 2023. https://doi.org/10.5281/zenodo.8382587
  5. S. Srivastava, M. Theune, A. Catala, "Role of Lexical Alignment in Human Understanding of Explanations by Conversational Agents," ICT.Open 2023 conference, April 19–20, 2023, Utrecht, The Netherlands. https://doi.org/10.5281/zenodo.8381515
  6. A. Rieger, "Boosting Intellectual Humility to Mitigate Confirmation Bias during Search on Debated Topics," NWO ICT.OPEN 2023, April 20, 2023.
  7. A. Rieger, “Harnessing the Power of Intellectual Humility to Boost Better Search on Debated Topics,” Late Breaking Work Poster at CHI 2023, Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, Hamburg, Germany, 23–28 April 2023.

  8. A. Sivaprasad (ESR3b), “Linguistically Communicating Uncertainty in Patient-Facing Risk Prediction Models,” Scottish Informatics and Computer Science Alliance, Edinburgh, Scotland, 15 January 2024.

Organization of scientific workshops

  1. Paving the way towards Explainable Artificial Intelligence through Fuzzy Sets and Systems, José María Alonso Moral, Open Webinar on Explainable Artificial Intelligence, Università degli Studi di Urbino Carlo Bo, Italy, June 24, 2022.
  2. 2nd edition of NLG in the Lowlands, Utrecht University, Eduardo Calò, Kees van Deemter, Albert Gatt, June 22, 2023.
  3. Reconfiguring the Mold: Behavior Science and Ethical Tech Culture, Luca Nannini, WASP-HS AI for Humanity and Society 2023 Workshop, November 2023.

Prizes

  1. Best Paper Award at the 32nd ACM Conference on Hypertext and Social Media, won by Alisa Rieger (ESR10-TUDelft), August 30 – September 2, 2021. Title: This Item Might Reinforce Your Opinion: Obfuscation and Labeling of Search Results to Mitigate Confirmation Bias.

  2. BitsxLaMaraton hackathon won by Jaime Sevilla (ESR3-UNIABDN) and Alexandra Mayn (ESR4-UU), December 19–20, 2020. The hackathon was organised by the Barcelona School of Informatics (FIB), Hackers@UPC (organisers of HackUPC), and the Barcelona Supercomputing Center (BSC), and focused on developing solutions to the challenges posed by COVID-19.

Videos

  • Showcasing NL4XAI

Research Projects and Achievements by Our Early Stage Researchers

  • ESR2 Research Results


  • ESR7 Research Results

PhD title: Argumentation-based multi-agent recommender system


  • ESR6 Research Results.

Case study: Entity-Based Semantic Adequacy for Data-to-Text Generation.


  • Alisa Rieger, ESR10-TUDelft, Research Results.

Case study: This Item Might Reinforce Your Opinion: Obfuscation and Labeling of Search Results to Mitigate Confirmation Bias.


  • Alisa Rieger, ESR10-TUDelft, Research Results.

Case study: Searching for the Whole Truth: Harnessing the Power of Intellectual Humility to Boost Better Search on Debated Topics


  • Michele Cafagna, ESR5-UOM, Research Results.

Case study: Understanding Cross-modal Interactions in V&L Models that Generate Scene Descriptions


  • Ettore Mariotti, ESR1-CiTIUS-USC, Research Results.

Case study: Explaining black box AI models


  • Luca Nannini, ESR11-INDRA, Research Results.

Case study: Explainability in Process Mining: A Framework for Improved Decision-Making


  • Nikolay Babakov, ESR2-CiTIUS-USC, Research Results.

Case study: Explainable AI and Bayesian Networks Development

