Ettore Mariotti, an MSCA PhD researcher at CiTIUS-USC, is working on Explainable Artificial Intelligence (XAI) within the NL4XAI project. His research covers both explaining black-box AI models and developing white-box approaches to improve transparency and trustworthiness. Recently, he was interviewed by Matteo Ciprian, a Machine Learning Engineer based in Cambridge (UK) with expertise in Machine Learning, Sensor Fusion, and applied AI.

In the conversation, Ettore gave an engaging introduction to XAI and highlighted his commitment to advancing research in this vital area of AI. The interview showcased his efforts to make AI systems more accessible and understandable to users, underlining the importance of explainability in the ever-evolving field of Artificial Intelligence.

See the full interview:


NL4XAI - Interactive Natural Language Technology for Explainable Artificial Intelligence