Ettore Mariotti, an MSCA PhD researcher at CiTIUS-USC, works on Explainable Artificial Intelligence (XAI) within the NL4XAI project. His research covers both explaining black-box AI models and developing white-box approaches to improve transparency and trustworthiness. Recently, he was interviewed by Matteo Ciprian, a Machine Learning Engineer based in Cambridge (UK) with expertise in Machine Learning, Sensor Fusion, and applied AI.
In the conversation, Ettore gave an engaging introduction to XAI and discussed his commitment to advancing research in this vital area of AI. The interview highlighted his efforts to make AI systems more accessible and understandable to users, emphasizing the importance of explainability in the ever-evolving field of Artificial Intelligence.
See the full interview: