Juliette Faille, CNRS
Juliette Faille, a promising NL4XAI Early Stage Researcher, is currently based at the prestigious Centre National de la Recherche Scientifique (CNRS) in France. Her research centers on developing Explainable Models for Text Production, a critical area of natural language processing.
In this interview, we will discuss Juliette's journey as a researcher, her current work in Explainable AI, and her views on the future of natural language processing.
What is your current role?
I’m an Early Stage Researcher (ESR) in the NL4XAI project and a final-year PhD student in Computer Science. I’m a member of the Synalp group at the LORIA laboratory in Nancy, France. My PhD advisors are Claire Gardent (CNRS) and Albert Gatt (UOM/UU).
What is your research focus and what problem(s) does it address?
I’m working on the explainability of Natural Language Generation models. The state-of-the-art models for text generation are Large Language Models (LLMs), which on average produce impressively good results (as many people discovered recently with OpenAI’s ChatGPT). However, as these models become more widely used, it becomes even more crucial to make sure that the texts they produce are correct.
In the context of Data-to-Text generation (i.e. generating a text from some input data), the output text sometimes contains information that is not present in the input data (hallucinations) or fails to mention some of the input data (omissions). In my PhD project, we first evaluated the output texts and quantified their omissions and hallucinations. We then analysed different state-of-the-art Data-to-Text models to understand and detect omissions based on the model parameters. Finally, in collaboration with Quentin Brabant, Gwénolé Lecorvé and Lina Maria Rojas Barahona from Orange Labs, and in the context of dialogue generation, we studied how to improve the overall quality of output questions and, in particular, their faithfulness to the input data.
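To make the two error types concrete, here is a toy sketch of a surface-level omission/hallucination check for Data-to-Text generation. This is purely illustrative and not the method used in the project; all function names and examples are invented, and real evaluation requires far more fine-grained, model-aware analysis.

```python
# Toy sketch: surface-level check for omissions and hallucinations in
# Data-to-Text generation. Illustrative only; not the project's method.

def check_output(input_facts, generated_text, known_entities):
    """Compare input (subject, predicate, object) facts to the output text.

    An omission is an input value that is never mentioned in the text;
    a hallucination is a known entity that appears in the text but is
    not part of the input data.
    """
    text = generated_text.lower()
    # Collect the subject and object of each fact as values to verbalise.
    input_values = {v.lower() for fact in input_facts for v in (fact[0], fact[2])}

    omissions = sorted(v for v in input_values if v not in text)
    hallucinations = sorted(
        e.lower() for e in known_entities
        if e.lower() in text and e.lower() not in input_values
    )
    return omissions, hallucinations

facts = [("Alan Turing", "birthPlace", "London")]
text = "Alan Turing was born in Paris."
omissions, hallucinations = check_output(facts, text, ["London", "Paris"])
print(omissions)        # ['london']  -> the input value "London" was omitted
print(hallucinations)   # ['paris']   -> "Paris" was hallucinated
```

Such string matching only catches verbatim mentions; paraphrases, pronouns, and partial matches are exactly why detecting these errors at the level of model parameters is a research problem.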
What is your key takeaway so far?
As LLM applications become more and more popular, it becomes ever more crucial to work on their evaluation, interpretability, and controllability. Even though their output texts are on average of very high quality, some outputs still contain hallucinations or omissions that require fine-grained analysis to be detected.
What are (some of) the tasks in your day-to-day life as an ESR/PhD candidate?
Working on my PhD project first involves identifying research questions and assumptions to verify. This requires reading conference and journal articles related to my research topic. We then define and implement experiments (e.g. choosing the NLG model, the dataset, and the explainability algorithms to test). Finally, we analyse the results and write and submit scientific papers to present them.
I also regularly attend seminars, workshops or conferences related to my research.
As an ESR in the NL4XAI project, I visited different labs, at the University of Malta, the University of Utrecht and Orange Labs, where I collaborated with PhD students and researchers. I also took part in multiple training events organized by different universities of the NL4XAI project.
What keeps you entertained in your spare time?
In my spare time, I really like dancing, doing yoga and running. I also very much enjoy visiting art museums.
Where can people learn more about you and your research?
People can look at the papers we wrote here: https://aclanthology.org/people/j/juliette-faille/, and find information about my PhD project here: https://nl4xai.eu/people/esr6/