Early Stage Researcher at CiTIUS, USC
Centro Singular de Investigación en Tecnoloxías Intelixentes (CiTIUS)
Universidade de Santiago de Compostela (USC)
Rúa de Jenaro de la Fuente Domínguez,
15782 Santiago de Compostela, Spain.
Connect
- +34 881 816 383
- ettore.mariotti@usc.es
Explaining black-box models in terms of grey-box twin models
· Main Supervisor: Dr. Jose M. Alonso, Centro Singular de Investigación en Tecnoloxías Intelixentes (CiTIUS) – Universidade de Santiago de Compostela (USC), josemaria.alonso.moral@usc.es
· PhD Co-Supervisor: Dr. Albert Gatt, Institute of Linguistics and Language Technology – Università ta’ Malta (UOM)
PHD RESEARCH TOPIC
Objectives:
To define, design and develop a new methodology for explaining black-box models in Natural Language. Taking inspiration from the digital-twin paradigm of Industry 4.0, the ESR will generate explainable models as digital twins of black-box models. The target is to explain neural networks, including deep architectures. The explainable twin model will be supported by a pool of grey-box models, such as Bayesian networks, interpretable fuzzy systems and decision trees, endowed with a linguistic layer that provides expert and non-expert users with explanations in Natural Language.
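As an illustration of the twin idea, the sketch below trains an opaque neural network, fits a shallow decision tree to mimic its predictions (a global surrogate), and verbalises one decision path through a simple template. The dataset, the use of scikit-learn, and the explanation template are illustrative assumptions, not the project's actual design.

```python
# Minimal sketch of a grey-box "twin": a decision tree distilled from a
# neural network, plus a template-based linguistic layer.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
X, y = iris.data, iris.target

# 1. The black-box model: an opaque neural network.
black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                          random_state=0).fit(X, y)

# 2. The grey-box twin: a shallow tree trained to mimic the black box's
#    predictions rather than the original labels.
twin = DecisionTreeClassifier(max_depth=3, random_state=0)
twin.fit(X, black_box.predict(X))

def explain(x):
    """Verbalise the twin's decision path for one instance x."""
    tree, node, clauses = twin.tree_, 0, []
    while tree.children_left[node] != -1:  # walk down until a leaf
        f, t = tree.feature[node], tree.threshold[node]
        if x[f] <= t:
            clauses.append(f"{iris.feature_names[f]} is at most {t:.2f}")
            node = tree.children_left[node]
        else:
            clauses.append(f"{iris.feature_names[f]} is above {t:.2f}")
            node = tree.children_right[node]
    pred = iris.target_names[tree.value[node].argmax()]
    return f"Predicted '{pred}' because " + " and ".join(clauses) + "."

print(explain(X[0]))
```

A natural sanity check for such a twin is its fidelity, e.g. twin.score(X, black_box.predict(X)): the fraction of inputs on which the twin and the black box agree.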
Use cases:
– explaining the outcomes of models for automatic description of human faces, for example in security applications (defined in a secondment at Università ta’ Malta); and
– a real use case (“Experiences, offering, demand and classification in semantic-search processes applied to business processes”) defined in an inter-sectoral secondment (Indra, Spain).
Expected Results:
(1) A theoretical understanding of the state of the art in relevant machine learning methods (with special attention to neural network and deep learning approaches) and in the digital-twin paradigm.
(2) A SWOT analysis of existing methods for explaining black-box models, together with a study of methods for explaining grey-box models in Natural Language.
(3) Definition and implementation of a new methodology for explaining black-box models in Natural Language.
(4) Validation of this methodology in two use cases, where the quality of the generated explanations will be evaluated by human users.
(5) A fully functional web service exposing the methodology from (3) as validated in (4); a minimal sketch of such a service follows this list.
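Expected result (5) could take the shape of a thin HTTP wrapper around such an explainer. The sketch below is a hypothetical minimal service built with Flask; the route name, the JSON payload format and the stubbed explain function are assumptions for illustration, not the project's actual interface.

```python
# Hypothetical minimal web service exposing a natural-language explainer.
from flask import Flask, jsonify, request

# In a real deployment this would be the twin-based explainer from the
# earlier sketch; a stub keeps this example self-contained.
def explain(features):
    return "Predicted 'setosa' because petal width (cm) is at most 0.80."

app = Flask(__name__)

@app.route("/explain", methods=["POST"])
def explain_endpoint():
    # Expects a JSON body such as {"features": [5.1, 3.5, 1.4, 0.2]}.
    features = request.get_json()["features"]
    return jsonify({"explanation": explain(features)})

if __name__ == "__main__":
    app.run(port=5000)
```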