By Luca Nannini, PhD at Indra, Minsait.

The blog article is an exploratory analysis of how XAI principles could strengthen user trust and data integrity in blockchain systems. After a brief overview of current international AI ethics news, it introduces explainable AI. It then presents a few research papers that quantify the benefits of AI, XAI, and blockchain solutions in terms of security and performance.

This blog article was published as a result of the collaboration between Luca Nannini, NL4XAI fellow, and Affidaty, an Italian hardware & software development company working with blockchain technology.

The increasing complexity of data sets and AI algorithms makes it challenging to ensure their quality. Because of this complexity, the explicability of AI models has become a major area of research. XAI techniques improve the interpretability of AI models: they allow developers to more easily identify and manage the impact of individual variables on their work, and they allow ordinary citizens to receive an explanation of the results of their requests.

Paired with XAI models, decentralized AI solutions relying on blockchain architectures could make it easier to distribute data while guaranteeing its traceability and accessibility. Combining blockchain and AI can create a secure, decentralized, and tamper-evident system for storing and processing AI-generated data, bringing greater performance in computational applications as well as stronger security for data accountability.

There are already conceptual frameworks that propose using XAI to enable data-chain traceability through IPFS (the InterPlanetary File System). These frameworks suggest that XAI-based DApps could collect and store metadata from AI predictor nodes, making it possible to validate both that metadata and the XAI explanations derived from it.
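To make the idea concrete, here is a minimal sketch of content-addressing the metadata of an AI predictor node, in the style of an IPFS CID. The node name, metadata fields, and explanation format are all hypothetical illustrations, not part of any cited framework; real IPFS uses multihash-encoded CIDs rather than raw SHA-256 hex digests.

```python
import hashlib
import json

def content_address(metadata: dict) -> str:
    """Derive a deterministic, CID-like identifier from a metadata record.

    Canonical JSON (sorted keys) ensures the same metadata always hashes
    to the same identifier, so any later tampering is detectable.
    """
    canonical = json.dumps(metadata, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical metadata emitted by an AI predictor node alongside its output.
record = {
    "node_id": "predictor-07",
    "model_version": "1.4.2",
    "prediction": "approve",
    "explanation": {"method": "SHAP", "top_features": ["income", "tenure"]},
}

cid = content_address(record)
# Any change to the prediction or its explanation yields a different CID,
# which is what lets validators check metadata integrity on-chain.
```

Anchoring such identifiers in blockchain transactions, while keeping the full metadata off-chain in IPFS, is the usual pattern for combining traceability with scalable storage.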

To read the full blog post (available in Italian), please click here:

Categories: News