
From prediction to understanding: A review of XAI applications and innovations in materials science

Research output: Contribution to journal › Article › peer-review

Abstract

While machine learning promises to accelerate materials discovery, its opaque nature risks undermining both scientific rigor and trust in its predictions. This has motivated the development and use of eXplainable Artificial Intelligence (XAI) methods, which aim to elucidate the decision-making logic behind these intelligent systems. In this paper, we provide a critical review of recent advances in XAI applied to materials science, based on a systematic analysis of more than 140 publications. Our review identifies conceptual ambiguities in XAI terminology and clarifies the distinction between self-explanatory and post-hoc approaches. Building on this distinction, we introduce a taxonomy that organizes current families of XAI methods and highlights key methodological innovations in the field. Our analysis highlights the dominance of SHAP, which has become the de facto gold standard for XAI in materials science; this widespread adoption raises both opportunities and concerns, even as graph-based and perturbation-based approaches continue to emerge. Finally, we present current limitations and open research questions about the use of XAI in the field, particularly regarding how we evaluate and trust explanations. Moving from prediction to understanding is now the central challenge for applying machine learning to materials science.
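To illustrate the attribution principle behind SHAP mentioned above, the following is a minimal, self-contained sketch (not the authors' code) that computes exact Shapley values by enumerating feature coalitions. The toy linear "model" and the descriptor values are invented for illustration; real SHAP implementations approximate this computation efficiently, since exact enumeration is exponential in the number of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at input x against a baseline.

    Features absent from a coalition are replaced by their baseline value.
    phi[i] is the average marginal contribution of feature i over all
    coalitions, weighted by the classic Shapley coefficient.
    """
    n = len(x)
    phi = [0.0] * n
    def value(S):
        # Evaluate the model with features in S taken from x, others from baseline.
        z = [x[i] if i in S else baseline[i] for i in range(n)]
        return f(z)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Hypothetical linear predictor (e.g. a property from three material descriptors).
model = lambda z: 2.0 * z[0] + 1.0 * z[1] - 0.5 * z[2]
x = [1.0, 2.0, 4.0]
baseline = [0.0, 0.0, 0.0]

phi = shapley_values(model, x, baseline)
# For a linear model, phi[i] = w_i * (x_i - baseline_i), so phi = [2.0, 2.0, -2.0];
# the attributions sum to f(x) - f(baseline), the efficiency property.
print(phi)
```

The efficiency property checked in the comment (attributions summing to the gap between the prediction and the baseline prediction) is one reason SHAP explanations are easy to read, and also one reason reviewers caution against over-interpreting them as causal statements.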

Original language: English
Article number: 114493
Journal: Computational Materials Science
Volume: 267
DOIs
Status: Published - 10 Mar 2026

Keywords

  • Artificial intelligence
  • Explainable
  • Interpretability
  • Machine learning
  • Material science
  • XAI
