Abstract
While machine learning promises to accelerate materials discovery, its opaque nature risks undermining both scientific rigor and trust in its predictions. This has motivated the development and use of eXplainable Artificial Intelligence (XAI) methods, which aim to elucidate the decision-making logic behind these intelligent systems. In this paper, we provide a critical review of recent advances in XAI applied to materials science, based on a systematic analysis of more than 140 publications. Our review identifies conceptual ambiguities in XAI terminology and clarifies the distinction between self-explanatory and post-hoc approaches. To this end, we introduce a taxonomy that organizes current families of XAI methods and highlights key methodological innovations in the field. Our analysis shows that SHAP dominates the literature to the point of having become the de facto gold standard for XAI in materials science; this widespread adoption presents both opportunities and concerns, even as graph-based and perturbation-based approaches continue to emerge. Finally, we discuss current limitations and open research questions about the use of XAI in the field, particularly regarding how explanations are evaluated and trusted. Moving from prediction to understanding is now the central challenge for applying machine learning to materials science.
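To make the perturbation-based family mentioned above concrete, the following is a minimal sketch of one such post-hoc method, permutation importance, applied to a toy regression model. This example is illustrative only and is not taken from the review: the descriptor names, synthetic data, and choice of scikit-learn's `permutation_importance` are assumptions for demonstration.

```python
# Perturbation-based explanation sketch: shuffle each input feature and
# measure the resulting drop in model score (permutation importance).
# Data and feature names below are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Toy materials descriptors (e.g. atomic radius, electronegativity, valence)
X = rng.normal(size=(200, 3))
# Target depends strongly on feature 0, weakly on feature 1, not on feature 2
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# Each feature is permuted n_repeats times; the mean score drop is its importance
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["radius", "electronegativity", "valence"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Because the importance is defined purely by perturbing inputs and observing the output, this treats the model as a black box, which is what distinguishes post-hoc methods of this kind from self-explanatory models.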
| Original language | English |
|---|---|
| Article number | 114493 |
| Journal | Computational Materials Science |
| Volume | 267 |
| DOI | |
| Status | Published - 10 Mar 2026 |
Fingerprint
Explore the research topics of 'From prediction to understanding: A review of XAI applications and innovations in materials science'. Together they form a unique fingerprint.