Explainability can foster trust in artificial intelligence in geoscience
- Author details
- Dramsch, Jesper Sören; Kuglitsch, Monique M.; Fernández-Torres, Miguel-Ángel; Toreti, Andrea; Albayrak, Rustem Arif; Nava, Lorenzo; Ghaffarian, Saman; Cheng, Ximeng; Ma, Jackie; Samek, Wojciech; Venguswamy, Rudy; Koul, Anirudh; Muthuregunathan, Raghavan; Hrast Essenfelder, Arthur.
- Unique identifier
- https://doi.org/10.1038/s41561-025-01639-x
- Summary
Artificial intelligence (AI) offers powerful tools for analysing complex geoscientific data and improving the detection, monitoring and forecasting of natural hazards, yet the increasing complexity of AI models often reduces their interpretability and limits trust in their results. This article argues that explainable artificial intelligence (XAI) can address this challenge by making the decision processes of AI models more transparent and understandable to human experts. XAI methods can reveal how models use input variables, detect errors or biases in data, and uncover physical relationships within environmental systems, thereby improving both model reliability and scientific understanding. Despite these benefits, the adoption of XAI in geoscience remains limited due to constraints in time, resources and methodological maturity. The authors therefore call for stronger demand from stakeholders, better resources and training, increased interdisciplinary collaboration, and the integration of XAI into standard AI workflows to foster trust and enable wider and more responsible use of AI in geoscience.
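The article discusses XAI methods in general terms rather than prescribing a specific one. As a concrete illustration of the point that XAI can "reveal how models use input variables", the minimal sketch below applies permutation feature importance (via scikit-learn) to a synthetic hazard-classification task. The feature names, model choice, and data are hypothetical stand-ins chosen for illustration and are not drawn from the paper.

```python
# Minimal sketch of one common XAI technique: permutation feature importance,
# which measures how much a trained model relies on each input variable.
# All features and labels here are synthetic and hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
rainfall = rng.normal(size=n)  # informative feature (hypothetical)
slope = rng.normal(size=n)     # informative feature (hypothetical)
noise = rng.normal(size=n)     # uninformative feature
X = np.column_stack([rainfall, slope, noise])
# Hypothetical hazard label driven only by rainfall and slope.
y = (rainfall + 0.5 * slope + 0.1 * rng.normal(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record the drop in held-out accuracy:
# a large drop means the model genuinely uses that variable.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for name, imp in zip(["rainfall", "slope", "noise"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Running this, the informative features score high while the noise feature scores near zero, which is the kind of sanity check the summary describes: exposing which inputs drive a model's predictions and flagging spurious dependencies.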
- Disclaimer
- Information and views set out in this community page are those of the authors and do not necessarily reflect the official opinion of the European Commission.