UCP Knowledge Network | Applied knowledge for action

https://www.nature.com/articles/s41561-025-01639-x

Published on 19 November 2025
This paper shows how explainable AI reveals model decision processes, building trust and supporting wider AI adoption in geoscience.
Research papers
Author details
Dramsch, Jesper Sören; Kuglitsch, Monique M.; Fernández-Torres, Miguel-Ángel; Toreti, Andrea; Albayrak, Rustem Arif; Nava, Lorenzo; Ghaffarian, Saman; Cheng, Ximeng; Ma, Jackie; Samek, Wojciech; Venguswamy, Rudy; Koul, Anirudh; Muthuregunathan, Raghavan; Hrast Essenfelder, Arthur
Unique identifier
https://doi.org/10.1038/s41561-025-01639-x
Comment

This paper discusses the challenges and opportunities of using artificial intelligence (AI) in geoscience. While AI can analyse complex, multidimensional data and solve nonlinear problems, increasing model complexity often reduces interpretability. In critical contexts, such as natural hazard scenarios, this lack of understanding can undermine trust and limit implementation. The authors argue that explainable AI (XAI) methods, which make opaque ‘black-box’ models more interpretable, can enhance human understanding, build trust in model results, and promote broader adoption of AI in geoscience research and applications.
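As a minimal illustration of the kind of model-agnostic XAI technique the authors discuss (not the paper's own method), the sketch below applies permutation feature importance to a nonlinear "black-box" regressor. The data and feature names are synthetic, hypothetical stand-ins for multidimensional geoscience variables.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a multidimensional geoscience dataset (hypothetical feature names).
X, y = make_regression(n_samples=500, n_features=6, n_informative=3, noise=0.5, random_state=0)
feature_names = ["rainfall", "slope", "soil_moisture", "ndvi", "elevation", "aspect"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A nonlinear model whose internal decision process is not directly interpretable.
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Model-agnostic explanation: permutation importance ranks features by how much
# shuffling each one degrades held-out predictive performance.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)

for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:>14}: {result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```

Rankings like these expose which inputs drive a model's predictions, the sort of insight the authors argue can make opaque models more trustworthy in hazard-related applications.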

Disclaimer
Information and views set out in this community page are those of the author(s) and do not necessarily reflect the official opinion of the European Commission.

Hazard types

Environmental; Geohazards; Multi-hazard

DRM Phases

Prevention

Geographic focus

All; Europe/EU

Sectors

AI, RPAS & remote sensing; Risk reduction & assessment

Risk drivers

Climate change; Environmental degradation