https://www.nature.com/articles/s41561-025-01639-x
This paper shows how explainable AI can reveal model decision processes, helping to build trust and support wider AI adoption in geoscience.
Research papers
This paper discusses the challenges and opportunities of using artificial intelligence (AI) in geoscience. While AI can analyse complex, multidimensional data and solve nonlinear problems, increasing model complexity often reduces interpretability. In critical contexts, such as natural hazard scenarios, this lack of understanding can undermine trust and limit implementation. The authors argue that explainable AI (XAI) methods, which make opaque ‘black-box’ models more interpretable, can enhance human understanding, build trust in model results, and promote broader adoption of AI in geoscience research and applications.
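To make the idea of explainability concrete, the following is a minimal sketch, not drawn from the paper itself, of one widely used XAI technique: SHAP feature attribution applied to an opaque tree-ensemble model. The synthetic data, model choice, and feature setup are illustrative assumptions only.

    # Illustrative sketch (not the paper's method): using SHAP to attribute
    # a black-box model's predictions to its input features.
    import numpy as np
    import shap
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor

    # Hypothetical stand-in for multidimensional geoscience data,
    # e.g. features feeding a hazard-susceptibility score.
    X, y = make_regression(n_samples=500, n_features=6, noise=0.1, random_state=0)

    # An opaque 'black-box' ensemble model.
    model = GradientBoostingRegressor(random_state=0).fit(X, y)

    # TreeExplainer decomposes each prediction into per-feature
    # contributions, making the model's decision process inspectable.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X[:10])

    # Contributions for the first sample; large magnitudes mark the
    # features that drove this particular prediction.
    print(np.round(shap_values[0], 3))

In a hazard context, such attributions let a domain expert check whether a model's prediction rests on physically plausible drivers rather than spurious correlations, which is the kind of scrutiny the authors argue underpins trust.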