XAI4Science: From Understanding Model Behavior to Discovering New Scientific Knowledge

April 27, 2025, co-located with ICLR 2025

Peridot 201 & 206 @ Singapore EXPO

About

Machine learning (ML) models are impressive when they work, but they can also exhibit unreliable, untrustworthy, and harmful behavior. Yet such models are widely adopted and deployed, even though we do not understand why they work so well and why they sometimes fail miserably. This rapid dissemination encourages irresponsible use, for example to spread misinformation or create deepfakes, while hindering efforts to use these models to solve pressing societal problems and advance human knowledge.


Ideally, we want models to help us improve our understanding of the world; at the very least, we want them to aid human knowledge and help us further enrich it. Our goal in this workshop is to take a step in this direction by bringing together researchers working on understanding model behavior and on using that understanding to discover new human knowledge. The workshop covers theoretical topics on understanding model behavior, namely interpretability and explainability (XAI), as well as three distinct scientific application areas: weather and climate, healthcare, and material science (AI4Science).

Topics

A-priori (i.e., ante-hoc) interpretability and self-explainable models for understanding model behavior


A-posteriori (i.e., post-hoc) interpretability and attribution methods for understanding model behavior, including methods for evaluating the accuracy of post-hoc interpretability and attribution


Practical use of interpretability and explainability for knowledge discovery in
• 🌦️ Weather and climate science,
• 🧪 Material science, and
• 🩺 Healthcare

Invited Speakers

🌦️ XAI for weather and climate science

Lily Xu
9:15 am - 9:45 am
Title: Barriers to Explainability for Sustainability, and A Path Towards More Reliable Models

David Rolnick
1:30 pm - 2:00 pm
Title: Application-driven Machine Learning in Climate Change

🤖 XAI for scientific discovery

Erik Cambria
11:15 am - 11:45 am
Title: Neurosymbolic AI for Hypothesis Generation in Social, Chemical, and Cognitive Sciences

Chandan Reddy
11:45 am - 12:15 pm
Title: Interpretable and Explainable Scientific Equation Discovery via LLM-Guided Symbolic Programming

🧪 XAI for material science

Qianxiao Li
4:00 pm - 4:30 pm
Title: Learning Interpretable Macroscopic Dynamics

🔎 XAI methodology

Maximilian Dreyer & Jim Berend
8:45 am - 9:15 am
Title: Probing for the Unknown: Mechanistic Concept Discovery Beyond Human Expectation With SemanticLens

Bryan Kian Hsiang Low
3:30 pm - 4:00 pm
Title: PIED: Physics-Informed Experimental Design for Inverse Problems

🩺 XAI for medicine

Grégoire Montavon
4:30 pm - 5:00 pm
Title: Explainable AI for Unsupervised Learning: Turning Raw Data into Scientific Insights

Schedule

TIME                  EVENT & PRESENTERS
8:30 am - 8:45 am     Introduction by organizers
8:45 am - 9:15 am     Invited talk I: Maximilian Dreyer & Jim Berend (Fraunhofer Heinrich Hertz Institute) [🔎 XAI methodology]
9:15 am - 9:45 am     Invited talk II: Lily Xu (Columbia University) [🌦️ XAI for weather and climate science]
9:45 am - 11:15 am    Poster Session I & Coffee Break
11:15 am - 11:45 am   Invited talk III: Erik Cambria (Nanyang Technological University) [🤖 XAI for scientific discovery]
11:45 am - 12:15 pm   Invited talk IV: Chandan Reddy (Virginia Tech) [🤖 XAI for scientific discovery]
12:15 pm - 1:30 pm    Lunch Break
1:30 pm - 2:00 pm     Invited talk V: David Rolnick (McGill University) [🌦️ XAI for weather and climate science]
2:00 pm - 3:30 pm     Poster Session II & Coffee Break
3:30 pm - 4:00 pm     Invited talk VI: Bryan Kian Hsiang Low (National University of Singapore) [🔎 XAI methodology]
4:00 pm - 4:30 pm     Invited talk VII: Qianxiao Li (National University of Singapore) [🧪 XAI for material science]
4:30 pm - 5:00 pm     Invited talk VIII: Grégoire Montavon (Charité Universitätsmedizin Berlin & BIFOLD) [🩺 XAI for medicine]
5:00 pm - 5:15 pm     Contributed Talk: Thomas Decker (Siemens AG, LMU Munich, & MCML)
5:15 pm - 5:55 pm     Panel Discussion
5:55 pm - 6:00 pm     Closing Remarks

Call for Submissions

Submission Guidelines

This year, ICLR is discontinuing the separate “Tiny Papers” track and is instead requiring each workshop to accept short paper submissions (3–5 pages in ICLR format, with the exact page length determined by each workshop), with an eye towards inclusion; see the Call For Tiny Papers for more details. Authors of these papers will be earmarked for potential funding from ICLR, but need to submit a separate application for Financial Assistance that evaluates their eligibility. The application for Financial Assistance to attend ICLR 2025 will become available on the ICLR 2025 website at the beginning of February and close on March 2nd.

Submission Tracks
(1) Regular Track: Regular workshop paper (Page limit: 6–8 pages).
(2) Tiny Paper Track: Short / Tiny paper (Page limit: 3–5 pages).
Submission Format
Submissions must be in a single PDF file and are required to use the ICLR 2025 LaTeX template. The list of references does not count towards the page limit. Authors may use as many pages of appendices as they wish, but reviewers are not required to read the appendix.
Submission Link
Papers should be submitted via OpenReview.
General Policy
The workshop is non-archival and does not publish proceedings. Submissions can be subsequently or concurrently submitted to other venues.
All submissions are double-blind and will be peer-reviewed by at least two reviewers. Please do not include any identifying information in your submission. Any submission found to violate this policy may be desk-rejected.
Presentation
All accepted papers must be presented in person as posters.

Camera-ready Submission Guidelines

The camera-ready version should be submitted as a single PDF. Please follow the XAI4Science workshop camera-ready LaTeX template and de-anonymize your submission to include the complete author list. You are allowed a maximum of 9 pages for the regular paper track and 6 pages for the tiny paper track (excluding references and appendices). If you have supplementary material, you may append it to the end of the main paper. Please submit the camera-ready version via OpenReview by April 19th.

Important Dates

Submission Open:          January 17, 2025, AoE
Submission Deadline:      February 10, 2025, AoE
Decision Notification:    March 5, 2025, AoE
Camera-ready Submission:  April 19, 2025, AoE
Workshop Date:            April 27, 2025

Call for Reviewers

We rely on our reviewers for the quality of the workshop program. Please fill out this form if you are interested in being a reviewer for the ICLR 2025 Workshop on XAI4Science. The review period will run from February 10th to 28th, with emergency reviews taking place the following week.


Thank you very much for your willingness to support the workshop in this manner!

Organizers

Gianmarco Mengaldo
National University of Singapore

Jiawen Wei
National University of Singapore

Emtiyaz Khan
RIKEN-AIP

Abeba Birhane
Trinity College Dublin

Sara Hooker
Cohere For AI

Sebastian Lapuschkin
Fraunhofer Heinrich Hertz Institute

Program Committee

Wojciech Samek
Fraunhofer Heinrich Hertz Institute

Prasanna Balaprakash
Oak Ridge National Laboratory

Hugues Turbé
University of Geneva