2nd XAI4Science: From Understanding Model Behavior to Discovering New Scientific Knowledge

January 27, 2026, co-located with AAAI 2026

Singapore EXPO

About

Machine learning (ML) models are impressive when they work, but they can also exhibit unreliable, untrustworthy, and even harmful behavior. Such behavior is all the more common in the era of large models, such as ChatGPT, which are being rapidly adopted even though we do not understand why they work so well, nor why they sometimes fail badly. Unfortunately, such rapid dissemination encourages irresponsible use, for example to spread misinformation or create deepfakes, while hindering efforts to use these models to solve pressing societal problems and advance human knowledge. Ideally, we want models with a human-like capacity to learn by observing, theorizing, and validating theories to improve their understanding of the world. At the very least, we want them to aid human knowledge and help us enrich it further.


Our goal in this workshop is to bring together researchers working on understanding model behavior and to show how this key aspect can lead to the discovery of new human knowledge. The workshop sits at the intersection of eXplainable AI (XAI) and AI4Science. While "BLUE XAI" methods focus on communicating model output to end users, scientific discovery demands "RED XAI" approaches that probe, decompose, and re-engineer the models themselves. Bridging these two cultures raises questions that neither traditional XAI nor broad AI4Science venues currently cover.

Topics

The workshop will include theoretical topics on understanding model behavior, namely interpretability and explainability, as well as scientific application areas such as weather and climate, healthcare, and materials science (AI4Science). These topics are brought together to highlight how seemingly diverse applied scientific fields can leverage XAI for knowledge discovery.


(T1) A-priori (i.e., ante-hoc) interpretability and self-explainable models for understanding model behavior


(T2) A-posteriori (i.e., post-hoc) interpretability and attribution methods for understanding model behavior, including methods for evaluating the accuracy of post-hoc interpretability and attribution


Practical use of interpretability and explainability for knowledge discovery in

• (P1) 🌦️ Weather and climate science,
• (P2) 🧪 Materials science, and
• (P3) 🩺 Healthcare

Schedule

TIME EVENT & PRESENTERS
8:50 am - 9:00 am Opening Remarks
9:00 am - 9:30 am Invited talk I: Invited Speaker I Affiliation
9:30 am - 10:00 am Invited talk II: Invited Speaker II Affiliation
10:00 am - 10:30 am Flash Talk I: overview of posters in Session I (accepted authors)
10:30 am - 12:00 pm Poster Session I & Coffee Break
12:00 pm - 12:10 pm Contributed talk I: Contributed Speaker I Affiliation
12:10 pm - 1:30 pm LUNCH BREAK
1:30 pm - 2:00 pm Invited talk III: Invited Speaker III Affiliation
2:00 pm - 2:30 pm Invited talk IV: Invited Speaker IV Affiliation
2:30 pm - 3:00 pm Flash Talk II: overview of posters in Session II (accepted authors)
3:00 pm - 4:30 pm Poster Session II & Coffee Break
4:30 pm - 4:40 pm Contributed talk II: Contributed Speaker II Affiliation
4:40 pm - 5:30 pm Panel Discussion
5:30 pm - 5:40 pm Closing Remarks

Call for Submissions

Submission Guidelines

Submission Tracks
(1) Regular Track: 6-8 pages, excluding references and appendices.
(2) Short Paper Track: 3-5 pages, excluding references and appendices.
Submission Format
Submissions must be in a single PDF file and are required to use the AAAI 2026 LaTeX template. The list of references does not count towards the page limit. Authors may use as many pages of appendices as they wish, but reviewers are not required to read the appendix.
Submission Link
Papers should be submitted via OpenReview. Please also make sure that all authors have an OpenReview profile with up-to-date information; creating one may take up to two weeks.
General Policy
The workshop is non-archival and does not publish proceedings. Submissions may subsequently or concurrently be submitted to other venues. We welcome (optionally anonymous) submissions of ongoing and unpublished work on any topics related to the workshop, including but not limited to the listed topics (T1, T2) and application areas (P1, P2, P3). Each paper will be peer-reviewed by at least two reviewers.
Presentation
All accepted papers must be presented in person as posters.

Important Dates

Submission Due October 22, 2025, AoE
Decision Notification November 5, 2025, AoE
Workshop Date January 27, 2026

Organizers

Gianmarco Mengaldo

National University of Singapore

Jiawen Wei

National University of Singapore

Krzysztof Kacprzyk

University of Cambridge

Abeba Birhane

Trinity College Dublin

Sara Hooker

Cohere For AI

Sebastian Lapuschkin

Fraunhofer Heinrich Hertz Institute

Program Committee

Wojciech Samek

Fraunhofer Heinrich Hertz Institute

Prasanna Balaprakash

Oak Ridge National Laboratory

Mihaela van der Schaar

University of Cambridge

Hugues Turbé

University of Geneva