January 27, 2026, co-located with AAAI 2026
Singapore EXPO
Machine learning (ML) models are impressive when they work, but they can also exhibit unreliable, untrustworthy, and harmful behavior. Such behavior is even more common in the era of large models, such as ChatGPT, which are being adopted rapidly even though we do not understand why they work so well, nor why they sometimes fail badly. Unfortunately, this rapid dissemination encourages irresponsible use, for example, to spread misinformation or create deepfakes, while hindering efforts to use these models to solve pressing societal problems and advance human knowledge. Ideally, we want models with a human-like capacity to learn by observing, theorizing, and validating theories to improve their understanding of the world. At the very least, we want them to aid human knowledge and help us further enrich it.
Our goal in this workshop is to bring together researchers working on understanding model behavior and to show how this key aspect can lead to the discovery of new human knowledge. The workshop sits at the intersection of eXplainable AI (XAI) and AI4Science. While "BLUE XAI" methods focus on communicating model outputs to end users, scientific discovery demands "RED XAI" approaches that probe, decompose, and re-engineer the models themselves. Bridging these two cultures raises questions that neither traditional XAI nor broad AI4Science venues currently cover.
The workshop will include theoretical topics on understanding model behavior, namely interpretability and explainability, as well as scientific application areas such as weather and climate, healthcare, and materials science (AI4Science). These topics are brought together to highlight how seemingly diverse applied scientific fields can leverage XAI for knowledge discovery.
A-priori (i.e., ante-hoc) interpretability and self-explainable models for understanding model behavior
A-posteriori (i.e., post-hoc) interpretability and attribution methods for understanding model behavior, including methods for evaluating the accuracy of post-hoc interpretability and attribution
Practical use of interpretability and explainability for knowledge discovery in
• 🌦️ Weather and climate science,
• Healthcare,
• Materials science

TIME | EVENT & PRESENTERS |
---|---|
8:50 am - 9:00 am | Opening Remarks |
9:00 am - 9:30 am | Invited talk I: Invited Speaker I Affiliation |
9:30 am - 10:00 am | Invited talk II: Invited Speaker II Affiliation |
10:00 am - 10:30 am | Flash Talk I: overview of posters in Session I (accepted authors) |
10:30 am - 12:00 pm | Poster Session I & Coffee Break |
12:00 pm - 12:10 pm | Contributed talk I: Contributed Speaker I Affiliation |
12:10 pm - 1:30 pm | LUNCH BREAK |
1:30 pm - 2:00 pm | Invited talk III: Invited Speaker III Affiliation |
2:00 pm - 2:30 pm | Invited talk IV: Invited Speaker IV Affiliation |
2:30 pm - 3:00 pm | Flash Talk II: overview of posters in Session II (accepted authors) |
3:00 pm - 4:30 pm | Poster Session II & Coffee Break |
4:30 pm - 4:40 pm | Contributed talk II: Contributed Speaker II Affiliation |
4:40 pm - 5:30 pm | Panel Discussion |
5:30 pm - 5:40 pm | Closing Remarks |
Submission Due | October 22, 2025, AoE |
Decision Notification | November 5, 2025, AoE |
Workshop Date | January 27, 2026 |
National University of Singapore
National University of Singapore
RIKEN-AIP
University of Cambridge
Trinity College Dublin
Cohere For AI
Fraunhofer Heinrich Hertz Institute
Fraunhofer Heinrich Hertz Institute
Oak Ridge National Laboratory
University of Cambridge
University of Geneva