January 27, 2026, co-located with AAAI 2026
Singapore EXPO | Room Garnet 215
Machine learning (ML) models are impressive when they work, but they can also behave in unreliable, untrustworthy, and even harmful ways. Such behavior is even more common in the era of large models, such as ChatGPT, which are being adopted rapidly even though we do not understand why they work so well yet still fail miserably at times. This rapid dissemination encourages irresponsible use, for example to spread misinformation or create deepfakes, while hindering efforts to apply these models to pressing societal problems and to advance human knowledge. Ideally, we want models with a human-like capacity to learn by observing, theorizing, and validating theories to improve their understanding of the world. At the very least, we want them to support human knowledge and help us enrich it further.
Our goal in this workshop is to bring together researchers working on understanding model behavior and to show how this understanding can lead to the discovery of new human knowledge. The workshop sits at the intersection of eXplainable AI (XAI) and AI4Science. While "BLUE XAI" methods focus on communicating model outputs to end users, scientific discovery demands "RED XAI" approaches that probe, decompose, and re-engineer the models themselves. Bridging these two cultures raises questions that neither traditional XAI nor broad AI4Science venues currently cover.
The workshop will cover both theoretical topics on understanding model behavior, namely interpretability and explainability, and scientific application areas such as weather and climate science, healthcare, and materials science (AI4Science). These topics are brought together to highlight how seemingly diverse applied scientific fields can leverage XAI for knowledge discovery.
A-priori (i.e., ante-hoc) interpretability and self-explainable models for understanding model behavior
A-posteriori (i.e., post-hoc) interpretability and attribution methods for understanding model behavior, including methods for evaluating the accuracy of post-hoc interpretability and attribution (a brief attribution sketch follows the topic list)
Practical use of interpretability and explainability for knowledge discovery in
• 🌦️ Weather and climate science,
• Healthcare,
• Materials science.
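To ground the post-hoc attribution topic above, here is a minimal, illustrative sketch of one such method (input-gradient saliency). The toy model, input size, and target class are placeholder assumptions for illustration only, not workshop code.

```python
# Minimal post-hoc attribution sketch: input-gradient saliency.
# The model and data below are toy placeholders (assumptions), chosen
# only to show the mechanics of attributing a prediction to input features.
import torch
import torch.nn as nn

# A small classifier standing in for any differentiable model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()

x = torch.randn(1, 8, requires_grad=True)  # one input sample with 8 features
target_class = 2                           # class whose score we attribute

# Differentiate the target-class score with respect to the input:
# the gradient magnitude indicates each feature's local influence.
score = model(x)[0, target_class]
score.backward()
saliency = x.grad.abs().squeeze(0)

print("Per-feature attribution:", saliency.tolist())
```

Assessing whether such attributions are faithful to the model, for example via perturbation or feature-removal tests, is exactly the kind of evaluation question in scope for the workshop.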
Invited Talk Titles

| TIME | TALK TITLE |
|---|---|
| 9:00 am - 9:30 am | Generalizable Scientific Law Discovery in LLM Agents |
| 9:30 am - 10:00 am | Visualizing and Understanding Multimodal Interactions |
| 1:30 pm - 2:00 pm | Explainability Matters |
| 2:00 pm - 2:30 pm | Trustable XAI for Healthcare |
| TIME | EVENT & PRESENTERS |
|---|---|
| 8:50 am - 9:00 am | Opening Remarks |
| 9:00 am - 9:30 am | Invited talk I: Charles Cheung (NVIDIA) |
| 9:30 am - 10:00 am | Invited talk II: Paul Pu Liang (MIT) |
| 10:00 am - 10:20 am | Flash Talk I: overview of posters in Session I (accepted authors) |
| 10:20 am - 10:30 am | Contributed talk I: Alexander Owen Davies (University of Bristol) |
| 10:30 am - 12:00 pm | Poster Session I & Coffee Break |
| 12:00 pm - 1:30 pm | LUNCH BREAK |
| 1:30 pm - 2:00 pm | Invited talk III: Andrea Bertolini (Scuola Superiore Sant'Anna) |
| 2:00 pm - 2:30 pm | Invited talk IV: Hugues Turbé (University of Geneva) |
| 2:30 pm - 2:50 pm | Flash Talk II: overview of posters in Session II (accepted authors) |
| 2:50 pm - 3:00 pm | Contributed talk II: Yi Cao (Johns Hopkins University) |
| 3:00 pm - 4:30 pm | Poster Session II & Coffee Break |
| 4:30 pm - 5:00 pm | Panel Discussion |
| 5:00 pm - 5:10 pm | Closing Remarks |
Workshop Date: January 27, 2026
National University of Singapore
National University of Singapore
RIKEN-AIP
University of Cambridge
Trinity College Dublin
Fraunhofer Heinrich Hertz Institute
Fraunhofer Heinrich Hertz Institute
PrimaLabs
University of Cambridge
University of Geneva