April 27, 2025, co-located with ICLR 2025
Peridot 201 & 206 @ Singapore EXPO
Machine learning (ML) models are impressive when they work, but they can also exhibit unreliable, untrustworthy, and harmful behavior. Yet such models are widely adopted and deployed, even though we do not understand why they work so well and why they sometimes fail miserably. This rapid dissemination encourages irresponsible use, for example to spread misinformation or create deepfakes, while hindering efforts to use these models to solve pressing societal problems and advance human knowledge.
Ideally, we want models to help us improve our understanding of the world; at the very least, we want them to aid human knowledge and help us further enrich it. Our goal in this workshop is to take a step in this direction by bringing together researchers working on understanding model behavior and on using it to discover new human knowledge. The workshop will cover theoretical topics on understanding model behavior, namely interpretability and explainability (XAI), as well as three distinct scientific application areas: weather and climate, healthcare, and material science (ML4Science). Topics of interest include:
• A-priori (i.e., ante-hoc) interpretability and self-explainable models for understanding a model's behavior
• A-posteriori (i.e., post-hoc) interpretability and attribution methods for understanding a model's behavior, including methods for evaluating the accuracy of post-hoc interpretability and attribution (see the sketch after this list)
• Practical use of interpretability and explainability for knowledge discovery in:
  • Weather and climate science,
  • Healthcare, and
  • Material science.
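To make the scope above concrete, here is a minimal sketch of one of the simplest post-hoc attribution methods, vanilla input-gradient saliency, in PyTorch. The model, input, and target class are hypothetical placeholders chosen for illustration; this is not code from any particular submission or speaker.

```python
# Minimal sketch: post-hoc attribution via vanilla input-gradient saliency.
# The model and input below are hypothetical placeholders.
import torch
import torch.nn as nn

# A stand-in classifier: 10 input features, 2 output classes.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# Hypothetical input; gradients w.r.t. it give the attribution scores.
x = torch.randn(1, 10, requires_grad=True)
target_class = 1

# Backpropagate the target logit to the input.
logits = model(x)
logits[0, target_class].backward()

# Absolute input gradients: a crude per-feature importance estimate.
saliency = x.grad.abs()
print(saliency)
```

Methods discussed at the workshop typically go well beyond such gradient baselines, for instance by also evaluating how faithfully the resulting attribution scores reflect the model's actual decision process.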
Schedule
TIME | EVENT & PRESENTERS |
---|---|
8:15 am - 8:30 am | Introduction by organizers |
8:30 am - 9:00 am | Invited talk I: Speaker I (Institution I) |
9:00 am - 9:30 am | Invited talk II: Speaker II (Institution II) |
9:30 am - 10:30 am | Coffee Break & Posters Session I |
10:30 am - 11:00 am | Invited talk III: Speaker III (Institution III) |
11:00 am - 11:30 am | Invited talk IV: Speaker IV (Institution IV) |
11:30 am - 12:15 pm | Panel Discussion |
12:15 pm - 1:30 pm | Lunch Break |
1:30 pm - 2:00 pm | Invited talk V: Speaker V (Institution V) |
2:00 pm - 2:30 pm | Invited talk VI: Speaker VI (Institution VI) |
2:30 pm - 3:30 pm | Coffee Break & Posters Session II |
3:30 pm - 4:00 pm | Invited talk VII: Speaker VII (Institution VII) |
4:00 pm - 4:30 pm | Invited talk VIII: Speaker VIII (Institution VIII) |
4:30 pm - 5:15 pm | Panel Discussion |
5:15 pm - 5:30 pm | Closing Remarks |
This year, ICLR is discontinuing the separate “Tiny Papers” track and is instead requiring each workshop to accept short paper submissions (3–5 pages in ICLR format, with the exact page limit determined by each workshop), with an eye towards inclusion; see the Call For Tiny Papers for more details. Authors of these papers will be earmarked for potential funding from ICLR, but must submit a separate application for Financial Assistance, which evaluates their eligibility. This application for Financial Assistance to attend ICLR 2025 will become available on the ICLR 2025 website at the beginning of February and close on March 2nd.
Milestone | Date |
---|---|
Submission Open | January 17, 2025, AoE |
Submission Deadline | February 10, 2025, AoE |
Decision Notification | March 5, 2025, AoE |
Workshop Date | April 27, 2025 |
We rely on our reviewers for the quality of the workshop program. Please fill out this form if you are interested in being a reviewer for the ICLR 2025 Workshop on XAI4Science. The review period will run February 10–28, with emergency reviews the following week.
Thank you very much for your willingness to support the workshop in this manner!
National University of Singapore
National University of Singapore
RIKEN-AIP
RIKEN-AIP
Trinity College Dublin
Cohere For AI
Fraunhofer Heinrich Hertz Institute
Fraunhofer Heinrich Hertz Institute
Oak Ridge National Laboratory
University of Geneva