Join the next RISE-MICCAI Tutorial - March 31, 2026
Sunday 22nd March 2026

Join the next RISE-MICCAI Tutorial:
Multi-Modal Explainability (XAI) for Medical Imaging with Captum
Tuesday, March 31, 2026 at 10:00 am EDT / 4:00 pm CEST
Abstract:
In the high-stakes world of clinical practice, AI models cannot operate as "black boxes." Explainable AI (XAI) is the bridge between algorithmic prediction and clinical trust, providing the transparency necessary for medical professionals to validate AI-driven insights. While XAI techniques are well-established for unimodal data, their application in multi-modal learning remains a frontier. Medical diagnosis rarely relies on a single source; it thrives on the fusion of various signals.
This session dives into the complexities of explaining models that process diverse data streams simultaneously—such as MRI scans coupled with patient Electronic Health Records (EHR). Using Captum, PyTorch's powerful library for model interpretability, we will move beyond simple heatmaps to understand how models weigh and integrate multi-modal information.
We will begin with a concise theoretical overview of XAI taxonomies (Attribution, Perturbation, and Gradient-based methods) before getting hands-on with three distinct medical use cases:
- Regression: Predicting biomarkers or disease progression using multi-sequence imaging.
- Classification: Diagnosing pathology by fusing image modalities (e.g., CT and PET) or images with tabular clinical data.
- Segmentation: Interpreting the "why" behind pixel-level masks in complex anatomical structures.
Each case study utilizes open-source multi-modal datasets, ensuring you can replicate and extend these experiments in your own research.
Key Takeaways
- XAI Taxonomy Mastery: Understand the strengths and limitations of different interpretability families (e.g., Integrated Gradients, DeepLift, and Occlusion).
- Multi-Modal Fusion Analysis: Learn how to attribute importance scores across heterogeneous inputs (Image + Tabular).
- Captum Proficiency: Gain practical experience using the Captum library to interpret complex PyTorch architectures.
- Clinical Validation Framework: Develop a mindset for evaluating whether an explanation is "clinically grounded" or merely a visual artifact.
Presenter: Rachid Zeghlache, AI Researcher
Rachid Zeghlache is an AI researcher specializing in deep learning for medical imaging, with a focus on how disease evolves over time. He holds a Ph.D. from the University of Western Brittany, and his work spans longitudinal learning, generative AI, and explainable AI, building models that don't just analyze a single scan but track and predict patient trajectories across multiple visits and modalities.
He holds a European patent for a method of predicting disease progression and has co-organized two MICCAI data challenges, including the MARIO AMD progression challenge.
Beyond the lab, Rachid is passionate about closing the gap between AI research and clinical reality. His current interests center on augmented academic research and clinical workflow automation using agentic AI: designing intelligent systems that don't just support researchers and clinicians but actively participate in diagnostic reasoning, longitudinal monitoring, and decision support within real-world healthcare pipelines.
Registration is free and open to everyone.