SIG-FAIMI


Special Interest Group on Fairness of AI in Medical Imaging

Over the last few years, the research community working on fairness, equity and accountability in machine learning has highlighted the potential risks posed by biased systems in a variety of application scenarios, ranging from face recognition to neural translation models and job hiring assistants. A large body of research has shown that, for a variety of reasons, such as dataset construction, modelling choices, training strategies and even a lack of diversity in team composition, such machine learning systems can be biased with respect to demographic attributes like gender, ethnicity, age or geographical distribution, exhibiting unequal behaviour on disadvantaged or underrepresented subpopulations. While fairness in machine learning has been extensively studied in decision-making scenarios like job hiring, credit scoring and criminal justice, as well as in computer vision applications, it was not until recently that researchers started to study and characterize bias, and to design mitigation strategies, for systems in medical image computing (MIC) and computer assisted interventions (CAI).

Mission:

SIG-FAIMI aims to raise awareness of potential fairness issues that can emerge in the context of computerized medical imaging and computer assisted interventions. It will act as a forum within the MICCAI community to discuss such issues from a comprehensive perspective, covering not only methodological advances to diagnose and mitigate bias, but also its consequences in the clinical context. The SIG will raise awareness in the medical imaging community and provide a venue to build consensus and guidelines on how to address these issues.

Goals: 

  • Engage actively with groups working on fairness in AI, both internal and external to MICCAI.
  • Promote research within the MICCAI community on topics related to fairness of AI in medical imaging, and raise awareness of the importance of incorporating fairness considerations into our daily research practices.
  • Expand the MICCAI community and conference attendance to include, but not be limited to, legal and ethics experts. This will include reaching out to experts in the areas of ethics and the legal/regulatory considerations of fairness.
  • Increase the diversity of MICCAI workshops for those interested, by providing support, structure and access to the wider scientific community.
  • Create effective communication among the different communities through a web page listing useful resources for learning about fairness of AI in medical imaging, and through social media.

Board Members:

Esther Puyol-Antón, President, esther.puyol_anton@kcl.ac.uk
Enzo Ferrante, Vice-president, eferrante@sinc.unl.edu.ar
Ben Glocker, Treasurer, b.glocker@imperial.ac.uk
Veronika Cheplygina, vech@itu.dk
Aasa Feragen, afhar@dtu.dk
Melanie Ganz-Benjaminsen, ganz@di.ku.dk
Andrew King, andrew.king@kcl.ac.uk
Eike Petersen, ewipe@dtu.dk