What Does AI Mean for Radiologists / Fairness in Diagnoses
Judy Wawira Gichoya
Emory University School of Medicine
Hiding in Plain Sight – What does AI’s ability to detect patterns not visible to radiologists mean?
Recent papers have demonstrated the superhuman ability of AI models to predict demographics (including self-reported race, age, and sex), biologic age, ICD codes, and healthcare costs from X-ray images. While model performance in these cases is surprisingly good for tasks that are difficult for radiologists, challenges of model explainability make it difficult to harness this ability for patient care. In this talk we will review examples of cases where AI can detect “hidden signals” in X-ray images, assess the generalizability of these models to new data to determine their utility, and lay out a research roadmap to harness the ability of these models for patient care.
Haoran Zhang
MIT
Group Fairness in Chest X-ray Diagnosis: Helpful or Harmful?
Machine learning models are increasingly deployed in real-world clinical environments. However, these models often exhibit disparate performance between population groups, potentially leading to inequitable and discriminatory predictions. In this primer, we will discuss what it means for a model to be "fair" in the clinical setting by studying two algorithmic fairness definitions: group fairness and minimax fairness. We will analyze deep learning models for disease diagnosis using chest X-rays through the lens of these two definitions. We will then discuss algorithmic interventions for achieving fairness, finding that they can have serious unintended consequences. Next, we will specialize our analysis to the spurious correlation scenario, where models may use demographic attributes as shortcuts. Finally, we question what the appropriate definition of fairness is in the clinical context, and advocate for investigating bias in the data whenever possible, as opposed to blindly applying algorithmic interventions.
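To make the two definitions concrete, here is a minimal sketch (function names and toy data are hypothetical, not from the talk): group fairness is measured as the largest true-positive-rate gap between demographic groups, while minimax fairness looks only at the worst-off group's error rate.

```python
# Toy illustration of two fairness definitions on made-up model outputs.
import numpy as np

def group_fairness_gap(y_true, y_pred, groups):
    """Equal-opportunity-style gap: max difference in true-positive rate
    across groups (0 means perfectly group-fair by this metric)."""
    tprs = []
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        tprs.append(y_pred[positives].mean())
    return max(tprs) - min(tprs)

def minimax_worst_group_error(y_true, y_pred, groups):
    """Minimax fairness focuses on the worst-off group:
    report the highest per-group error rate."""
    errs = []
    for g in np.unique(groups):
        mask = groups == g
        errs.append((y_pred[mask] != y_true[mask]).mean())
    return max(errs)

# Hypothetical data: two demographic groups, binary diagnosis.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1, 0, 1])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(group_fairness_gap(y_true, y_pred, groups))        # → 0.5
print(minimax_worst_group_error(y_true, y_pred, groups)) # → 0.25
```

Note that the two criteria can disagree: here the groups have equal error rates (so a minimax view sees no disparity), yet the model misses half of the positive cases in group 0 while catching all of them in group 1.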