AI methods can ease the workload of medical experts in data interpretation, but they often encode societal biases, especially when trained on homogeneous datasets such as mammogram collections. These biases, rooted in human decisions and historical inequities, lead to reduced AI performance for diverse populations. This lecture will focus on identifying and mitigating such biases in AI, emphasizing the crucial role of human judgment in fair and effective AI-supported decision-making.
Learning Objectives:
Upon completion, participants will be able to understand some of the fundamental principles and challenges of medical image analysis methods.
Upon completion, participants will be able to develop a critical understanding of advanced AI-based methods and appreciate what they can and cannot do in the context of medical applications. Concepts of uncertainty in AI predictions will be discussed.
Upon completion, participants will be able to understand some of the causes of bias in AI methods, including rater-based and population-based biases. Mitigation strategies to reduce bias in AI will be discussed.