AI medical imaging models are biased: they can be better at predicting a patient's race or gender than at diagnosing disease.
The growing use of AI in medical imaging makes it essential to maximize accuracy and minimize bias. The term bias encompasses partiality, tendency, preference, and systematically flawed thinking patterns. In people, biases can be conscious or unconscious; in AI models, bias degrades overall performance accuracy. In machine learning, algorithms learn from massive amounts of training data rather than from explicitly hard-coded instructions. Several factors affect the susceptibility of AI models to bias.
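The distinction between learned and hard-coded behavior can be made concrete with a toy sketch (not from the article, and deliberately simplified): a 1-nearest-neighbor classifier whose predictions come entirely from its training examples, so any skew in those examples is reproduced at prediction time. The data and labels below are hypothetical.

```python
def predict(train, x):
    """Return the label of the training example whose feature is closest to x.
    Nothing is hard-coded: behavior comes entirely from the training data."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Hypothetical training set: (feature value, diagnosis label).
# Group A (features near 0) is well represented; group B (features near 10)
# contributes only a single, healthy example.
train = [
    (0.0, "healthy"),
    (1.0, "healthy"),
    (2.0, "disease"),
    (3.0, "disease"),
    (9.5, "healthy"),  # the only group-B example
]

# A group-B patient (feature 10.0) is labeled "healthy" no matter what,
# because the model never saw a diseased group-B example.
print(predict(train, 10.0))  # prints "healthy"
```

The point of the sketch is that the model is not "wrong" by its own logic; it faithfully reflects the imbalance in its training data, which is one way bias enters learned systems.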
This was not surprising, given that two years prior, Ghassemi, Gichoya, and Zhang were among the co-authors of a separate MIT and Harvard Medical School study showing that AI deep learning models can predict a person's self-reported race from medical image pixel data with a high degree of accuracy. That 2022 study demonstrated that AI models readily learned to identify self-reported race from medical images.