An FDA-approved AI algorithm was more likely to wrongly indicate the presence of cancer in Black women compared to White, Hispanic and Asian women, according to a new study.
“There are few demographically diverse databases for AI algorithm training, and the FDA does not require diverse datasets for validation,” Nguyen said. “Because of the differences among patient populations, it’s important to investigate whether AI software can accommodate and perform at the same level for different patient ages, races and ethnicities.”
False positives were significantly more likely in Black patients and significantly less likely in Asian patients than in White patients. Patients aged 71 to 80 were also more likely to receive false-positive results than younger patients.