Muralidharan Vijaytha, Adewale Boluwatife Adeleye, Huang Caroline J, Nta Mfon Thelma, Ademiju Peter Oluwaduyilemi, Pathmarajah Pirunthan, Hang Man Kien, Adesanya Oluwafolajimi, Abdullateef Ridwanullah Olamide, Babatunde Abdulhammed Opeyemi, Ajibade Abdulquddus, Onyeka Sonia, Cai Zhou Ran, Daneshjou Roxana, Olatunji Tobi
Department of Dermatology, Stanford University School of Medicine, Stanford, CA, 94304, USA.
Babcock University Teaching Hospital, Ilishan-Remo, Ogun State, Nigeria.
NPJ Digit Med. 2024 Oct 3;7(1):273. doi: 10.1038/s41746-024-01270-x.
Machine learning and artificial intelligence (AI/ML) models in healthcare may exacerbate health biases. Regulatory oversight is critical for evaluating the safety and effectiveness of AI/ML devices in clinical settings. We conducted a scoping review of the 692 AI/ML-enabled medical devices approved by the FDA from 1995 to 2023 to examine transparency, safety reporting, and sociodemographic representation. Only 3.6% of approvals reported race/ethnicity, 99.1% provided no socioeconomic data, and 81.6% did not report the age of study subjects. Only 46.1% provided comprehensive, detailed results of performance studies, and only 1.9% included a link to a scientific publication with safety and efficacy data. Only 9.0% contained a prospective study for post-market surveillance. Despite the growing number of market-approved medical devices, our data show that FDA reporting remains inconsistent. Demographic and socioeconomic characteristics are underreported, exacerbating the risk of algorithmic bias and health disparity.