Department of Internal Medicine Sciences, Gülhane Faculty of Medicine, University of Health Sciences, Ankara, Turkiye.
Commission of National Education, Culture, Youth and Sports of the Parliament, Ankara, Turkiye.
Turk J Med Sci. 2024 May 20;54(3):483-492. doi: 10.55730/1300-0144.5814. eCollection 2024.
The aim of this study is to examine the risks associated with the use of artificial intelligence (AI) in medicine and to offer policy recommendations that reduce these risks and maximize the benefits of AI technology. AI is a multifaceted technology. If harnessed effectively, it has the capacity to significantly shape the future of humanity in healthcare, as well as in several other areas. However, the rapid spread of this technology also raises significant ethical, legal, and social issues. This study examines the potential dangers of AI integration in medicine by reviewing the current scientific literature and exploring strategies to mitigate these risks. Biases in the datasets used to build AI systems can lead to inequities in healthcare. Training data that narrowly represents a single demographic group can cause AI systems to produce biased results for individuals outside that group. In addition, limited explainability and accountability in AI systems can make it difficult for healthcare professionals to understand and evaluate AI-generated diagnoses or treatment recommendations. This could jeopardize patient safety and lead to the selection of inappropriate treatments. Ensuring the security of personal health information will be critical as AI systems become more widespread. Therefore, strengthening patient privacy and security protocols for AI systems is imperative. This study offers recommendations for reducing the risks associated with the growing use of AI systems in medicine. These include increasing AI literacy, implementing a participatory society-in-the-loop management strategy, and establishing continuous education and auditing mechanisms. Integrating ethical principles and cultural values into the design of AI systems can help reduce healthcare disparities and improve patient care.
Implementing these recommendations can help ensure the efficient and equitable use of AI systems in medicine, improve the quality of healthcare services, and safeguard patient safety.