Weidener Lukas, Fischer Michael
Research Unit for Quality and Ethics in Health Care, UMIT TIROL - Private University for Health Sciences and Health Technology, Hall in Tirol, Austria.
JMIR AI. 2024 Jan 12;3:e51204. doi: 10.2196/51204.
The integration of artificial intelligence (AI)-based applications in the medical field has increased significantly, offering potential improvements in patient care and diagnostics. Alongside these advancements, however, there is growing concern about ethical considerations in the development of these technologies, such as bias, informed consent, and trust.
This study aims to assess the role of ethics in the development of AI-based applications in medicine. Furthermore, this study focuses on the potential consequences of neglecting ethical considerations in AI development, particularly their impact on patients and physicians.
Qualitative content analysis was used to analyze the responses from expert interviews. Experts were selected based on at least 5 years of involvement in research on, or practical development of, AI-based applications in medicine; 7 experts met this criterion and were included in the study.
The analysis revealed 3 main categories and 7 subcategories reflecting a wide range of views on the role of ethics in AI development. Although some experts view ethics as fundamental, others prioritize performance and efficiency, and some perceive ethics as a potential obstacle to technological progress. This divergence of perspectives underscores the subjectivity and complexity of integrating ethics into the development of AI in medicine and reflects the inherently multifaceted nature of the issue.
Despite methodological limitations that constrain the generalizability of the results, this study underscores the critical importance of consistent and integrated ethical consideration in AI development for medical applications. It advocates further research into effective strategies for ethical AI development, emphasizing the need for transparent and responsible practices, consideration of diverse data sources, physician training, and the establishment of comprehensive ethical and legal frameworks.