Luckner Steffen, Lauer Wolfgang
Referat 124 - Medizinproduktesicherheit, Bundesministerium für Gesundheit, Mauerstraße 29, 10117, Berlin, Deutschland.
Bundesinstitut für Arzneimittel und Medizinprodukte (BfArM), Bonn, Deutschland.
Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz. 2025 Aug;68(8):854-861. doi: 10.1007/s00103-025-04091-9. Epub 2025 Jul 4.
The use of artificial intelligence (AI) in healthcare offers great potential but also presents regulatory challenges. The EU Artificial Intelligence Act (AIA), the Medical Device Regulation (MDR), and the Regulation on in vitro diagnostic medical devices (IVDR) establish the framework for the safe and ethical use of AI-based medical devices in Europe.

When assessing the regulatory classification of AI software or of products with AI components, it must be determined, based on the intended purpose defined by the manufacturer and the classification criteria of the MDR or IVDR, whether the product qualifies as a medical device. If so, all relevant provisions of the AIA apply in addition to those of the MDR/IVDR.

Both the MDR/IVDR and the AIA assign products to different risk classes according to their respective risk potential and define specific requirements accordingly. However, the classification criteria and their implications differ between the regulatory frameworks. As a result, even medical devices in lower MDR/IVDR risk classes whose conformity assessment involves a notified body are considered high-risk products under the AIA. It is therefore crucial to differentiate precisely between the respective regulations.

The harmonization of regulatory requirements remains a challenge, particularly because classification under the AIA and under the MDR/IVDR is based on different criteria despite some similar terminology. Open questions persist, especially regarding lowest-risk medical devices and self-learning AI systems. Further development of regulatory guidance is necessary to ensure the safe and efficient integration of innovative AI applications into healthcare. The German Federal Ministry of Health (BMG) and the Federal Institute for Drugs and Medical Devices (BfArM) are actively working at both the national and European levels to develop pragmatic solutions.