Lamadé Annegret, Beekmann Dustin, Eickhoff Simon, Grefkes Christian, Tscherpel Caroline, Meyding-Lamadé Uta, Bassa Burc
Philipps-Universität Marburg, Marburg, Deutschland.
Watson Farley & Williams LLP, Hamburg, Deutschland.
Nervenarzt. 2024 Mar;95(3):242-246. doi: 10.1007/s00115-023-01573-6. Epub 2023 Dec 12.
The ability of some artificial intelligence (AI) systems to evolve autonomously, together with the sometimes very limited possibilities to comprehend their decision-making processes, presents new challenges to our legal system. At the European level this has led to reform efforts, among them the proposal for a European AI regulation, which promises to close regulatory gaps in existing product safety law through cross-sectoral, AI-specific safety requirements. A prerequisite, however, would be that the EU legislator not only avoids duplications of and contradictions with existing safety requirements but also refrains from imposing exaggerated and unattainable demands. If this were taken into consideration, the new safety requirements could also be used to specify the undefined standard of care in liability law. Nevertheless, challenges regarding provability remain unresolved, posing a risk of rendering the legal protection efforts of the aggrieved party ineffective. It remains to be seen whether the EU legislator will address this need for reform with the reform of product liability law recently proposed by the Commission.