Cavique Luís
Universidade Aberta, DCeT and Lasige, FCUL, Lisboa, Portugal.
Front Artif Intell. 2024 Aug 21;7:1439702. doi: 10.3389/frai.2024.1439702. eCollection 2024.
Over the last decade, investment in artificial intelligence (AI) has grown significantly, driven by technology companies and the demand for PhDs in AI. However, new challenges have emerged, such as the 'black box' problem and bias in AI models. Several approaches have been developed to mitigate these problems. Responsible AI focuses on the ethical development of AI systems, considering their social impact. Fair AI seeks to identify and correct algorithmic biases, promoting equitable decisions. Explainable AI aims to create transparent models that allow users to interpret results. Finally, Causal AI emphasizes identifying cause-and-effect relationships and plays a crucial role in creating more robust and reliable systems, thereby promoting fairness and transparency in AI development. Responsible, Fair, and Explainable AI each have weaknesses. Causal AI, however, attracts the least criticism, offering reassurance about the ethical development of AI.