Caitlin Curtis, Nicole Gillespie, Steven Lockey
School of Business, The University of Queensland, Brisbane, QLD 4072 Australia.
Centre for Policy Futures, The University of Queensland, Brisbane, QLD 4072 Australia.
AI Ethics. 2023;3(1):145-153. doi: 10.1007/s43681-022-00163-7. Epub 2022 May 24.
We argue that a perfect storm of five conditions heightens the risk of harm to society from artificial intelligence: (1) the powerful, invisible nature of AI; (2) low public awareness and AI literacy; (3) rapid, large-scale deployment of AI; (4) insufficient regulation; and (5) the gap between trustworthy AI principles and practice. Fit-for-purpose regulation and public AI literacy programs have been recommended to prevent harm, but education and government regulation alone will not suffice: organizations that deploy AI must play a central role in creating and deploying AI in line with trustworthy AI principles, and in taking accountability for mitigating its risks.