Makridis Christos, Hurley Seth, Klote Mary, Alterovitz Gil
National Artificial Intelligence Institute, Department of Veterans Affairs, Washington, DC, United States.
Stanford Digital Economy Lab, Stanford University, Stanford, CA, United States.
JMIR Med Inform. 2021 Jun 2;9(6):e28921. doi: 10.2196/28921.
Despite widespread agreement that artificial intelligence (AI) offers significant benefits for individuals and society at large, there are also serious challenges to overcome with respect to its governance. Recent policymaking has focused on establishing principles for the trustworthy use of AI. Adhering to these principles is especially important for ensuring that the development and application of AI improves economic and social welfare, including among vulnerable groups and veterans.
We explore the recently developed principles for trustworthy AI and how they can be readily applied at scale to vulnerable groups that are potentially less likely to benefit from technological advances.
Using the US Department of Veterans Affairs (VA) as a case study, we explore the principles of trustworthy AI that are of particular interest for vulnerable groups and veterans.
We focus on three principles: (1) designing, developing, acquiring, and using AI so that the benefits of its use significantly outweigh the risks and the risks are assessed and managed; (2) ensuring that the application of AI occurs in well-defined domains and is accurate, effective, and fit for the intended purposes; and (3) ensuring that the operations and outcomes of AI applications are sufficiently interpretable and understandable by all subject matter experts, users, and others.
These principles, and the applications described here, extend to vulnerable groups more generally; adherence to them can allow the VA and other organizations to continue modernizing their technology governance, leveraging the gains of AI while simultaneously managing its risks.