Legal Design Lab, Stanford Law School, Stanford, CA, USA.
Philos Trans A Math Phys Eng Sci. 2024 Apr 15;382(2270):20230157. doi: 10.1098/rsta.2023.0157. Epub 2024 Feb 26.
As more groups consider how AI may be used in the legal sector, this paper envisions how companies and policymakers can prioritize the perspectives of community members as they design AI and the policies around it. It presents findings from structured interviews and design sessions with community members, who were asked whether, how, and why they would use AI tools powered by large language models to respond to legal problems such as receiving an eviction notice. Respondents reviewed options for simple versus complex interfaces for AI tools and expressed how they would want to engage with an AI tool to resolve a legal problem. These empirical findings offer direction that can counterbalance proposals about the public interest in legal AI advanced by domain experts such as attorneys, court officials, advocates and regulators. By hearing directly from community members about how they want to use AI for civil justice tasks, what risks concern them, and the value they would find in different kinds of AI tools, this research helps ensure that people's own points of view are understood and prioritized, rather than relying solely on domain experts' assertions about people's needs and preferences around legal help AI. This article is part of the theme issue 'A complexity science approach to law and governance'.