Khoury College of Computer Sciences, Northeastern University, Boston, MA, United States.
J Med Internet Res. 2021 Nov 9;23(11):e30704. doi: 10.2196/30704.
Prior studies have demonstrated the safety risks that arise when patients and consumers use conversational assistants such as Apple's Siri and Amazon's Alexa to obtain medical information.
The aim of this study is to evaluate 2 approaches to reducing the likelihood that patients or consumers will act on potentially harmful medical information received from conversational assistants.
Participants were given medical problems to pose to conversational assistants that had previously been demonstrated to elicit potentially harmful recommendations. Each conversational assistant's response was randomly varied to include either a correct or an incorrect paraphrase of the query, and either to include or omit a disclaimer message telling the participants that they should not act on the advice without first talking to a physician. The participants were then asked what actions they would take based on the interaction and how likely they were to take those actions. The reported actions were recorded and analyzed, and the participants were interviewed at the end of each interaction.
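As a concrete illustration, the following is a minimal sketch in Python (not the study's actual software; all function names, variable names, and message text are hypothetical) of the randomized manipulation described above: each response pairs the scripted advice with a randomly selected correct or incorrect paraphrase of the query, and the disclaimer is independently included or omitted.

```python
import random

# Hypothetical disclaimer text; the study's exact wording may differ.
DISCLAIMER = ("Do not act on this advice without first talking "
              "to a physician.")

def build_response(advice: str, correct_paraphrase: str,
                   incorrect_paraphrase: str,
                   rng: random.Random) -> str:
    """Assemble one randomized assistant response for the 2x2 design."""
    # Factor 1: correct vs incorrect paraphrase of the query.
    paraphrase = rng.choice([correct_paraphrase, incorrect_paraphrase])
    parts = [f"I understood your question as: {paraphrase}.", advice]
    # Factor 2: disclaimer present or absent, varied independently.
    if rng.random() < 0.5:
        parts.append(DISCLAIMER)
    return " ".join(parts)

print(build_response(
    advice="You could try an over-the-counter antihistamine.",
    correct_paraphrase="what to take for seasonal allergies",
    incorrect_paraphrase="what to take for a seasonal cold",
    rng=random.Random(42),
))
```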
A total of 32 participants completed the study, each interacting with 4 conversational assistants. The participants were on average aged 42.44 (SD 14.08) years, 53% (17/32) were women, and 66% (21/32) were college educated. Participants who heard a correct paraphrase of their query were significantly more likely to state that they would follow the medical advice provided by the conversational assistant (χ²=3.1; P=.04). Participants who heard a disclaimer message were significantly more likely to say that they would contact a physician or health professional before acting on the medical advice received (χ²=43.5; P=.001).
Designers of conversational systems should consider incorporating both disclaimers and feedback on query understanding in response to user queries for medical advice. Unconstrained natural language input should not be used in systems designed specifically to provide medical advice.
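A minimal sketch (illustrative only; the topic menu, names, and messages below are assumptions, not from the study) of how both recommendations could be combined: input is constrained to a fixed menu of vetted topics rather than unconstrained natural language, the selected topic is restated as feedback on query understanding, and a disclaimer is appended to every answer.

```python
# Hypothetical menu of vetted topics, each mapped to reviewed content.
VETTED_TOPICS = {
    "1": ("relieving mild seasonal allergy symptoms",
          "An over-the-counter antihistamine is commonly used."),
    "2": ("treating a low-grade fever at home",
          "Rest and fluids are commonly recommended."),
}

DISCLAIMER = ("Please talk to a physician or other health "
              "professional before acting on this information.")

def answer(menu_choice: str) -> str:
    """Answer only vetted, menu-selected queries; never free text."""
    entry = VETTED_TOPICS.get(menu_choice)
    if entry is None:
        return "I can only answer questions selected from the menu."
    topic, advice = entry
    # Feedback on query understanding: restate the chosen topic,
    # then deliver the reviewed advice followed by the disclaimer.
    return f"You asked about {topic}. {advice} {DISCLAIMER}"

print(answer("1"))
```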