Nadarzynski Tom, Knights Nicky, Husbands Deborah, Graham Cynthia, Llewellyn Carrie D, Buchanan Tom, Montgomery Ian, Rodriguez Alejandra Soruco, Ogueri Chimeremumma, Singh Nidhi, Rouse Evan, Oyebode Olabisi, Das Ankit, Paydon Grace, Lall Gurpreet, Bulukungu Anathoth, Yanyali Nur, Stefan Alexandra, Ridge Damien
School of Social Sciences, University of Westminster, London, United Kingdom.
Kinsey Institute, Indiana University, Bloomington, Indiana, United States of America.
PLOS Digit Health. 2025 Feb 13;4(2):e0000724. doi: 10.1371/journal.pdig.0000724. eCollection 2025 Feb.
The digitalisation of healthcare has provided new ways to address disparities in sexual health outcomes that particularly affect ethnic and sexual minorities. Conversational artificial intelligence (AI) chatbots can provide personalised health education and refer users for appropriate medical consultations. We aimed to explore design principles of a chatbot-assisted culturally sensitive self-assessment intervention based on the disclosure of health-related information.
In 2022, an online survey was conducted among an ethnically diverse UK sample (N = 1,287) to identify the level and type of health-related information disclosure to sexual health chatbots, and reactions to chatbots' risk appraisal. Follow-up interviews (N = 41) further explored perceptions of chatbot-led health assessment to identify aspects related to acceptability and utilisation. Datasets were analysed using one-way ANOVAs, linear regression, and thematic analysis.
Participants had neutral-to-positive attitudes towards chatbots and were comfortable disclosing demographic and sensitive health information. Chatbot awareness, previous experience and positive attitudes towards chatbots predicted information disclosure. Qualitatively, four main themes were identified: "Chatbot as an artificial health advisor", "Disclosing information to a chatbot", "Ways to facilitate trust and disclosure", and "Acting on self-assessment".
Chatbots were acceptable for health self-assessment among this sample of ethnically diverse individuals. Most users reported being comfortable disclosing sensitive and personal information, but user anonymity is key to engagement with chatbots. As this technology becomes more advanced and widely available, chatbots could potentially become supplementary tools for health education and screening eligibility assessment. Future research is needed to establish their impact on screening uptake and access to health services among minoritised communities.