Kühne Simon, Jacobsen Jannes, Legewie Nicolas, Dollmann Jörg
Faculty of Sociology, Bielefeld University, Bielefeld, Germany.
Data-Method-Monitoring Cluster, German Center for Integration and Migration Research, Berlin, Germany.
J Med Internet Res. 2025 May 27;27:e70179. doi: 10.2196/70179.
The integration of artificial intelligence (AI) holds substantial potential to alter diagnostics and treatment in health care settings. However, public attitudes toward AI, including trust and risk perception, are key to its ethical and effective adoption. Despite growing interest, empirical research on the factors shaping public support for AI in health care (particularly in large-scale, representative contexts) remains limited.
This study aimed to investigate public attitudes toward AI in patient health care, focusing on how AI attributes (autonomy, costs, reliability, and transparency) shape perceptions of support, risk, and personalized care. In addition, it examined the moderating role of sociodemographic characteristics (gender, age, educational level, migration background, and subjective health status) in these evaluations. Our study offers novel insights into the relative importance of AI system characteristics for public attitudes and acceptance.
We conducted a factorial vignette experiment with a probability-based survey of 3030 participants from Germany's general population. Respondents were presented with hypothetical scenarios involving AI applications in diagnosis and treatment in a hospital setting. Linear regression models assessed the relative influence of AI attributes on the dependent variables (support, risk perception, and personalized care), with additional subgroup analyses to explore heterogeneity by sociodemographic characteristics.
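To illustrate the analytic approach described here, the following minimal sketch fits a linear regression of a 1-7 support rating on treatment-coded vignette attributes. All column names and the simulated data are hypothetical placeholders; the study's actual data, attribute levels, and model specification may differ.

```python
# Illustrative sketch only: OLS regression of support ratings on
# dummy-coded vignette attributes (autonomy, costs, reliability,
# transparency), mimicking a factorial vignette analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # hypothetical number of vignette ratings

# Randomly assigned attribute levels, as in a factorial vignette design.
df = pd.DataFrame({
    "autonomy":     rng.choice(["collaborative", "autonomous"], n),
    "costs":        rng.choice(["lower", "equal", "higher"], n),
    "reliability":  rng.choice(["fewer_errors", "same_errors", "more_errors"], n),
    "transparency": rng.choice(["traceable", "nontraceable"], n),
})
# Simulated 1-7 support rating (purely illustrative response process).
df["support"] = rng.integers(1, 8, n)

# Treatment-coded factors: each coefficient gives the shift in support
# for that attribute level relative to the reference category.
model = smf.ols(
    "support ~ C(autonomy) + C(costs) + C(reliability) + C(transparency)",
    data=df,
).fit()
print(model.summary())
```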
Mean values between 4.2 and 4.4 on a 1-7 scale indicate a generally neutral to slightly negative stance toward AI integration in terms of general support, risk perception, and personalized care expectations, with responses spanning the full scale from strong support to strong opposition. Among the 4 dimensions, reliability emerges as the most influential factor (up to 10.5% of explained variance [EV]). Respondents expect AI to not only prevent errors but also exceed current reliability standards, while strongly disapproving of nontraceable systems (transparency is another important factor, accounting for up to 4% of EV). Costs and autonomy play a comparatively minor role (up to 1.5% and 1.3% of EV, respectively), with preferences favoring collaborative AI systems over autonomous ones, and higher costs generally leading to rejection. Heterogeneity analysis reveals limited sociodemographic differences, with education and migration background influencing attitudes toward transparency and autonomy, and gender differences primarily affecting cost-related perceptions. Overall, attitudes do not substantially differ between AI applications in diagnosis versus treatment.
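One plausible way to obtain attribute-level shares of explained variance (EV) like those reported above is to compare the R-squared of the full model with that of models omitting one attribute at a time. The abstract does not specify the exact decomposition used, so the sketch below (continuing from the simulated data frame df in the previous sketch) is an illustrative approximation, not the study's method.

```python
# Hedged sketch: approximate each attribute's EV share as the drop in
# R-squared when that attribute is removed from the full model.
import statsmodels.formula.api as smf

attributes = ["autonomy", "costs", "reliability", "transparency"]
full_formula = "support ~ " + " + ".join(f"C({a})" for a in attributes)
full_r2 = smf.ols(full_formula, data=df).fit().rsquared

for attr in attributes:
    reduced = [a for a in attributes if a != attr]
    reduced_formula = "support ~ " + " + ".join(f"C({a})" for a in reduced)
    reduced_r2 = smf.ols(reduced_formula, data=df).fit().rsquared
    print(f"{attr}: EV share ~ {100 * (full_r2 - reduced_r2):.1f} percentage points")
```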
Our study fills a critical research gap by identifying the key factors that shape public trust and acceptance of AI in health care, particularly reliability, transparency, and patient-centered approaches. Our findings provide evidence-based recommendations for policy makers, health care providers, and AI developers to enhance trust and accountability, key concerns often overlooked in system development and real-world applications. The study highlights the need for targeted policy and educational initiatives to support the responsible integration of AI in patient care.