Chair of Sport and Health Management, Technical University of Munich, Campus D - Uptown Munich, Georg-Brauchle-Ring 60/62, 80992, Munich, Germany.
BMC Med Inform Decis Mak. 2022 Sep 13;22(1):240. doi: 10.1186/s12911-022-01986-4.
The goal of the study is to assess the downstream effects of who requests personal information from individuals for artificial intelligence (AI)-based healthcare research purposes, be it a pharmaceutical company (as an example of a for-profit organization) or a university hospital (as an example of a not-for-profit organization), as well as the boundary conditions of these effects, on individuals' likelihood to release personal information about their health. For the latter, the study considers two dimensions: the tendency to self-disclose (which should be high so that AI applications can reach their full potential) and the tendency to falsify (which should be low so that AI applications are based on valid and reliable data).
Three experimental studies were conducted with Amazon Mechanical Turk workers from the U.S. (n = 204, n = 330, and n = 328, respectively), using COVID-19 as the healthcare research context.
University hospitals (vs. pharmaceutical companies) scored higher on altruism and lower on egoism. Individuals were more willing to disclose data if they perceived that the requesting organization acts on altruistic motives (i.e., the motives function as gate openers). Individuals were more likely to protect their data by intending to provide false information when they perceived egoistic motives to be the main driver of the organization requesting their data (i.e., the motives function as a privacy protection tool). Two moderators, namely message appeal (Study 2) and message endorser credibility (Study 3), influenced both indirect pathways to the release of personal information.
The findings add to Communication Privacy Management Theory as well as Attribution Theory by suggesting motive-based pathways to the release of correct personal health data. Compared with not-for-profit organizations, for-profit organizations in particular are advised to match their message appeal to their organizational purpose (providing personal benefit) and to use high-credibility endorsers in order to reduce inherent disadvantages in motive perceptions.