King's College London, UK.
University of Lancaster, UK.
Public Underst Sci. 2021 Feb;30(2):196-211. doi: 10.1177/0963662520965490. Epub 2020 Oct 21.
This article reports how 18 UK and Canadian population health artificial intelligence researchers in Higher Education Institutions perceive the use of artificial intelligence systems in their research, and how this compares with their perceptions of the media portrayal of artificial intelligence systems. This is triangulated with a small scoping analysis of how UK and Canadian news articles portray artificial intelligence systems associated with health research and care. Interviewees were concerned about what they perceived as sensationalist reporting of artificial intelligence systems, a finding reflected in the media analysis. In line with Pickersgill's concept of 'epistemic modesty', they considered that artificial intelligence systems are better understood as non-exceptionalist methodological tools that are uncertain and unexciting. Adopting 'epistemic modesty' was sometimes hindered by the stakeholders to whom the research is disseminated, who may be less interested in hearing about the uncertainties of scientific practice, with implications for both research and policy.