Sukhera Javeed, Ahmed Hasan
Institute of Living, Hartford Hospital, Hartford, CT, United States.
Centre for Education Research and Innovation, Western University, London, ON, Canada.
JMIR Med Educ. 2022 Mar 30;8(1):e33934. doi: 10.2196/33934.
Teaching and learning about topics such as bias are challenging due to the emotional nature of bias-related discourse. However, emotions can be challenging to study in health professions education for numerous reasons. With the emergence of machine learning and natural language processing, sentiment analysis (SA) has the potential to bridge this gap.
To improve our understanding of the role of emotions in bias-related discourse, we developed and conducted an SA of bias-related discourse among health professionals.
We conducted a 2-stage quasi-experimental study. First, we developed an SA algorithm using an existing archive of interviews with health professionals about bias. SA refers to an analytic approach that evaluates the sentiment of textual data by assigning scores to its textual components and aggregating them into an overall sentiment value for the text. Next, we applied our SA algorithm to an archive of social media discourse on Twitter containing equity-related hashtags to compare sentiment between health professionals and the general population.
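To illustrate the general mechanism described above (scoring textual components and aggregating them into a sentiment label), the following is a minimal lexicon-based sketch in Python. The lexicon, neutral-band threshold, and label scheme here are hypothetical and are not the authors' algorithm, which is not specified in the abstract.

```python
# Minimal lexicon-based sentiment scorer (illustrative sketch only; the
# study's actual algorithm, lexicon, and thresholds are not described here).

from typing import Dict

# Hypothetical toy lexicon mapping tokens to polarity scores.
TOY_LEXICON: Dict[str, float] = {
    "unfair": -1.0, "harm": -1.0, "biased": -0.8,
    "fair": 1.0, "support": 0.8, "equity": 0.5,
}

def score_text(text: str, neutral_band: float = 0.1) -> str:
    """Assign scores to tokens, average them, and map to a sentiment label."""
    tokens = text.lower().split()
    scores = [TOY_LEXICON.get(tok.strip(".,!?"), 0.0) for tok in tokens]
    total = sum(scores) / max(len(tokens), 1)  # mean component score
    if total > neutral_band:
        return "positive"
    if total < -neutral_band:
        return "negative"
    return "neutral"

print(score_text("The policy is unfair and causes harm"))  # negative
print(score_text("We support equity in education"))        # positive
```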
When tested on the initial archive, our SA algorithm was highly accurate compared with human scoring of sentiment. An analysis of bias-related social media discourse demonstrated that tweets from health professionals (n=555) posting on professionally associated accounts were less neutral than tweets from the general population (n=6680) when discussing social issues (χ² [2, n=555]=35.455; P<.001), suggesting that health professionals attach more sentiment to their Twitter posts than does the general population.
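A comparison of sentiment label distributions between two groups, as reported above, can be carried out with a chi-square test of independence. The sketch below shows one way to do this in Python with SciPy; the counts are illustrative placeholders, not the study's data.

```python
# Hypothetical comparison of sentiment label distributions between health
# professional tweets and general-population tweets using a chi-square test.
# The counts below are illustrative and do not come from the study.

from scipy.stats import chi2_contingency

# Rows: groups; columns: counts of negative / neutral / positive tweets.
observed = [
    [180, 230, 145],     # health professionals (n=555), illustrative counts
    [1500, 3800, 1380],  # general population (n=6680), illustrative counts
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.3f}, p = {p:.4f}")
```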
The finding that health professionals are more likely to express and convey emotions regarding equity-related issues on social media has implications for teaching and learning about sensitive topics in health professions education. Such emotions must therefore be considered in the design, delivery, and evaluation of equity- and bias-related education.