Wihlidal Jacob G J, Wolter Nikolaus E, Propst Evan J, Lin Vincent, Au Michael, Amin Shaunak, Siu Jennifer M
Department of Otolaryngology-Head and Neck Surgery, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada.
Department of Otolaryngology-Head and Neck Surgery, Hospital for Sick Children, Toronto, Ontario, Canada.
Laryngoscope. 2025 Apr 14. doi: 10.1002/lary.32188.
Generative Artificial Intelligence (GAI) interfaces have rapidly become embedded across many societal domains. The widespread availability of GAI for drafting personal statements makes it difficult for evaluators to gauge an applicant's true writing ability and personal insight. This study compared the quality of GAI-generated personal statements with those written by successful applicants to otolaryngology-head and neck surgery (OHNS) residency programs, integrating statistical and qualitative thematic analyses.
Personal statements were collected from successful OHNS residency applicants. Characteristics extracted from the submitted statements were used to prompt ChatGPT 4.0 to generate GAI-written personal statements. All statements were blindly reviewed by 21 experienced evaluators, who rated authenticity, readability, personability, and overall quality on 10-point Likert scales. Thematic analysis of qualitative reviewer comments was conducted to capture deeper insight into evaluators' perceptions. Quantitative scores were compared using independent t-tests, and thematic coding was performed inductively in NVivo software.
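As a minimal sketch of the quantitative comparison described above (not the authors' analysis code), the snippet below runs an independent-samples t-test on hypothetical 10-point Likert ratings for a single domain; the scores and variable names are illustrative assumptions, not study data.

```python
# Minimal illustrative sketch (not the authors' code): compare hypothetical
# 10-point Likert ratings of GAI-generated vs. applicant-written statements
# with a two-sided independent-samples t-test, as described in the methods.
from scipy import stats

# Hypothetical ratings from 21 reviewers on one domain (e.g., overall quality).
gai_scores       = [8, 7, 8, 7, 9, 7, 8, 6, 8, 7, 8, 7, 7, 8, 9, 7, 6, 8, 7, 8, 7]
applicant_scores = [7, 6, 7, 8, 6, 7, 7, 5, 7, 6, 8, 7, 6, 7, 7, 6, 7, 8, 6, 7, 7]

t_stat, p_value = stats.ttest_ind(gai_scores, applicant_scores)
print(f"GAI mean = {sum(gai_scores)/len(gai_scores):.2f}, "
      f"applicant mean = {sum(applicant_scores)/len(applicant_scores):.2f}, "
      f"t = {t_stat:.2f}, p = {p_value:.3f}")
```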
GAI-generated personal statements significantly outperformed applicant-written statements across all assessed domains: authenticity (7.67 vs. 7.05, p = 0.002), readability (8.03 vs. 7.49, p = 0.002), personability (7.33 vs. 6.72, p = 0.004), and overall score (7.49 vs. 6.90, p = 0.005). Thematic analysis revealed that GAI statements were perceived as "well-constructed but generic," whereas applicant statements were often "verbose and lacked focus." Reviewers also raised concerns about the depth of personal insight and engagement in AI-generated statements.
GAI-generated personal statements were rated more favorably across all domains, raising critical questions about the future role of the personal statement in the residency application process. As AI continues to evolve within medical education, clear guidelines on its ethical use in residency applications are essential.
Level of Evidence: N/A.