Domenico Marrella, Su Jiang, Kyros Ipaktchi, Philippe Liverneaux
Department of Hand Surgery, Strasbourg University Hospitals, FMTS, 1 Avenue Molière, 67200 Strasbourg, France.
Department of Hand Surgery, Huashan Hospital, Fudan University, No.12 Wulumuqi Middle Road, 200040, Shanghai, China.
Hand Surg Rehabil. 2025 Jul 19:102225. doi: 10.1016/j.hansur.2025.102225.
While peer review remains the gold standard for evaluating the quality of scientific articles, the process is in crisis due to rising submission volumes and lengthening review times. This study assessed ChatGPT's ability to formulate editorial decisions and produce peer reviews for surgery-related manuscripts. We tested the hypothesis that the quality of ChatGPT's peer reviews exceeds that of human reviewers. Eleven published articles in the field of hand surgery, each initially rejected by one journal and later accepted by another, were anonymized by removing the title page from the original PDF submission; ChatGPT 4o and ChatGPT o1 were then asked to determine each article's eligibility for publication and to generate a peer review. The policy prohibiting the submission of unpublished manuscripts to large language models was not violated, as all articles had already been published at the time of the study. An experienced hand surgeon assessed all peer reviews (both the ChatGPT-generated reviews and the original human reviews from the rejecting and accepting journals) using the ARCADIA score, which comprises 20 items each rated from 1 to 5 on a Likert scale. The average acceptance rate was 95% for ChatGPT 4o and 98% for ChatGPT o1. The concordance of ChatGPT 4o's decisions with those of the journal with the highest impact factor was 32%, versus 29% for ChatGPT o1; concordance with the journal with the lowest impact factor was 68% for ChatGPT 4o and 71% for ChatGPT o1. The ARCADIA scores of peer reviews written by human reviewers (2.8 for the journals that accepted the articles and 3.2 for those that rejected them) were lower than those of ChatGPT 4o (4.8) and ChatGPT o1 (4.9). In conclusion, ChatGPT can streamline the peer review of scientific articles provided it receives precise instructions that limit "hallucinations." Many of its capabilities surpass those of human reviewers, but its limitations must be managed rigorously if publication quality is to improve.
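To make the reported metrics concrete, here is a minimal Python sketch of how the decision concordance and ARCADIA aggregation described above could be computed. The function names, data layout, and example decisions are illustrative assumptions, not the authors' analysis code; only the underlying definitions (agreement rate over 11 manuscripts, 20 items each rated 1 to 5) come from the abstract, and it is assumed here that the reported ARCADIA score is the mean of the 20 item ratings.

```python
from statistics import mean

def concordance(decisions_a, decisions_b):
    """Share of manuscripts on which two aligned lists of
    accept/reject decisions agree."""
    matches = sum(a == b for a, b in zip(decisions_a, decisions_b))
    return matches / len(decisions_a)

def arcadia_score(item_ratings):
    """ARCADIA score, assumed here to be the mean of the 20 items,
    each rated 1-5 on a Likert scale."""
    assert len(item_ratings) == 20 and all(1 <= r <= 5 for r in item_ratings)
    return mean(item_ratings)

# Purely illustrative inputs for 11 manuscripts -- NOT the study data.
llm_decisions     = ["accept"] * 11
journal_decisions = ["reject"] * 7 + ["accept"] * 4

print(f"Concordance: {concordance(llm_decisions, journal_decisions):.0%}")
print(f"ARCADIA (all items rated 4): {arcadia_score([4] * 20):.1f}")
```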