

Personal experience with AI-generated peer reviews: a case study.

Author

Lo Vecchio Nicholas

Affiliation

Independent researcher, Marseille, France.

Publication

Res Integr Peer Rev. 2025 Apr 7;10(1):4. doi: 10.1186/s41073-025-00161-3.

DOI: 10.1186/s41073-025-00161-3
PMID: 40189554
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11974187/
Abstract

BACKGROUND

While some recent studies have looked at large language model (LLM) use in peer review at the corpus level, to date there have been few examinations of instances of AI-generated reviews in their social context. The goal of this first-person account is to present my experience of receiving two anonymous peer review reports that I believe were produced using generative AI, as well as lessons learned from that experience.

METHODS

This is a case report on the timeline of the incident, and my and the journal's actions following it. Supporting evidence includes text patterns in the reports, online AI detection tools and ChatGPT simulations; recommendations are offered for others who may find themselves in a similar situation. The primary research limitation of this article is that it is based on one individual's personal experience.

RESULTS

After alleging the use of generative AI in December 2023, two months of back-and-forth ensued between myself and the journal, leading to my withdrawal of the submission. The journal denied any ethical breach, without taking an explicit position on the allegations of LLM use. Based on this experience, I recommend that authors engage in dialogue with journals on AI use in peer review prior to article submission; where undisclosed AI use is suspected, authors should proactively amass evidence, request an investigation protocol, escalate the matter as needed, involve independent bodies where possible, and share their experience with fellow researchers.

CONCLUSIONS

Journals need to promptly adopt transparent policies on LLM use in peer review, in particular requiring disclosure. Open peer review where identities of all stakeholders are declared might safeguard against LLM misuse, but accountability in the AI era is needed from all parties.


Similar Articles

1. Personal experience with AI-generated peer reviews: a case study.
   Res Integr Peer Rev. 2025 Apr 7;10(1):4. doi: 10.1186/s41073-025-00161-3.
2. Evaluation of the impact of large language learning models on articles submitted to Orthopaedics & Traumatology: Surgery & Research (OTSR): A significant increase in the use of artificial intelligence in 2023.
   Orthop Traumatol Surg Res. 2023 Dec;109(8):103720. doi: 10.1016/j.otsr.2023.103720. Epub 2023 Oct 20.
3. Large language models for conducting systematic reviews: on the rise, but not yet ready for use-a scoping review.
   J Clin Epidemiol. 2025 May;181:111746. doi: 10.1016/j.jclinepi.2025.111746. Epub 2025 Feb 26.
4. Folic acid supplementation and malaria susceptibility and severity among people taking antifolate antimalarial drugs in endemic areas.
   Cochrane Database Syst Rev. 2022 Feb 1;2(2022):CD014217. doi: 10.1002/14651858.CD014217.
5. ChatGPT and the Future of Journal Reviews: A Feasibility Study.
   Yale J Biol Med. 2023 Sep 29;96(3):415-420. doi: 10.59249/SKDH9286. eCollection 2023 Sep.
6. Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review.
   Res Integr Peer Rev. 2023 May 18;8(1):4. doi: 10.1186/s41073-023-00133-5.
7. Large language model usage guidelines in Korean medical journals: a survey using human-artificial intelligence collaboration.
   J Yeungnam Med Sci. 2025;42:14. doi: 10.12701/jyms.2024.00794. Epub 2024 Dec 11.
8. Using Generative Artificial Intelligence in Health Economics and Outcomes Research: A Primer on Techniques and Breakthroughs.
   Pharmacoecon Open. 2025 Apr 29. doi: 10.1007/s41669-025-00580-4.
9. Large Language Models and User Trust: Consequence of Self-Referential Learning Loop and the Deskilling of Health Care Professionals.
   J Med Internet Res. 2024 Apr 25;26:e56764. doi: 10.2196/56764.
10. The importance of transparency: Declaring the use of generative artificial intelligence (AI) in academic writing.
   J Nurs Scholarsh. 2024 Mar;56(2):314-318. doi: 10.1111/jnu.12938. Epub 2023 Oct 31.
