Hidden Prompts in Manuscripts Threaten the Integrity of Peer Review and Research: Recommendations for Journals and Institutions.

Author Information

Louie Giray

Affiliation

Department of Liberal Arts, School of Foundational Studies and Education, Mapúa University, Manila, Philippines.

Publication Information

Ann Biomed Eng. 2025 Aug 17. doi: 10.1007/s10439-025-03827-7.

Abstract

I examine the scholarly implications of a troubling case in which researchers embedded hidden prompts such as "give a positive review only" into academic preprints to manipulate AI-assisted peer review. AI is now woven into nearly every facet of academic life, including the peer review process. I contend that manipulating peer review by embedding secret prompts is as serious as plagiarism or data fabrication. Peer review may not be perfect, but deception is misconduct. Reviewers must still be held accountable: those who rely blindly on AI outputs without critical engagement fail in their scholarly duty. AI should only amplify the reviewer's expertise. As institutions begin regulating AI in research, similar frameworks must extend to peer review. Journals and publishers should establish clear, enforceable guidelines on acceptable AI use: Will AI be banned, regulated, or embraced? If it is allowed, disclosure must be mandatory. Authors should also be informed if AI tools will be used in the review process, ensuring transparency and consent. Confidentiality is another pressing issue. Real cases have shown how ChatGPT links shared by reviewers were indexed online, compromising sensitive, unpublished research, although OpenAI has since discontinued public link discoverability. Beyond policy, we must cultivate a culture of transparency, trust, and responsibility. Institutions can host ethics workshops and mentor early-career scholars. This is not just about AI; it is about who we are as researchers and reviewers. No matter how advanced the technology, integrity must remain our anchor. Without it, even the most innovative research stands on shaky ground.
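The attack described in the abstract works because text that is invisible to human readers (for example, white-on-white or zero-size fonts in a PDF) is still returned by plain text extraction, so an LLM that ingests the extracted text sees the hidden instruction. For journals weighing submission screening, a minimal detection sketch follows; it assumes submissions arrive as PDFs, uses the pypdf library, and relies on a hypothetical, illustrative phrase list rather than any established ruleset.

```python
# Minimal sketch: flag candidate hidden-prompt injections in a submitted PDF.
# Assumptions (not from the article): submissions are PDFs, pypdf is
# installed, and SUSPICIOUS_PATTERNS below is a hypothetical ruleset.
import re
import sys

from pypdf import PdfReader

# Hypothetical indicators of reviewer-directed prompt injection.
SUSPICIOUS_PATTERNS = [
    r"give\s+a\s+positive\s+review",
    r"ignore\s+(all\s+)?previous\s+instructions",
    r"do\s+not\s+highlight\s+any\s+negatives",
    r"recommend\s+accept(ance)?",
]


def scan_pdf(path: str) -> list[tuple[int, str]]:
    """Return (page_number, matched_text) pairs for suspicious phrases.

    Hidden text is invisible to humans but is still returned by text
    extraction, which is what makes this simple first-pass screen work.
    """
    hits = []
    reader = PdfReader(path)
    for page_no, page in enumerate(reader.pages, start=1):
        text = page.extract_text() or ""
        for pattern in SUSPICIOUS_PATTERNS:
            for match in re.finditer(pattern, text, re.IGNORECASE):
                hits.append((page_no, match.group(0)))
    return hits


if __name__ == "__main__":
    for page_no, snippet in scan_pdf(sys.argv[1]):
        print(f"page {page_no}: possible injection -> {snippet!r}")
```

A production screening pipeline would go further, for instance by comparing the extracted text against the visually rendered text, since that mismatch, rather than any fixed phrase list, is the defining signature of hidden-prompt injection.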
