
Ejtm3 experiences after ChatGPT and other AI approaches: values, risks, countermeasures.

Author Information

Fanò-Illic Giorgio, Coraci Daniele, Maccarone Maria Chiara, Masiero Stefano, Quadrelli Marco, Morra Aldo, Ravara Barbara, Pond Amber, Forni Riccardo, Gargiulo Paolo

Affiliations

Interuniversity Institute of Myology, Chieti-Pescara University, Italy; Free University of Alcatraz, Gubbio, Perugia, Italy; A&C M-C Foundation for Translational Myology, Padua, Italy.

Department of Neuroscience, Section of Rehabilitation, University of Padova, Padua, Italy.

Publication Information

Eur J Transl Myol. 2025 Mar 31;35(1). doi: 10.4081/ejtm.2025.13670. Epub 2025 Jan 30.

Abstract

We invariably hear that Artificial Intelligence (AI), a rapidly evolving technology, does not just creatively assemble known knowledge. We are told that AI learns, processes and creates, starting from fixed points to arrive at innovative solutions. In the case of scientific work, AI can generate data without ever having entered a laboratory (i.e., by blatantly plagiarizing the existing literature, a despicable old trick). How does an editor of a scientific journal recognize when she or he is faced with something like this? The solution is for editors and referees to rigorously evaluate the track records of submitting authors and what they are doing. For example, false-color evaluations of 2D and 3D CT and MRI images have been used to validate functional electrical stimulation for degenerated denervated muscle and a home Full-Body In-Bed Gym program. These have recently been published in Ejtm and other journals. The editors and referees of Ejtm can exclude the possibility that the images were invented by ChatGPT. Why? Because they know the researchers: Marco Quadrelli, Aldo Morra, Daniele Coraci, Paolo Gargiulo and their collaborators as well! Artificial intelligence is not banned by Ejtm, but when submitting their manuscripts to previous Thematic Sections and to a new Thematic Section dedicated to Generative AI in Translational Mobility Medicine, authors must openly declare whether they have used artificial intelligence, of what type, and for what purposes. This will not eliminate the risks of plagiarism or worse, but it will better establish possible liabilities.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6709/12038559/0f580662babf/ejtm-35-1-13670-g001.jpg
