
Comparing letters written by humans and ChatGPT: A preliminary study.

Author information

Matsubara Shigeki

Affiliations

Department of Obstetrics and Gynecology, Jichi Medical University, Tochigi, Japan.

Department of Obstetrics and Gynecology, Koga Red Cross Hospital, Ibaraki, Japan.

Publication information

Int J Gynaecol Obstet. 2025 Jan;168(1):320-325. doi: 10.1002/ijgo.15827. Epub 2024 Jul 31.

Abstract

OBJECTIVE

No criteria exist for the types of manuscripts in which, or the extent to which, ChatGPT use is permissible. I aimed to determine which writes more readable letters to the editor, a human or ChatGPT, and whether ChatGPT can write letters that mimic a particular person's style. I also aimed to provide hints as to what distinguishes human writing from ChatGPT's.

METHODS

This is a descriptive pilot study. I tasked ChatGPT (version 3.5) with generating a letter disagreeing with my previous article (Letter 0). I wrote a letter identifying three weaknesses of the addressed article (Letter 1). I then provided ChatGPT with these three weaknesses and tasked it with generating a letter (Letter 2). Finally, I supplied my previously authored letters and tasked ChatGPT with emulating my writing style (Letter 3). Eight professors evaluated the letters' readability, and ChatGPT assessed which letter was more likely to be accepted.

RESULTS

ChatGPT produced coherent letters (Letters 0 and 2). Professors rated the readability of Letters 1 and 2 similarly, finding Letter 3 less readable. ChatGPT determined that the human-authored Letter 1 had a slightly higher acceptance likelihood than the ChatGPT-generated Letter 2. Although ChatGPT identified personal writing styles, its mimicry did not enhance the letter's quality.

CONCLUSION

This preliminary experiment indicates that human-written letters are perceived to be as readable as, or no less readable than, ChatGPT-generated ones. It suggests that human touch, with its inherent enthusiasm, is essential for effective letter writing. Further comprehensive investigations are warranted to ascertain the extent to which ChatGPT can be used in this domain.

