Suppr 超能文献

AUTOGEN: A Personalized Large Language Model for Academic Enhancement-Ethics and Proof of Principle.

Affiliations

University of Oxford.

Independent Researcher.

Publication Info

Am J Bioeth. 2023 Oct;23(10):28-41. doi: 10.1080/15265161.2023.2233356. Epub 2023 Jul 24.

DOI: 10.1080/15265161.2023.2233356
PMID: 37487183
Abstract

In this article, we explore the potential of enhancing academic prose and idea generation by fine-tuning a large language model (here, GPT-3) on one's own previously published writings: AUTOGEN ("AI Unique Tailored Output GENerator"). We develop, test, and describe three distinct AUTOGEN models trained on the prior scholarly output of three of the current authors (SBM, BDE, JS), with a fourth model trained on the combined works of all three. Our AUTOGEN models demonstrate greater variance in quality than the base GPT-3 model, with many outputs outperforming the base model in format, style, overall quality, and novel idea generation. As proof of principle, we present and discuss examples of AUTOGEN-written sections of existing and hypothetical research papers. We further discuss ethical opportunities, concerns, and open questions associated with personalized academic prose and idea generators. Ethical opportunities for personalized LLMs such as AUTOGEN include increased productivity, preservation of writing styles and cultural traditions, and aiding consensus building. However, ethical concerns arise due to the potential for personalized LLMs to reduce output diversity, violate privacy and intellectual property rights, and facilitate plagiarism or fraud. The use of coauthored or multiple-source trained models further complicates issues surrounding ownership and attribution. Open questions concern a potential credit-blame asymmetry for LLM outputs, the legitimacy of licensing agreements in authorship ascription, and the ethical implications of coauthorship attribution for data contributors. Ensuring the output is sufficiently distinct from the source material is crucial to maintaining ethical standards in academic writing. These opportunities, risks, and open issues highlight the intricate ethical landscape surrounding the use of personalized LLMs in academia. 
We also discuss open technical questions concerning the integration of AUTOGEN-style personalized LLMs with other LLMs, such as GPT-4, for iterative refinement and improvement of generated text. In conclusion, we argue that AUTOGEN-style personalized LLMs offer significant potential benefits in terms of both prose generation and, to a lesser extent, idea generation. If associated ethical issues are appropriately addressed, AUTOGEN alone or in combination with other LLMs can be seen as a potent form of academic enhancement.
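The core setup the abstract describes is fine-tuning a base model (GPT-3) on an author's own prior publications so it can continue text in that author's style. A minimal data-preparation sketch is below; the chunk size, the "first fifth as prompt" split, and the prompt/completion JSONL layout are illustrative assumptions, not the authors' actual AUTOGEN pipeline.

```python
import json


def build_finetune_records(documents, chunk_words=200):
    """Split an author's prior writings into prompt/completion pairs.

    The opening portion of each chunk becomes the prompt and the remainder
    the completion, so the fine-tuned model learns to continue passages in
    the author's own style.
    """
    records = []
    for text in documents:
        words = text.split()
        for i in range(0, len(words), chunk_words):
            chunk = words[i:i + chunk_words]
            if len(chunk) < 20:  # skip fragments too short to be useful
                continue
            split = max(1, len(chunk) // 5)  # roughly the first fifth as prompt
            records.append({
                "prompt": " ".join(chunk[:split]),
                "completion": " " + " ".join(chunk[split:]),
            })
    return records


def write_jsonl(records, path):
    """Write records in the one-JSON-object-per-line format that
    fine-tuning endpoints commonly accept."""
    with open(path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```

Each author's corpus would yield one such JSONL file (and a fourth from the combined corpora); the resulting file is what gets submitted to the provider's fine-tuning API.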

Similar Articles

1
AUTOGEN: A Personalized Large Language Model for Academic Enhancement-Ethics and Proof of Principle.
Am J Bioeth. 2023 Oct;23(10):28-41. doi: 10.1080/15265161.2023.2233356. Epub 2023 Jul 24.
2
Ethical Considerations and Fundamental Principles of Large Language Models in Medical Education: Viewpoint.
J Med Internet Res. 2024 Aug 1;26:e60083. doi: 10.2196/60083.
3
Large Language Models and User Trust: Consequence of Self-Referential Learning Loop and the Deskilling of Health Care Professionals.
J Med Internet Res. 2024 Apr 25;26:e56764. doi: 10.2196/56764.
4
Academic Surgery in the Era of Large Language Models: A Review.
JAMA Surg. 2024 Apr 1;159(4):445-450. doi: 10.1001/jamasurg.2023.6496.
5
Leveraging Large Language Models for Precision Monitoring of Chemotherapy-Induced Toxicities: A Pilot Study with Expert Comparisons and Future Directions.
Cancers (Basel). 2024 Aug 12;16(16):2830. doi: 10.3390/cancers16162830.
6
The Role of Large Language Models in Transforming Emergency Medicine: Scoping Review.
JMIR Med Inform. 2024 May 10;12:e53787. doi: 10.2196/53787.
7
Quality of Answers of Generative Large Language Models Versus Peer Users for Interpreting Laboratory Test Results for Lay Patients: Evaluation Study.
J Med Internet Res. 2024 Apr 17;26:e56655. doi: 10.2196/56655.
8
Potential of Large Language Models in Health Care: Delphi Study.
J Med Internet Res. 2024 May 13;26:e52399. doi: 10.2196/52399.
9
Large Language Models in Ophthalmology: Potential and Pitfalls.
Semin Ophthalmol. 2024 May;39(4):289-293. doi: 10.1080/08820538.2023.2300808. Epub 2024 Jan 5.
10
Large language models are changing landscape of academic publications. A positive transformation?
Cas Lek Cesk. 2024;162(7-8):294-297.

Cited By

1
The Geometry of Language: Understanding LLMs in Bioethics.
J Bioeth Inq. 2025 Sep 11. doi: 10.1007/s11673-025-10480-1.
2
To take a different approach: Can large language models provide knowledge related to respiratory aspiration?
Digit Health. 2025 Jul 10;11:20552076251349616. doi: 10.1177/20552076251349616. eCollection 2025 Jan-Dec.
3
Open Science at the generative AI turn: An exploratory analysis of challenges and opportunities.
Quant Sci Stud. 2025;6:22-45. doi: 10.1162/qss_a_00337. Epub 2025 Jan 27.
4
Credit and blame for AI-generated content: Effects of personalization in four countries.
Ann N Y Acad Sci. 2024 Dec;1542(1):51-57. doi: 10.1111/nyas.15258. Epub 2024 Nov 25.
5
Digital Doppelgängers and Lifespan Extension: What Matters?
Am J Bioeth. 2025 Feb;25(2):95-110. doi: 10.1080/15265161.2024.2416133. Epub 2024 Nov 14.
6
Enabling Demonstrated Consent for Biobanking with Blockchain and Generative AI.
Am J Bioeth. 2025 Apr;25(4):96-111. doi: 10.1080/15265161.2024.2416117. Epub 2024 Nov 5.
7
The transformative impact of large language models on medical writing and publishing: current applications, challenges and future directions.
Korean J Physiol Pharmacol. 2024 Sep 1;28(5):393-401. doi: 10.4196/kjpp.2024.28.5.393.
8
Ethics of artificial intelligence in medicine.
Singapore Med J. 2024 Mar 1;65(3):150-158. doi: 10.4103/singaporemedj.SMJ-2023-279. Epub 2024 Mar 26.
9
AI and the need for justification (to the patient).
Ethics Inf Technol. 2024;26(1):16. doi: 10.1007/s10676-024-09754-w. Epub 2024 Mar 4.
10
A Personalized Patient Preference Predictor for Substituted Judgments in Healthcare: Technically Feasible and Ethically Desirable.
Am J Bioeth. 2024 Jul;24(7):13-26. doi: 10.1080/15265161.2023.2296402. Epub 2024 Jan 16.