Improving Readability and Automating Content Analysis of Plastic Surgery Webpages With ChatGPT.

Affiliations

Division of Plastic and Reconstructive Surgery, Department of Surgery, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts.

Department of Plastic and Reconstructive Surgery, Ohio State University Wexner Medical Center, Columbus, Ohio.

Publication Information

J Surg Res. 2024 Jul;299:103-111. doi: 10.1016/j.jss.2024.04.006. Epub 2024 May 14.

DOI: 10.1016/j.jss.2024.04.006
PMID: 38749313
Abstract

INTRODUCTION

The quality and readability of online health information are sometimes suboptimal, reducing its usefulness to patients. Manual evaluation of online medical information is time-consuming and error-prone. This study automates content analysis and readability improvement of private-practice plastic surgery webpages using ChatGPT.

METHODS

The first 70 Google search results of "breast implant size factors" and "breast implant size decision" were screened. ChatGPT 3.5 and 4.0 were utilized with two prompts (1: general, 2: specific) to automate content analysis and rewrite webpages with improved readability. ChatGPT content analysis outputs were classified as hallucination (false positive), accurate (true positive or true negative), or omission (false negative) using human-rated scores as a benchmark. Six readability metric scores of original and revised webpage texts were compared.
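To make the three-way scoring concrete, here is a minimal Python sketch, not the authors' code, of how a model's content-analysis output could be classified against a human-rated benchmark. The binary presence/absence coding and the example data are illustrative assumptions.

def classify(model_present: bool, human_present: bool) -> str:
    """Map one (model, human) judgment pair to the paper's scoring categories."""
    if model_present and not human_present:
        return "hallucination"  # false positive
    if not model_present and human_present:
        return "omission"       # false negative
    return "accurate"           # true positive or true negative

def category_rates(pairs):
    """Per-factor rates of hallucination, accuracy, and omission."""
    labels = [classify(m, h) for m, h in pairs]
    return {c: labels.count(c) / len(labels)
            for c in ("hallucination", "accurate", "omission")}

# Hypothetical data for one decision-making factor across five webpages:
# (ChatGPT says the factor is discussed, human rater says it is discussed).
print(category_rates([(True, True), (True, False), (False, True),
                      (False, False), (True, True)]))
# -> {'hallucination': 0.2, 'accurate': 0.6, 'omission': 0.2}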

RESULTS

Seventy-five webpages were included. Significant improvements were achieved from baseline in six readability metric scores using a specific-instruction prompt with ChatGPT 3.5 (all P ≤ 0.05). No further improvements in readability scores were achieved with ChatGPT 4.0. Rates of hallucination, accuracy, and omission in ChatGPT content scoring varied widely between decision-making factors. Compared to ChatGPT 3.5, average accuracy rates increased while omission rates decreased with ChatGPT 4.0 content analysis output.
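Below is a minimal sketch of how such a before/after readability comparison could be reproduced. The textstat package, the three example metrics, and the paired Wilcoxon signed-rank test are assumptions; the paper used six metrics but does not specify its software or exact statistical test.

import textstat
from scipy.stats import wilcoxon

def scores(text: str) -> dict[str, float]:
    """Three common readability metrics (the study used six)."""
    return {
        "flesch_reading_ease": textstat.flesch_reading_ease(text),
        "flesch_kincaid_grade": textstat.flesch_kincaid_grade(text),
        "gunning_fog": textstat.gunning_fog(text),
    }

def compare(originals: list[str], revised: list[str], metric: str):
    """Paired test of one metric across original/revised webpage pairs."""
    before = [scores(t)[metric] for t in originals]
    after = [scores(t)[metric] for t in revised]
    return wilcoxon(before, after)  # returns (statistic, p-value)

A significant p-value together with improved mean scores in the revised texts would correspond to the result reported above.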

CONCLUSIONS

ChatGPT offers an innovative approach to enhancing the quality of online medical information and expanding the capabilities of plastic surgery research and practice. Automation of content analysis is limited by ChatGPT 3.5's high omission rates and ChatGPT 4.0's high hallucination rates. Our results also underscore the importance of iterative prompt design to optimize ChatGPT performance in research tasks.

Similar Articles

1. Improving Readability and Automating Content Analysis of Plastic Surgery Webpages With ChatGPT. J Surg Res. 2024 Jul;299:103-111. doi: 10.1016/j.jss.2024.04.006. Epub 2024 May 14.
2. Dr. Google to Dr. ChatGPT: assessing the content and quality of artificial intelligence-generated medical information on appendicitis. Surg Endosc. 2024 May;38(5):2887-2893. doi: 10.1007/s00464-024-10739-5. Epub 2024 Mar 5.
3. Can ChatGPT Aid Clinicians in Educating Patients on the Surgical Management of Glaucoma? J Glaucoma. 2024 Feb 1;33(2):94-100. doi: 10.1097/IJG.0000000000002338. Epub 2023 Nov 24.
4. Content and Readability of Online Recommendations for Breast Implant Size Selection. Plast Reconstr Surg Glob Open. 2023 Jan 24;11(1):e4787. doi: 10.1097/GOX.0000000000004787. eCollection 2023 Jan.
5. Optimizing Readability of Patient-Facing Hand Surgery Education Materials Using Chat Generative Pretrained Transformer (ChatGPT) 3.5. J Hand Surg Am. 2024 Oct;49(10):986-991. doi: 10.1016/j.jhsa.2024.05.007. Epub 2024 Jul 6.
6. Dr. Google vs. Dr. ChatGPT: Exploring the Use of Artificial Intelligence in Ophthalmology by Comparing the Accuracy, Safety, and Readability of Responses to Frequently Asked Patient Questions Regarding Cataracts and Cataract Surgery. Semin Ophthalmol. 2024 Aug;39(6):472-479. doi: 10.1080/08820538.2024.2326058. Epub 2024 Mar 22.
7. BPPV Information on Google Versus AI (ChatGPT). Otolaryngol Head Neck Surg. 2024 Jun;170(6):1504-1511. doi: 10.1002/ohn.506. Epub 2023 Aug 25.
8. The Use of Large Language Models to Generate Education Materials about Uveitis. Ophthalmol Retina. 2024 Feb;8(2):195-201. doi: 10.1016/j.oret.2023.09.008. Epub 2023 Sep 15.
9. Analysis of the quality, accuracy, and readability of patient information on polycystic ovarian syndrome (PCOS) on the internet available in English: a cross-sectional study. Reprod Biol Endocrinol. 2023 May 15;21(1):44. doi: 10.1186/s12958-023-01100-x.
10. Accessibility of online self-management support websites for people with osteoarthritis: A text content analysis. Chronic Illn. 2019 Mar;15(1):27-40. doi: 10.1177/1742395317746471. Epub 2017 Dec 18.

Cited By

1. Large language models in patient education: a scoping review of applications in medicine. Front Med (Lausanne). 2024 Oct 29;11:1477898. doi: 10.3389/fmed.2024.1477898. eCollection 2024.