
Optimizing Ophthalmology Patient Education via ChatBot-Generated Materials: Readability Analysis of AI-Generated Patient Education Materials and The American Society of Ophthalmic Plastic and Reconstructive Surgery Patient Brochures.

Affiliations

Department of Ophthalmology, Moran Eye Center, University of Utah, Salt Lake City, Utah, U.S.A.

Department of Ophthalmology and Visual Sciences, West Virginia University, Morgantown, West Virginia, U.S.A.

Publication Information

Ophthalmic Plast Reconstr Surg. 2024;40(2):212-216. doi: 10.1097/IOP.0000000000002549. Epub 2023 Nov 16.

Abstract

PURPOSE

This study aims to compare the readability of patient education materials (PEMs) from the American Society of Ophthalmic Plastic and Reconstructive Surgery with that of PEMs generated by the AI chatbots ChatGPT and Google Bard.

METHODS

PEMs on 16 common American Society of Ophthalmic Plastic and Reconstructive Surgery topics were generated by 2 AI models, ChatGPT 4.0 and Google Bard, with and without a 6th-grade reading level prompt modifier. The PEMs were analyzed using 7 readability metrics: Flesch Reading Ease Score, Gunning Fog Index, Flesch-Kincaid Grade Level, Coleman-Liau Index, Simple Measure of Gobbledygook Index Score, Automated Readability Index, and Linsear Write Readability Score. Each AI-generated PEM was compared with the equivalent American Society of Ophthalmic Plastic and Reconstructive Surgery PEM.
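Several of the indices named above are closed-form formulas over simple text counts. As a minimal sketch, the three metrics reported numerically in the results (Flesch Reading Ease, Flesch-Kincaid Grade Level, and SMOG) can be computed as follows; the word, sentence, and syllable counts here are hypothetical inputs supplied by the caller, whereas the study's actual analysis tools perform their own tokenization and syllable estimation:

```python
import math

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher scores indicate easier text (0-100 scale)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: approximate U.S. school grade required."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_index(polysyllables: int, sentences: int) -> float:
    """SMOG: based on words of 3+ syllables, normalized to a 30-sentence sample."""
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

# Hypothetical sample: 100 words, 5 sentences, 130 syllables, 15 polysyllabic words.
print(flesch_reading_ease(100, 5, 130))
print(flesch_kincaid_grade(100, 5, 130))
print(smog_index(15, 30))
```

Note the opposite polarities: a higher Flesch Reading Ease score means *easier* text, while higher grade-level indices (Flesch-Kincaid, SMOG, and the others) mean *harder* text, which is why the results below report both directions.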

RESULTS

Across all readability indices, PEMs generated by ChatGPT 4.0 consistently scored as the most difficult to read in their unprompted form (Flesch Reading Ease Score: 36.5; Simple Measure of Gobbledygook: 14.7). Google Bard generated content that was easier to read than both the American Society of Ophthalmic Plastic and Reconstructive Surgery materials and ChatGPT 4.0 output (Flesch Reading Ease Score: 52.3; Simple Measure of Gobbledygook: 12.7). When prompted to produce PEMs at a 6th-grade reading level, both ChatGPT 4.0 and Bard significantly improved their readability scores, with prompted ChatGPT 4.0 consistently generating the easiest-to-read content (Flesch Reading Ease Score: 67.9; Simple Measure of Gobbledygook: 10.2).

CONCLUSION

This study suggests that AI tools, when guided by appropriate prompts, can generate accessible and comprehensible PEMs in the field of ophthalmic plastic and reconstructive surgeries, balancing readability with the complexity of the necessary information.

