

Artificial intelligence insights into osteoporosis: assessing ChatGPT's information quality and readability.

Affiliations

Clinic of Physical Medicine and Rehabilitation, İzzet Baysal Physical Treatment and Rehabilitation Training and Research Hospital, Orüs Street, No. 59, 14020, Bolu, Turkey.

Department of Physical Medicine and Rehabilitation, Üsküdar State Hospital, Barbaros, Veysi Paşa Street, No. 14, 34662, Istanbul, Turkey.

Publication information

Arch Osteoporos. 2024 Mar 19;19(1):17. doi: 10.1007/s11657-024-01376-5.

Abstract

UNLABELLED

Accessible, accurate information and readability play a crucial role in empowering individuals managing osteoporosis. This study showed that the responses generated by ChatGPT regarding osteoporosis had serious problems with quality and were at a level of complexity that necessitates an educational background of approximately 17 years.

PURPOSE

The use of artificial intelligence (AI) applications as a source of health information is increasing. Readable and accurate information plays a critical role in empowering patients to make decisions about their disease. The aim was to examine the quality and readability of responses provided by ChatGPT, an AI chatbot, to commonly asked questions regarding osteoporosis, a major public health problem.

METHODS

"Osteoporosis," "female osteoporosis," and "male osteoporosis" were identified by using Google trends for the 25 most frequently searched keywords on Google. A selected set of 38 keywords was sequentially inputted into the chat interface of the ChatGPT. The responses were evaluated with tools of the Ensuring Quality Information for Patients (EQIP), the Flesch-Kincaid Grade Level (FKGL), and the Flesch-Kincaid Reading Ease (FKRE).

RESULTS

The EQIP scores of the texts ranged from a minimum of 36.36 to a maximum of 61.76, with a mean of 48.71, placing the responses in the "serious problems with quality" category. The FKRE scores spanned from 13.71 to 56.06 with a mean of 28.71, and the FKGL varied between 8.48 and 17.63 with a mean of 13.25. There were no statistically significant correlations between the EQIP score and the FKGL or FKRE scores.
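For context (not part of the study's analysis), the conventional interpretation bands for the Flesch Reading Ease scale place the reported mean of 28.71 in the "very difficult, college graduate" range, consistent with the grade-level findings. A minimal sketch of that mapping, assuming the widely used band cut-offs:

def fre_band(score):
    # Conventional Flesch Reading Ease interpretation bands (approximate cut-offs).
    bands = [
        (90, "very easy"),
        (70, "fairly easy to easy"),
        (60, "plain English"),
        (50, "fairly difficult"),
        (30, "difficult (college level)"),
    ]
    for cutoff, label in bands:
        if score >= cutoff:
            return label
    return "very difficult (college graduate level)"

print(fre_band(28.71))  # mean FKRE reported above
print(fre_band(56.06))  # highest FKRE reported above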

CONCLUSIONS

Although ChatGPT is easily accessible for patients to obtain information about osteoporosis, its current quality and readability fall short of meeting comprehensive healthcare standards.

