
Reliability and Quality of the Nursing Care Planning Texts Generated by ChatGPT.

Affiliations

Author Affiliation: Department of Nursing, Bezmialem Vakif University, Faculty of Health Sciences, Istanbul, Turkey.

Publication Information

Nurse Educ. 2024;49(3):E109-E114. doi: 10.1097/NNE.0000000000001566. Epub 2023 Nov 22.

Abstract

BACKGROUND

Research on ChatGPT-generated nursing care planning texts is critical for enhancing nursing education through innovative and accessible learning methods and for improving the reliability and quality of these texts.

PURPOSE

The aim of the study was to examine the quality, authenticity, and reliability of the nursing care planning texts produced using ChatGPT.

METHODS

The study sample comprised 40 texts generated by ChatGPT for selected nursing diagnoses included in NANDA 2021-2023. The texts were evaluated using a descriptive criteria form and the DISCERN tool for assessing health information.

RESULTS

The mean DISCERN total score of the texts was 45.93 ± 4.72. All texts showed a moderate level of reliability, and 97.5% of them scored at a moderate level on the quality-of-information subscale. Statistically significant correlations were found between the number of accessible references and both the reliability (r = 0.408) and quality subscale scores (r = 0.379) of the texts (P < .05).

CONCLUSION

Despite low similarity rates, the ChatGPT-generated texts exhibited moderate reliability, moderate quality of nursing care information, and moderate overall quality.
