

Identification of Online Health Information Using Large Pretrained Language Models: Mixed Methods Study.

Authors

Tan Dongmei, Huang Yi, Liu Ming, Li Ziyu, Wu Xiaoqian, Huang Cheng

Affiliations

College of Medical Informatics, Chongqing Medical University, Chongqing, China.

Human Resources Department, Army Medical Center, Army Medical University (The Third Military Medical University), Chongqing, China.

Publication Information

J Med Internet Res. 2025 May 14;27:e70733. doi: 10.2196/70733.

Abstract

BACKGROUND

Online health information is widely available, but a substantial portion of it is inaccurate or misleading, including exaggerated, incomplete, or unverified claims. Such misinformation can significantly influence public health decisions and pose serious challenges to health care systems. With advances in artificial intelligence and natural language processing, pretrained large language models (LLMs) have shown promise in identifying and distinguishing misleading health information, although their effectiveness in this area remains underexplored.

OBJECTIVE

This study aimed to evaluate the performance of 4 mainstream LLMs (ChatGPT-3.5, ChatGPT-4, Ernie Bot, and iFLYTEK Spark) in the identification of online health information, providing empirical evidence for their practical application in this field.

METHODS

Web scraping was used to collect data from rumor-refuting websites, resulting in 2708 samples of online health information, including both true and false claims. The 4 LLMs' application programming interfaces were used for authenticity verification, with expert results as benchmarks. Model performance was evaluated using semantic similarity, accuracy, recall, F-score, content analysis, and credibility.
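To make the evaluation step concrete, the sketch below shows one way the benchmark comparison could be scored, assuming each scraped claim already carries an expert label (the benchmark) and a true/false verdict returned by one of the LLM application programming interfaces. The sample claims and the metric implementation are illustrative placeholders, not the authors' pipeline or data.

```python
# Minimal sketch: scoring LLM verdicts against expert labels with
# accuracy, recall, and F-score. All records below are hypothetical.

# Hypothetical records: (claim text, expert label, model label)
records = [
    ("Drinking hot water cures influenza.", "false", "false"),
    ("Vitamin C megadoses prevent the common cold.", "false", "true"),
    ("Regular handwashing reduces infection risk.", "true", "true"),
    ("Folic acid is recommended during early pregnancy.", "true", "true"),
]

def binary_metrics(pairs, positive="false"):
    """Accuracy, recall, and F-score, treating the `positive` class
    (here 'false', i.e., misinformation) as the positive class."""
    tp = fp = fn = correct = 0
    for expert, model in pairs:
        if model == expert:
            correct += 1
        if model == positive and expert == positive:
            tp += 1
        elif model == positive and expert != positive:
            fp += 1
        elif model != positive and expert == positive:
            fn += 1
    accuracy = correct / len(pairs)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return accuracy, recall, f_score

pairs = [(expert, model) for _, expert, model in records]
acc, rec, f1 = binary_metrics(pairs)
print(f"accuracy={acc:.2%}  recall={rec:.2%}  F-score={f1:.2%}")
```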

RESULTS

This study found that the 4 models performed well in identifying online health information. Among them, ChatGPT-4 achieved the highest accuracy at 87.27%, followed by Ernie Bot at 87.25%, iFLYTEK Spark at 87%, and ChatGPT-3.5 at 81.82%. Furthermore, text length and semantic similarity analysis showed that Ernie Bot had the highest similarity to expert texts, whereas ChatGPT-4 showed good overall consistency in its explanations. In addition, the credibility assessment results indicated that ChatGPT-4 provided the most reliable evaluations. Further analysis suggested that the LLMs' highest misjudgment probabilities occurred within the topics of food and maternal-infant nutrition management, and nutritional science and food controversies. Overall, the research suggests that LLMs have potential in online health information identification; however, their understanding of certain specialized health topics may require further improvement.
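For context, "semantic similarity to expert texts" can be operationalized as cosine similarity between text embeddings. The abstract does not state which embedding model or similarity metric the authors used, so the snippet below is only an illustrative sketch using the sentence-transformers library with a hypothetical model choice and example texts.

```python
# Illustrative sketch: cosine similarity between an expert explanation
# and a model-generated explanation. Model choice and texts are
# hypothetical, not taken from the study.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

expert_explanation = ("Boiling water does not remove heavy-metal contamination; "
                      "it only kills microorganisms.")
model_explanation = ("Heating water kills bacteria but cannot eliminate "
                     "heavy metals dissolved in it.")

emb_expert, emb_model = encoder.encode(
    [expert_explanation, model_explanation], convert_to_tensor=True
)
similarity = util.cos_sim(emb_expert, emb_model).item()
print(f"semantic similarity: {similarity:.3f}")  # closer to 1.0 = more similar
```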

CONCLUSIONS

The results demonstrate that, while these models show potential in providing assistance, their performance varies significantly in terms of accuracy, semantic understanding, and cultural adaptability. The principal findings highlight the models' ability to generate accessible and context-aware explanations; however, they fall short in areas requiring specialized medical knowledge or updated data, particularly for emerging health issues and context-sensitive scenarios. Significant discrepancies were observed in the models' ability to distinguish scientifically verified knowledge from popular misconceptions and in their stability when processing complex linguistic and cultural contexts. These challenges reveal the importance of refining training methodologies to improve the models' reliability and adaptability. Future research should focus on enhancing the models' capability to manage nuanced health topics and diverse cultural and linguistic contexts, thereby facilitating their broader adoption as reliable tools for online health information identification.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/bde9/12120363/b926ce878fee/jmir_v27i1e70733_fig1.jpg
