Kwong Jethro C C, Wang Serena C Y, Nickel Grace C, Cacciamani Giovanni E, Kvedar Joseph C
Division of Urology, Department of Surgery, University of Toronto, Toronto, ON, Canada.
Temerty Centre for AI Research and Education in Medicine, University of Toronto, Toronto, ON, Canada.
NPJ Digit Med. 2024 Jul 4;7(1):177. doi: 10.1038/s41746-024-01180-y.
Large language models (LLMs) have shown promise in reducing the time, costs, and errors associated with manual data extraction. A recent study demonstrated that LLMs outperformed natural language processing approaches in abstracting pathology report information. However, challenges include the risks of weakening critical thinking, propagating biases, and hallucinations, which may undermine the scientific method and disseminate inaccurate information. Incorporating suitable guidelines (e.g., CANGARU) should be encouraged to ensure responsible LLM use.
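The abstraction workflow described above can be sketched in code. This is a minimal, hypothetical illustration, not the study's actual pipeline: `call_llm` is a stub standing in for any chat-completion API, and the field names and example report are invented for demonstration. The key ideas it shows are (1) prompting the model to return only structured JSON and (2) validating the returned keys before accepting the output, a simple guard against hallucinated fields.

```python
import json

# Target fields to abstract from a free-text pathology report
# (illustrative names, not from the cited study).
FIELDS = ["diagnosis", "gleason_score", "margin_status"]


def build_prompt(report_text):
    """Ask the model to reply with a JSON object containing only the target fields."""
    return (
        "Extract the following fields from the pathology report below and "
        "reply with a JSON object only (use null if a field is absent): "
        + ", ".join(FIELDS)
        + "\n\nReport:\n"
        + report_text
    )


def call_llm(prompt):
    # Stub standing in for a real LLM API call; a deployment would
    # substitute an actual client here.
    return json.dumps({
        "diagnosis": "prostatic adenocarcinoma",
        "gleason_score": "3+4=7",
        "margin_status": "negative",
    })


def abstract_report(report_text):
    """Run the prompt, parse the JSON reply, and keep only expected keys."""
    raw = call_llm(build_prompt(report_text))
    data = json.loads(raw)
    # Restricting to FIELDS drops any extra (possibly hallucinated) keys;
    # missing keys come back as None for downstream review.
    return {k: data.get(k) for k in FIELDS}


if __name__ == "__main__":
    report = (
        "Prostate, radical prostatectomy: prostatic adenocarcinoma, "
        "Gleason score 3+4=7. Surgical margins negative."
    )
    print(abstract_report(report))
```

In practice, the parsed output would still need human verification against the source report, consistent with the responsible-use guidelines the abstract recommends.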