Prasad Soumil, Langlie Jake, Pasick Luke, Chen Ryan, Franzmann Elizabeth
Department of Otolaryngology, University of Miami Miller School of Medicine, 1600 NW 10th Ave, Miami, FL 33131, United States of America.
Am J Otolaryngol. 2025 May 10;46(4):104667. doi: 10.1016/j.amjoto.2025.104667.
This study aimed to evaluate the diagnostic accuracy, comprehensiveness, and clinical relevance of two advanced artificial intelligence (AI) models, OpenAI's ChatGPT-4.0 and DeepSeek-R1, in the field of otolaryngology.
Five common otolaryngology procedures (adenotonsillectomy, tympanoplasty, endoscopic sinus surgery, parotidectomy, and total laryngectomy) were analyzed through standardized queries posed to both AI models. Because the prompts replicate questions that patients typically search online, the evaluation focuses on patient-facing informational adequacy. Responses were independently evaluated by two study members for accuracy, clinical relevance, and comprehensiveness, with discrepancies resolved through consensus. The analysis included comparison against clinical guidelines.
ChatGPT-4.0 generally provided detailed procedural insights, effectively covering indications, methodologies, risks, and recovery processes. However, it occasionally suggested excessive diagnostic imaging and omitted subtle yet significant surgical nuances. DeepSeek-R1 delivered concise, structured responses clearly categorizing indications, treatment alternatives, and procedural risks. Nonetheless, it frequently lacked detailed elaboration, omitting important surgical techniques and minor complications. For instance, DeepSeek-R1 omitted specifics such as hemostatic techniques in adenotonsillectomy and graft stabilization details in tympanoplasty. Neither model adequately addressed critical elements like comprehensive staging, detailed surgical planning, and long-term recovery nuances, especially for complex procedures such as total laryngectomy.
Both ChatGPT-4.0 and DeepSeek-R1 demonstrated significant diagnostic potential but revealed limitations in precision, comprehensiveness, and nuanced clinical reasoning. Their clinical utility remains restricted, highlighting a continued need for AI refinement to enhance patient-specific decision-making capabilities in otolaryngology.