Kim Hyunjae, Hwang Hyeon, Lee Jiwoo, Park Sihyeon, Kim Dain, Lee Taewhoo, Yoon Chanwoong, Sohn Jiwoong, Park Jungwoo, Reykhart Olga, Fetherston Thomas, Choi Donghee, Kwak Soo Heon, Chen Qingyu, Kang Jaewoo
Korea University, Seoul, Republic of Korea.
Yale University, New Haven, CT, USA.
NPJ Digit Med. 2025 May 2;8(1):240. doi: 10.1038/s41746-025-01653-8.
Small language models (SLMs) offer promise for medical applications by addressing the privacy and hardware constraints of large language models; however, their limited parameter counts (often fewer than ten billion) hinder the multi-step reasoning required for complex medical tasks. This study presents Meerkat, a new family of medical SLMs designed to remain lightweight while offering enhanced reasoning capabilities. We begin by designing an effective and efficient training method: we extract high-quality chain-of-thought reasoning paths from 18 medical textbooks and combine them with diverse instruction-following datasets from the medical domain, yielding 441K training examples in total. We then fine-tuned open-source SLMs on this curated dataset. Our Meerkat-7B and Meerkat-8B models outperformed their counterparts by 22.3% and 10.6%, respectively, across six exam datasets. They also raised scores on the NEJM Case Challenge from 7 to 16 and from 13 to 20, surpassing the human score of 13.7. In expert evaluations, they likewise excelled on all four metrics of reasoning ability: completeness, factuality, clarity, and logical consistency.
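The abstract does not spell out the training pipeline, but the described approach (supervised fine-tuning of an open-source SLM on distilled chain-of-thought examples) can be sketched as below. This is a minimal illustration, not the paper's implementation: the base checkpoint, the `cot_train.jsonl` file and its field names, the prompt template, and all hyperparameters are assumptions for the sake of the example.

```python
# Minimal sketch: supervised fine-tuning on chain-of-thought examples.
# Assumes a JSONL file of {"question", "reasoning", "answer"} records
# (hypothetical schema) distilled from textbook passages.
import json
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

BASE_MODEL = "mistralai/Mistral-7B-v0.1"  # assumption: any ~7B open SLM

class CoTDataset(Dataset):
    """Formats each record as question + step-by-step rationale + answer."""
    def __init__(self, path, tokenizer, max_len=2048):
        self.examples = [json.loads(line) for line in open(path)]
        self.tokenizer = tokenizer
        self.max_len = max_len

    def __len__(self):
        return len(self.examples)

    def __getitem__(self, idx):
        ex = self.examples[idx]
        text = (f"Question: {ex['question']}\n"
                f"Let's think step by step.\n{ex['reasoning']}\n"
                f"Answer: {ex['answer']}{self.tokenizer.eos_token}")
        return self.tokenizer(text, truncation=True, max_length=self.max_len)

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # causal LMs often lack a pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL,
                                             torch_dtype=torch.bfloat16)

train_ds = CoTDataset("cot_train.jsonl", tokenizer)  # hypothetical file
# mlm=False pads batches and sets labels for next-token prediction
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="meerkat-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,  # illustrative hyperparameters
        num_train_epochs=3,
        learning_rate=2e-5,
        bf16=True,  # requires bf16-capable hardware
        logging_steps=50,
    ),
    train_dataset=train_ds,
    data_collator=collator,
)
trainer.train()
```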