Mahmud Omar, Benjamin S. Glicksberg, Girish N. Nadkarni, Eyal Klang
The Windreich Department of Artificial Intelligence and Human Health, Mount Sinai Medical Center, New York, NY, USA; The Hasso Plattner Institute for Digital Health at Mount Sinai, Mount Sinai Health System, New York, NY, USA.
The Windreich Department of Artificial Intelligence and Human Health, Mount Sinai Medical Center, NY, USA; The Division of Data-Driven and Digital Medicine (D3M), Icahn School of Medicine at Mount Sinai, New York, NY, USA; The Hasso Plattner Institute for Digital Health at Mount Sinai, Mount Sinai Health System, NY, USA.
Comput Biol Med. 2025 Sep;196(Pt B):110731. doi: 10.1016/j.compbiomed.2025.110731. Epub 2025 Jul 16.
Large language models (LLMs) show promising accuracy on challenging tasks, including medical question answering. Yet direct gains from model upgrades can plateau, and reliability issues persist. We introduce the Iterative Consensus Ensemble (ICE), a proof-of-concept framework that refines answers through iterative reasoning and feedback among multiple LLMs. This ensemble method encourages diverse models to scrutinize each other's outputs, converging on a consensus solution. We tested ICE on four datasets comprising over 4000 multiple-choice questions drawn from a newly curated primary care exam set, established medical benchmarks, and a PhD-level reasoning dataset. Compared with initial single-model attempts, ICE improved final overall accuracy by up to 27%. It reached accuracies of 81% on medical subsets and 72% on multi-domain tasks, from initial scores of about 72% and 60%, respectively. On a particularly challenging PhD-level reasoning benchmark (GPQA-diamond), ICE raised performance from an initial 46.9% to 68.2% at the final consensus, a relative gain exceeding 45%. On a specialized family medicine dataset, ICE's results were statistically indistinguishable from those of a complex reasoning model (O1-preview), despite O1's higher cost and computational demands. Additional analyses showed that ICE's iterative consensus remained effective under different prompting styles. Our framework relies on standard LLMs and repeated prompting, requiring no specialized reward models or intricate token-level fusion. These findings show that iterative collaboration can turn LLM ensembles into more reliable, cost-efficient solvers, advancing performance in medical and general reasoning domains. Future refinements may integrate chain-of-thought steps or specialist models, extending this approach to more complex challenges as LLMs and benchmarks continue to evolve.
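The abstract describes ICE only at a high level: multiple LLMs answer a question, review one another's outputs over repeated rounds, and converge on a consensus. The paper's actual implementation is not reproduced here, but the general idea of an iterative majority-consensus loop can be sketched as follows. This is a hypothetical illustration: the `Model` stub, `ask`-style revision rule, and `iterative_consensus` function are assumptions, standing in for real LLM API calls and the authors' prompting scheme.

```python
from collections import Counter

# Hypothetical sketch of an iterative consensus loop in the spirit of ICE.
# In practice, Model.answer would call an actual LLM with a prompt that
# includes the peers' previous answers; here each "model" is a toy stub
# for a multiple-choice question that drifts toward a dominant majority.

class Model:
    def __init__(self, name, initial_answer):
        self.name = name
        self.current = initial_answer

    def answer(self, question, peer_answers=None):
        """Return an answer; when shown peers' answers, reconsider."""
        if peer_answers:
            majority, count = Counter(peer_answers).most_common(1)[0]
            # Crude stand-in for "scrutinizing each other's outputs":
            # adopt the peers' view only when a strict majority agrees.
            if count > len(peer_answers) // 2:
                self.current = majority
        return self.current

def iterative_consensus(models, question, max_rounds=5):
    """Run feedback rounds until all models agree or rounds run out."""
    answers = [m.answer(question) for m in models]
    for _ in range(max_rounds):
        if len(set(answers)) == 1:
            break  # consensus reached
        # Each model sees the other models' latest answers.
        answers = [
            m.answer(question, [a for j, a in enumerate(answers) if j != i])
            for i, m in enumerate(models)
        ]
    # Final answer: majority vote over the last round.
    return Counter(answers).most_common(1)[0][0]

models = [Model("A", "B"), Model("B", "B"), Model("C", "D")]
print(iterative_consensus(models, "Which drug is first-line?"))  # prints "B"
```

In this toy run, the dissenting model sees that both peers answered "B" and switches, so consensus is reached in one feedback round; with real LLMs, the revision step would instead be a follow-up prompt asking the model to critique the peers' reasoning before answering again.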