Gong Xinrong, Gao Jiaran, Sun Song, Zhong Zhijie, Shi Yifan, Zeng Huanqiang, Yang Kaixiang
IEEE J Biomed Health Inform. 2025 Apr 8;PP. doi: 10.1109/JBHI.2025.3558935.
The emergence of large language models (LLMs) has been a key enabler of technological innovation in healthcare. Users can conveniently obtain more accurate medical consultation services by leveraging the powerful knowledge-inference capability of LLMs. However, existing LLMs require users to upload explicit requests during remote healthcare consultations, which risks exposing personal privacy. Furthermore, the reliability of the response content generated by LLMs is not guaranteed. To tackle these challenges, this paper proposes a novel privacy-preserving LLM for user-activated health, called the Adaptive Compression-based Privacy-preserving LLM (ACP2LLM). Specifically, an adaptive token compression method based on information entropy is carefully designed so that ACP2LLM preserves user-sensitive information when invoking medical consultations from LLMs deployed on a cloud platform. Moreover, a multi-doctor, one-chief-physician mechanism is proposed to rationally split patients' requests and infer them collaboratively, achieving a privacy-utility trade-off. Notably, the proposed ACP2LLM also delivers highly competitive performance across a range of token compression rates. Extensive experiments on multiple medical question-answering datasets demonstrate that the proposed ACP2LLM offers strong privacy protection and high answer precision, outperforming current state-of-the-art LLM methods.
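To make the idea of entropy-guided token compression concrete, the following is a minimal, hypothetical sketch (not the ACP2LLM algorithm, whose details are not given in this abstract). It assumes whitespace tokenization, unigram-surprisal scoring from a small background corpus, and a heuristic that drops the lowest-information tokens to meet a target compression rate; the actual retention and redaction criteria used by ACP2LLM may differ.

import math
from collections import Counter

def token_surprisal(tokens, corpus_counts, corpus_total):
    # Approximate per-token information content, -log2 p(t), using
    # add-one-smoothed unigram counts from a background corpus.
    vocab = len(corpus_counts) + 1
    return [-math.log2((corpus_counts.get(t, 0) + 1) / (corpus_total + vocab))
            for t in tokens]

def compress(tokens, corpus_counts, corpus_total, keep_ratio=0.5):
    # Keep the highest-information tokens up to the target compression rate,
    # preserving their original order. This keep-high-surprisal heuristic is
    # an illustrative assumption, not the paper's stated policy.
    scores = token_surprisal(tokens, corpus_counts, corpus_total)
    k = max(1, int(len(tokens) * keep_ratio))
    keep = set(sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k])
    return [t for i, t in enumerate(tokens) if i in keep]

if __name__ == "__main__":
    # Hypothetical background statistics and query, for illustration only.
    background = "the patient reports mild pain the doctor suggests rest and fluids".split()
    counts, total = Counter(background), len(background)
    query = "the patient reports severe chest pain and shortness of breath".split()
    print(compress(query, counts, total, keep_ratio=0.6))

In a privacy-preserving pipeline such as the one described, the compression step would additionally need a policy for identifying and protecting sensitive tokens before the compressed request is sent to the cloud-hosted LLM; that policy is part of the paper's contribution and is not reproduced here.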