
PT-BitNet: Scaling up the 1-Bit large language model with post-training quantization.

Authors

Guo Yufei, Hao Zecheng, Shao Jiahang, Zhou Jie, Liu Xiaode, Tong Xin, Zhang Yuhan, Chen Yuanpei, Peng Weihang, Ma Zhe

Affiliations

Intelligent Science & Technology Academy of CASIC, China.

School of Computer Science, Peking University, China.

Publication

Neural Netw. 2025 Nov;191:107855. doi: 10.1016/j.neunet.2025.107855. Epub 2025 Jul 9.

DOI: 10.1016/j.neunet.2025.107855
PMID: 40669405
Abstract

The deployment of Large Language Models (LLMs) has been constrained by their substantial hardware requirements and associated costs. Quantization techniques have emerged as a promising solution to these challenges. Recently, BitNet [Wang et al., 2023] proposed using ternary values (+1, 0, -1) for weight quantization, showing particular promise in eliminating multiplication operations and thereby significantly reducing latency and energy consumption. However, BitNet's requirement to train models from scratch limits its scalability to models larger than 3 billion parameters. This paper introduces PT-BitNet, a novel post-training quantization method that extends the benefits of BitNet's ternary quantization to large-scale language models of up to 70B parameters. To effectively quantize the model parameters down to {+1, 0, -1}, we propose a two-stage algorithm. In the first stage, we transform the weight distribution into a quantization-friendly one; in the second stage, we optimize the weight elements in a block-wise manner. We demonstrate the effectiveness of PT-BitNet through comprehensive experiments on various model sizes and downstream tasks. Our results show that PT-BitNet achieves substantial reductions in model size and inference time with minimal impact on task performance. For example, PT-BitNet scales to a 70B-parameter LLM with 61% average downstream accuracy, significantly outperforming BitNet b1.58 at 51.2% average accuracy.
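To make the abstract's core idea concrete, below is a minimal, hypothetical Python/NumPy sketch of ternary weight quantization: weights are snapped to {-1, 0, +1} with a single scale, and the matrix product then reduces to signed accumulation of activations. The absmean scale follows the BitNet b1.58 convention, and the function names (ternary_quantize, ternary_matmul) are illustrative assumptions; PT-BitNet's actual two-stage algorithm (distribution transform followed by block-wise optimization) is not reproduced here.

# Hypothetical illustration of ternary weight quantization in the spirit of
# PT-BitNet / BitNet b1.58. The absmean scale and all names are assumptions;
# the paper's two-stage transform and block-wise optimization are not shown.
import numpy as np

def ternary_quantize(w: np.ndarray, eps: float = 1e-8):
    """Quantize a weight matrix to {-1, 0, +1} plus one per-tensor scale."""
    scale = float(np.mean(np.abs(w))) + eps       # absmean scale (b1.58-style)
    q = np.clip(np.round(w / scale), -1, 1)       # snap each weight to -1, 0, or +1
    return q.astype(np.int8), scale

def ternary_matmul(x: np.ndarray, q: np.ndarray, scale: float) -> np.ndarray:
    """x @ W with ternary W: only adds/subtracts of activations, one rescale."""
    pos = x @ (q == 1).astype(x.dtype)            # sum activations where W = +1
    neg = x @ (q == -1).astype(x.dtype)           # sum activations where W = -1
    return scale * (pos - neg)

# Usage: quantize a random weight block and compare against full precision.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)
x = rng.normal(size=(8, 256)).astype(np.float32)
q, s = ternary_quantize(w)
err = np.abs(x @ w - ternary_matmul(x, q, s)).mean()
print(f"mean |full-precision - ternary| = {err:.3f}")

Because every quantized weight is -1, 0, or +1, each inner product degenerates into signed accumulation of activations, which is where the claimed latency and energy savings come from; the residual accuracy gap of this naive round-and-clip scheme is precisely what the paper's two-stage optimization targets.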


Similar Articles

1
PT-BitNet: Scaling up the 1-Bit large language model with post-training quantization.
Neural Netw. 2025 Nov;191:107855. doi: 10.1016/j.neunet.2025.107855. Epub 2025 Jul 9.
2
Prescription of Controlled Substances: Benefits and Risks
3
NUPES: Non-Uniform Post-Training Quantization via Power Exponent Search.
IEEE Trans Pattern Anal Mach Intell. 2025 Nov;47(11):10012-10021. doi: 10.1109/TPAMI.2025.3593987.
4
LRQuant: A Unified and Learnable Framework to Post-training Quantization for Transformer-based Large Foundation Models.
IEEE Trans Pattern Anal Mach Intell. 2025 Aug 14;PP. doi: 10.1109/TPAMI.2025.3599479.
5
Leveraging Retrieval-Augmented Large Language Models for Dietary Recommendations With Traditional Chinese Medicine's Medicine Food Homology: Algorithm Development and Validation.
JMIR Med Inform. 2025 Aug 21;13:e75279. doi: 10.2196/75279.
6
A survey of low-bit large language models: Basics, systems, and algorithms.
Neural Netw. 2025 Jul 10;192:107856. doi: 10.1016/j.neunet.2025.107856.
7
Evaluating and Improving Syndrome Differentiation Thinking Ability in Large Language Models: Method Development Study.
JMIR Med Inform. 2025 Jun 20;13:e75103. doi: 10.2196/75103.
8
Implementing Large Language Models in Health Care: Clinician-Focused Review With Interactive Guideline.
J Med Internet Res. 2025 Jul 11;27:e71916. doi: 10.2196/71916.
9
Psychometric Evaluation of Large Language Model Embeddings for Personality Trait Prediction.
J Med Internet Res. 2025 Jul 8;27:e75347. doi: 10.2196/75347.
10
Distilling knowledge from graph neural networks trained on cell graphs to non-neural student models.
Sci Rep. 2025 Aug 10;15(1):29274. doi: 10.1038/s41598-025-13697-7.

Citing Articles

1
Binary-Weighted Neural Networks Using FeRAM Array for Low-Power AI Computing.
Nanomaterials (Basel). 2025 Jul 28;15(15):1166. doi: 10.3390/nano15151166.