{"id":1095489,"date":"2024-10-19T21:13:57","date_gmt":"2024-10-20T04:13:57","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&#038;p=1095489"},"modified":"2024-10-23T09:14:26","modified_gmt":"2024-10-23T16:14:26","slug":"vptq-extreme-low-bit-vector-post-training-quantization-for-large-language-models","status":"publish","type":"msr-research-item","link":"https:\/\/www.microsoft.com\/en-us\/research\/publication\/vptq-extreme-low-bit-vector-post-training-quantization-for-large-language-models\/","title":{"rendered":"VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models"},"content":{"rendered":"<p>Scaling model size significantly challenges the deployment and inference of Large Language Models (LLMs). Due to the redundancy in LLM weights, recent research has focused on pushing weight-only quantization to extremely low bit-widths (even down to 2 bits). This reduces memory requirements, lowers storage costs, and decreases memory bandwidth needs during inference. However, due to numerical representation limitations, traditional scalar-based weight quantization struggles to reach such extremely low bit-widths. Recent research on Vector Quantization (VQ) for LLMs has demonstrated the potential for extremely low-bit model quantization by compressing vectors into indices using lookup tables.<\/p>\n<p>In this paper, we introduce Vector Post-Training Quantization (VPTQ) for extremely low-bit quantization of LLMs. We use Second-Order Optimization to formulate the LLM VQ problem and guide our quantization algorithm design by solving the optimization. We further refine the weights using Channel-Independent Second-Order Optimization for a granular VQ. In addition, by decomposing the optimization problem, we propose a concise and effective codebook initialization algorithm. We also extend VPTQ to support residual and outlier quantization, which enhances model accuracy and further compresses the model. 
Our experimental results show that VPTQ reduces model quantization perplexity by 0.01&#8211;0.34 on LLaMA-2, 0.38&#8211;0.68 on Mistral-7B, and 4.41&#8211;7.34 on LLaMA-3 over SOTA at 2-bit, with an average accuracy improvement of 0.79&#8211;1.5% on LLaMA-2, 1% on Mistral-7B, and 11&#8211;22% on LLaMA-3 on QA tasks. We use only 10.4&#8211;18.6% of the quantization algorithm execution time, resulting in a 1.6&#8211;1.8\u00d7 increase in inference throughput compared to SOTA.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Scaling model size significantly challenges the deployment and inference of Large Language Models (LLMs). Due to the redundancy in LLM weights, recent research has focused on pushing weight-only quantization to extremely low-bit (even down to 2 bits). It reduces memory requirements, optimizes storage costs, and decreases memory bandwidth needs during inference. However, due to numerical [&hellip;]<\/p>\n","protected":false},"featured_media":0,"template":"","meta":{"msr-url-field":"","msr-podcast-episode":"","msrModifiedDate":"","msrModifiedDateEnabled":false,"ep_exclude_from_search":false,"_classifai_error":"","msr-author-ordering":null,"msr_publishername":"","msr_publisher_other":"","msr_booktitle":"","msr_chapter":"","msr_edition":"","msr_editors":"","msr_how_published":"","msr_isbn":"","msr_issue":"","msr_journal":"","msr_number":"","msr_organization":"","msr_pages_string":"","msr_page_range_start":"","msr_page_range_end":"","msr_series":"","msr_volume":"","msr_copyright":"","msr_conference_name":"","msr_doi":"","msr_arxiv_id":"","msr_s2_paper_id":"","msr_mag_id":"","msr_pubmed_id":"","msr_other_authors":"","msr_other_contributors":"","msr_speaker":"","msr_award":"","msr_affiliation":"","msr_institution":"","msr_host":"","msr_version":"","msr_duration":"","msr_original_fields_of_study":"","msr_release_tracker_id":"","msr_s2_match_type":"","msr_citation_count_updated":"","msr_published_date":"2024-11-1","msr_highlight_text":"","msr_notes":"","msr_longbiography":"","msr_publicationurl":"","msr_external_url":"","msr_secondary_video_url":"","msr_conference_url":"https:\/\/2024.emnlp.org\/","msr_journal_url":"","msr_s2_pdf_url":"","msr_year":0,"msr_citation_count":0,"msr_influential_citations":0,"msr_reference_count":0,"msr_s2_match_confidence":0,"msr_microsoftintellectualproperty":true,"msr_s2_open
_access":false,"msr_s2_author_ids":[],"msr_pub_ids":[],"msr_hide_image_in_river":0,"footnotes":""},"msr-research-highlight":[],"research-area":[13556,13547],"msr-publication-type":[193716],"msr-publisher":[],"msr-focus-area":[],"msr-locale":[268875],"msr-post-option":[269148,269142],"msr-field-of-study":[],"msr-conference":[],"msr-journal":[],"msr-impact-theme":[],"msr-pillar":[],"class_list":["post-1095489","msr-research-item","type-msr-research-item","status-publish","hentry","msr-research-area-artificial-intelligence","msr-research-area-systems-and-networking","msr-locale-en_us","msr-post-option-approved-for-river","msr-post-option-include-in-river"],"msr_publishername":"","msr_edition":"","msr_affiliation":"","msr_published_date":"2024-11-1","msr_host":"","msr_duration":"","msr_version":"","msr_speaker":"","msr_other_contributors":"","msr_booktitle":"","msr_pages_string":"","msr_chapter":"","msr_isbn":"","msr_journal":"","msr_volume":"","msr_number":"","msr_editors":"","msr_series":"","msr_issue":"","msr_organization":"","msr_how_published":"","msr_notes":"","msr_highlight_text":"","msr_release_tracker_id":"","msr_original_fields_of_study":"","msr_download_urls":"","msr_external_url":"","msr_secondary_video_url":"","msr_longbiography":"","msr_microsoftintellectualproperty":1,"msr_main_download":"","msr_publicationurl":"","msr_doi":"","msr_publication_uploader":[{"type":"url","viewUrl":"false","id":"false","title":"https:\/\/arxiv.org\/abs\/2409.17066","label_id":"243109","label":0}],"msr_related_uploader":[{"type":"url","viewUrl":"false","id":"false","title":"https:\/\/github.com\/microsoft\/VPTQ","label_id":"264520","label":0}],"msr_citation_count":0,"msr_citation_count_updated":"","msr_s2_paper_id":"","msr_influential_citations":0,"msr_reference_count":0,"msr_arxiv_id":"","msr_s2_author_ids":[],"msr_s2_open_access":false,"msr_s2_pdf_url":null,"msr_attachments":[],"msr-author-ordering":[{"type":"text","value":"Yifei 
Liu","user_id":0,"rest_url":false},{"type":"text","value":"Jicheng Wen","user_id":0,"rest_url":false},{"type":"user_nicename","value":"Yang Wang","user_id":42039,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Yang Wang"},{"type":"text","value":"Shengyu Ye","user_id":0,"rest_url":false},{"type":"user_nicename","value":"Li Lyna Zhang","user_id":38121,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Li Lyna Zhang"},{"type":"user_nicename","value":"Ting Cao","user_id":37446,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Ting Cao"},{"type":"text","value":"Cheng Li","user_id":0,"rest_url":false},{"type":"user_nicename","value":"Mao Yang","user_id":32798,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Mao Yang"}],"msr_impact_theme":[],"msr_research_lab":[199560,1012650],"msr_event":[],"msr_group":[510017],"msr_project":[],"publication":[],"video":[],"msr-tool":[1087395],"msr_publication_type":"inproceedings","related_content":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item\/1095489","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-research-item"}],"version-history":[{"count":3,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item\/1095489\/revisions"}],"predecessor-version":[{"id":1096551,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item\/1095489\/revisions\/1096551"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1095489"}],"wp:term":[{"taxonomy":"msr-resear
ch-highlight","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-highlight?post=1095489"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=1095489"},{"taxonomy":"msr-publication-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-publication-type?post=1095489"},{"taxonomy":"msr-publisher","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-publisher?post=1095489"},{"taxonomy":"msr-focus-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-focus-area?post=1095489"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=1095489"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=1095489"},{"taxonomy":"msr-field-of-study","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-field-of-study?post=1095489"},{"taxonomy":"msr-conference","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-conference?post=1095489"},{"taxonomy":"msr-journal","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-journal?post=1095489"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1095489"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=1095489"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}