{"id":1031571,"date":"2024-05-06T18:16:06","date_gmt":"2024-05-07T01:16:06","guid":{"rendered":"https:\/\/www.microsoft.com\/en-us\/research\/?post_type=msr-research-item&#038;p=1031571"},"modified":"2024-05-06T18:16:06","modified_gmt":"2024-05-07T01:16:06","slug":"cachegen-fast-context-loading-for-language-model-applications-via-kv-cache-streaming","status":"publish","type":"msr-research-item","link":"https:\/\/www.microsoft.com\/en-us\/research\/publication\/cachegen-fast-context-loading-for-language-model-applications-via-kv-cache-streaming\/","title":{"rendered":"CacheGen: Fast Context Loading for Language Model Applications via KV Cache Streaming"},"content":{"rendered":"<p><span dir=\"ltr\" role=\"presentation\">As large language models (LLMs) take on complex tasks, <\/span><span dir=\"ltr\" role=\"presentation\">their inputs are supplemented with<\/span> <span dir=\"ltr\" role=\"presentation\">longer contexts<\/span> <span dir=\"ltr\" role=\"presentation\">that in<\/span><span dir=\"ltr\" role=\"presentation\">corporate domain knowledge or user-specific information. <\/span><span dir=\"ltr\" role=\"presentation\">Yet using long contexts poses a challenge for responsive LLM <\/span><span dir=\"ltr\" role=\"presentation\">systems, as nothing can be generated until the whole context <\/span><span dir=\"ltr\" role=\"presentation\">is processed by the LLM. While the context-processing delay <\/span><span dir=\"ltr\" role=\"presentation\">can be reduced by reusing the KV cache of a context across <\/span><span dir=\"ltr\" role=\"presentation\">different inputs, fetching the KV cache, which contains large <\/span><span dir=\"ltr\" role=\"presentation\">tensors, over the network can cause extra network delays. <\/span><\/p>\n<p><span dir=\"ltr\" role=\"presentation\">CacheGen is a fast context-loading module for LLM sys<\/span><span dir=\"ltr\" role=\"presentation\">tems. First, CacheGen uses a custom tensor encoder, which <\/span><span dir=\"ltr\" role=\"presentation\">embraces KV cache\u2019s distributional properties, to<\/span> <span dir=\"ltr\" role=\"presentation\">encode<\/span> <span dir=\"ltr\" role=\"presentation\">a <\/span><span dir=\"ltr\" role=\"presentation\">KV cache into more compact bitstream representations with <\/span><span dir=\"ltr\" role=\"presentation\">negligible encoding\/decoding overhead. This reduces the <\/span><span dir=\"ltr\" role=\"presentation\">bandwidth demand to fetch the KV cache. Second, to main<\/span><span dir=\"ltr\" role=\"presentation\">tain low context-loading delay and high generation qual<\/span><span dir=\"ltr\" role=\"presentation\">ity, CacheGen<\/span> <span dir=\"ltr\" role=\"presentation\">adapts<\/span> <span dir=\"ltr\" role=\"presentation\">the streaming strategies to cope with <\/span><span dir=\"ltr\" role=\"presentation\">changes in available bandwidth. When available bandwidth <\/span><span dir=\"ltr\" role=\"presentation\">drops, CacheGen may raise the compression level for a part <\/span><span dir=\"ltr\" role=\"presentation\">of the context or choose to recompute its KV cache on the fly.<\/span><\/p>\n<p><span dir=\"ltr\" role=\"presentation\">We test CacheGen on four popular LLMs of various sizes and <\/span><span dir=\"ltr\" role=\"presentation\">four datasets (662 contexts in total). 
We test CacheGen on four popular LLMs of various sizes and four datasets (662 contexts in total). Compared to recent systems that reuse the KV cache, CacheGen reduces the KV cache size by 3.5-4.3× and the total delay in fetching and processing contexts by 3.2-3.7×, while having negligible impact on LLM response quality in terms of accuracy or perplexity.

Published at ACM SIGCOMM 2024 (https://conferences.sigcomm.org/sigcomm/2024/), August 2024.
","msr_number":"","msr_editors":"","msr_series":"","msr_issue":"","msr_organization":"ACM SIGCOMM","msr_how_published":"","msr_notes":"","msr_highlight_text":"","msr_release_tracker_id":"","msr_original_fields_of_study":"","msr_download_urls":"","msr_external_url":"","msr_secondary_video_url":"","msr_longbiography":"","msr_microsoftintellectualproperty":1,"msr_main_download":"","msr_publicationurl":"","msr_doi":"","msr_publication_uploader":[{"type":"url","viewUrl":"false","id":"false","title":"https:\/\/arxiv.org\/abs\/2310.07240","label_id":"243103","label":0}],"msr_related_uploader":"","msr_citation_count":0,"msr_citation_count_updated":"","msr_s2_paper_id":"","msr_influential_citations":0,"msr_reference_count":0,"msr_arxiv_id":"","msr_s2_author_ids":[],"msr_s2_open_access":false,"msr_s2_pdf_url":null,"msr_attachments":[],"msr-author-ordering":[{"type":"text","value":"Yuhan Liu","user_id":0,"rest_url":false},{"type":"text","value":"Hanchen Li","user_id":0,"rest_url":false},{"type":"text","value":"Yihua Cheng","user_id":0,"rest_url":false},{"type":"text","value":"Siddhant Ray","user_id":0,"rest_url":false},{"type":"text","value":"Yuyang Huang","user_id":0,"rest_url":false},{"type":"text","value":"Qizheng Zhang","user_id":0,"rest_url":false},{"type":"text","value":"Kuntai Du","user_id":0,"rest_url":false},{"type":"text","value":"Jiayi Yao","user_id":0,"rest_url":false},{"type":"user_nicename","value":"Shan Lu","user_id":43215,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Shan Lu"},{"type":"user_nicename","value":"Ganesh Ananthanarayanan","user_id":31834,"rest_url":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/microsoft-research\/v1\/researchers?person=Ganesh Ananthanarayanan"},{"type":"text","value":"Michael Maire","user_id":0,"rest_url":false},{"type":"text","value":"Henry Hoffmann","user_id":0,"rest_url":false},{"type":"text","value":"Ari Holtzman","user_id":0,"rest_url":false},{"type":"text","value":"Junchen 
Jiang","user_id":0,"rest_url":false}],"msr_impact_theme":[],"msr_research_lab":[],"msr_event":[],"msr_group":[144927],"msr_project":[],"publication":[],"video":[],"msr-tool":[],"msr_publication_type":"inproceedings","related_content":[],"_links":{"self":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item\/1031571","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item"}],"about":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/types\/msr-research-item"}],"version-history":[{"count":1,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item\/1031571\/revisions"}],"predecessor-version":[{"id":1031586,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-item\/1031571\/revisions\/1031586"}],"wp:attachment":[{"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/media?parent=1031571"}],"wp:term":[{"taxonomy":"msr-research-highlight","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-research-highlight?post=1031571"},{"taxonomy":"msr-research-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/research-area?post=1031571"},{"taxonomy":"msr-publication-type","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-publication-type?post=1031571"},{"taxonomy":"msr-publisher","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-publisher?post=1031571"},{"taxonomy":"msr-focus-area","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-focus-area?post=1031571"},{"taxonomy":"msr-locale","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-locale?post=1031571"},{"taxonomy":"msr-post-option","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-post-option?post=1031571"},{"taxonomy":"msr-field-of-study","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-field-of-study?post=1031571"},{"taxonomy":"msr-conference","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-conference?post=1031571"},{"taxonomy":"msr-journal","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-journal?post=1031571"},{"taxonomy":"msr-impact-theme","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-impact-theme?post=1031571"},{"taxonomy":"msr-pillar","embeddable":true,"href":"https:\/\/www.microsoft.com\/en-us\/research\/wp-json\/wp\/v2\/msr-pillar?post=1031571"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}