Generative Caching for Structurally Similar Prompts and Responses
- Sarthak Chakraborty
- Suman Nath
- Xuchao Zhang
- Chetan Bansal
- Indranil Gupta
NeurIPS 2025
Large Language Models (LLMs) are increasingly used to plan, reason, and execute tasks across a wide range of scenarios. Use cases such as repeatable workflows, chatbots, and AI agents often involve recurring tasks and tend to issue similar prompts to the LLM. This opens up opportunities for caching. However, structurally similar prompts can differ in subtle yet important ways that are also reflected in their responses: exact prompt matching fails to produce hits on them, while semantic caching techniques may return incorrect cached responses because they ignore these variations. To address this, we introduce GenCache, a generative cache that produces variation-aware responses for structurally similar prompts. It identifies the pattern by which responses are generated for structurally similar prompts and reuses that pattern to construct responses for new requests. We show that GenCache achieves an 83% cache hit rate with minimal negative (incorrect) hits on datasets without prompt repetition. In agentic workflows, it improves cache hit rate by 20% and reduces end-to-end execution latency by 34% for one workflow compared to standard prompt matching.
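To make the idea concrete, below is a minimal, hypothetical Python sketch of variation-aware generative caching. It is not the paper's algorithm: it assumes structurally similar prompts differ in a single contiguous slot and that the varying slot appears verbatim in the cached response, so a fill-in-the-blank pattern can be induced from one cached example. The class name `NaiveGenerativeCache` and its similarity threshold are illustrative inventions, not part of GenCache.

```python
import difflib


class NaiveGenerativeCache:
    """Toy variation-aware cache (illustrative only, not GenCache): induces a
    response pattern from a cached (prompt, response) pair and applies it to
    structurally similar prompts."""

    def __init__(self, similarity_threshold: float = 0.8):
        self.entries: list[tuple[str, str]] = []    # cached (prompt, response) pairs
        self.threshold = similarity_threshold

    def put(self, prompt: str, response: str) -> None:
        self.entries.append((prompt, response))

    def get(self, prompt: str) -> str | None:
        """Synthesize a response for `prompt` if a cached pattern applies."""
        for cached_prompt, cached_response in self.entries:
            matcher = difflib.SequenceMatcher(None, cached_prompt, prompt)
            if matcher.ratio() < self.threshold:
                continue                            # not structurally similar
            diffs = [op for op in matcher.get_opcodes() if op[0] != "equal"]
            if len(diffs) != 1:
                continue                            # more than one varying slot: give up
            _, i1, i2, j1, j2 = diffs[0]
            old_slot = cached_prompt[i1:i2]         # variation in the cached prompt
            new_slot = prompt[j1:j2]                # variation in the new prompt
            # Reuse the pattern only when the varying slot shows up verbatim in
            # the cached response; otherwise we cannot safely produce a
            # variation-aware answer and must fall through to the LLM.
            if old_slot and old_slot in cached_response:
                return cached_response.replace(old_slot, new_slot)
        return None                                 # cache miss: call the LLM


if __name__ == "__main__":
    cache = NaiveGenerativeCache()
    # One observed LLM interaction seeds the pattern.
    cache.put("List the pods in namespace staging.", "kubectl get pods -n staging")
    # A structurally similar prompt gets a synthesized, variation-aware hit.
    print(cache.get("List the pods in namespace prod."))   # kubectl get pods -n prod
    print(cache.get("What is the capital of France?"))     # None -> call the LLM
```

A real generative cache would need far more robust pattern extraction and validation to keep negative hits low; the sketch only illustrates why variation-aware reuse can hit where exact prompt matching misses and where semantic caching would return a stale, incorrect response.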