TY - GEN
T1 - Stateful Large Language Model Serving with Pensieve
AU - Yu, Lingfan
AU - Lin, Jinkun
AU - Li, Jinyang
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/3/30
Y1 - 2025/3/30
N2 - Large Language Models (LLMs) are wildly popular today and it is important to serve them efficiently. Existing LLM serving systems are stateless across requests. Consequently, when LLMs are used in the common setting of multi-turn conversations, a growing log of the conversation history must be processed alongside any request by the serving system at each turn, resulting in repeated processing. In this paper, we design Pensieve, a system optimized for multi-turn conversation LLM serving. Pensieve maintains the conversation state across requests by caching previously processed history to avoid duplicate processing. Pensieve’s multi-tier caching strategy can utilize both GPU and CPU memory to efficiently store and retrieve cached data. Pensieve also generalizes the recent PagedAttention kernel to support attention between multiple input tokens with a GPU cache spread over non-contiguous memory. Our evaluation shows that Pensieve can achieve 1.14-3.0× the throughput of vLLM and TensorRT-LLM and significantly reduce latency.
AB - Large Language Models (LLMs) are wildly popular today and it is important to serve them efficiently. Existing LLM serving systems are stateless across requests. Consequently, when LLMs are used in the common setting of multi-turn conversations, a growing log of the conversation history must be processed alongside any request by the serving system at each turn, resulting in repeated processing. In this paper, we design Pensieve, a system optimized for multi-turn conversation LLM serving. Pensieve maintains the conversation state across requests by caching previously processed history to avoid duplicate processing. Pensieve’s multi-tier caching strategy can utilize both GPU and CPU memory to efficiently store and retrieve cached data. Pensieve also generalizes the recent PagedAttention kernel to support attention between multiple input tokens with a GPU cache spread over non-contiguous memory. Our evaluation shows that Pensieve can achieve 1.14-3.0× the throughput of vLLM and TensorRT-LLM and significantly reduce latency.
KW - Cache
KW - LLM Serving
KW - Multi-turn Conversations
UR - http://www.scopus.com/inward/record.url?scp=105002257848&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105002257848&partnerID=8YFLogxK
U2 - 10.1145/3689031.3696086
DO - 10.1145/3689031.3696086
M3 - Conference contribution
AN - SCOPUS:105002257848
T3 - EuroSys 2025 - Proceedings of the 2025 20th European Conference on Computer Systems
SP - 144
EP - 158
BT - EuroSys 2025 - Proceedings of the 2025 20th European Conference on Computer Systems
PB - Association for Computing Machinery, Inc.
T2 - 20th European Conference on Computer Systems, EuroSys 2025, co-located with the 30th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2025
Y2 - 30 March 2025 through 3 April 2025
ER -