TY - GEN
T1 - Cocktail Edge Caching
T2 - 40th IEEE Conference on Computer Communications, INFOCOM 2021
AU - Zong, Tongyu
AU - Li, Chen
AU - Lei, Yuanyuan
AU - Li, Guangyu
AU - Cao, Houwei
AU - Liu, Yong
N1 - Publisher Copyright:
© 2021 IEEE.
PY - 2021/5/10
Y1 - 2021/5/10
N2 - Edge caching will play a critical role in facilitating emerging content-rich applications. However, it faces many new challenges, in particular highly dynamic content popularity and heterogeneous caching configurations. In this paper, we propose Cocktail Edge Caching (CEC), which tackles the dynamic popularity and heterogeneity through ensemble learning. Instead of trying to find a single dominating caching policy for all caching scenarios, we employ an ensemble of constituent caching policies and adaptively select the best-performing policy to control the cache. Towards this goal, we first show through formal analysis and experiments that different variations of the LFU and LRU policies have complementary performance in different caching scenarios. We further develop a novel caching algorithm that enhances LFU/LRU with deep recurrent neural network (LSTM) based time-series analysis. Finally, we develop a deep reinforcement learning agent that adaptively combines base caching policies according to their virtual hit ratios on parallel virtual caches. Through extensive experiments driven by real content requests from two large video streaming platforms, we demonstrate that CEC not only consistently outperforms all single policies but also improves their robustness. CEC generalizes well to different caching scenarios with low computational overhead for deployment.
AB - Edge caching will play a critical role in facilitating emerging content-rich applications. However, it faces many new challenges, in particular highly dynamic content popularity and heterogeneous caching configurations. In this paper, we propose Cocktail Edge Caching (CEC), which tackles the dynamic popularity and heterogeneity through ensemble learning. Instead of trying to find a single dominating caching policy for all caching scenarios, we employ an ensemble of constituent caching policies and adaptively select the best-performing policy to control the cache. Towards this goal, we first show through formal analysis and experiments that different variations of the LFU and LRU policies have complementary performance in different caching scenarios. We further develop a novel caching algorithm that enhances LFU/LRU with deep recurrent neural network (LSTM) based time-series analysis. Finally, we develop a deep reinforcement learning agent that adaptively combines base caching policies according to their virtual hit ratios on parallel virtual caches. Through extensive experiments driven by real content requests from two large video streaming platforms, we demonstrate that CEC not only consistently outperforms all single policies but also improves their robustness. CEC generalizes well to different caching scenarios with low computational overhead for deployment.
KW - Deep reinforcement learning
KW - Edge caching
KW - LSTM
KW - Video
UR - http://www.scopus.com/inward/record.url?scp=85111922990&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85111922990&partnerID=8YFLogxK
U2 - 10.1109/INFOCOM42981.2021.9488910
DO - 10.1109/INFOCOM42981.2021.9488910
M3 - Conference contribution
AN - SCOPUS:85111922990
T3 - Proceedings - IEEE INFOCOM
BT - INFOCOM 2021 - IEEE Conference on Computer Communications
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 10 May 2021 through 13 May 2021
ER -