TY - JOUR
T1 - Neural Video Coding Using Multiscale Motion Compensation and Spatiotemporal Context Model
AU - Liu, Haojie
AU - Lu, Ming
AU - Ma, Zhan
AU - Wang, Fan
AU - Xie, Zhihuang
AU - Cao, Xun
AU - Wang, Yao
N1 - Funding Information:
Manuscript received July 9, 2020; revised September 22, 2020; accepted October 27, 2020. Date of publication November 3, 2020; date of current version August 4, 2021. This work was supported in part by the National Natural Science Foundation of China under Grant 62022038, in part by the China Scholarship Council under Grant 201906190086, and in part by OPPO Research Fund. This article was recommended by Associate Editor H. Meng. (Corresponding authors: Zhan Ma; Xun Cao.) Haojie Liu, Ming Lu, Zhan Ma, and Xun Cao are with the School of Electronic Science and Engineering, Nanjing University, Nanjing 210093, China (e-mail: haojie@smail.nju.edu.cn; luming@smail.nju.edu.cn; mazhan@nju.edu.cn; caoxun@nju.edu.cn).
Publisher Copyright:
© 1991-2012 IEEE.
PY - 2021/8
Y1 - 2021/8
N2 - Over the past two decades, traditional block-based video coding has made remarkable progress and spawned a series of well-known standards such as MPEG-4, H.264/AVC, and H.265/HEVC. Meanwhile, deep neural networks (DNNs) have shown powerful capacity for visual content understanding, feature extraction, and compact representation. Previous works have explored learnt video coding in an end-to-end manner and shown great potential compared with traditional methods. In this paper, we propose an end-to-end deep neural video coding framework (NVC), which uses variational autoencoders (VAEs) with joint spatial and temporal prior aggregation (PA) to exploit the correlations in intra-frame pixels, inter-frame motions, and inter-frame compensation residuals, respectively. Novel features of NVC include: 1) to estimate and compensate motion over a large range of magnitudes, we propose an unsupervised multiscale motion compensation network (MS-MCN) together with a pyramid decoder in the VAE that codes motion features and generates multiscale flow fields; 2) we design a novel adaptive spatiotemporal context model for efficient entropy coding of motion information; 3) we adopt nonlocal attention modules (NLAM) at the bottlenecks of the VAEs for implicit adaptive feature extraction and activation, leveraging their high transformation capacity and unequal weighting of joint global and local information; and 4) we introduce multi-module optimization and a multi-frame training strategy to minimize temporal error propagation among P-frames. NVC is evaluated under low-delay causal settings and compared with H.265/HEVC, H.264/AVC, and other learnt video compression methods following the common test conditions, demonstrating consistent gains across all popular test sequences for both PSNR and MS-SSIM distortion metrics.
KW - Neural video coding
KW - multiscale compressed flows
KW - multiscale motion compensation
KW - neural network
KW - nonlocal attention
KW - pyramid decoder
KW - spatiotemporal priors
KW - temporal error propagation
UR - http://www.scopus.com/inward/record.url?scp=85112837466&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85112837466&partnerID=8YFLogxK
U2 - 10.1109/TCSVT.2020.3035680
DO - 10.1109/TCSVT.2020.3035680
M3 - Article
AN - SCOPUS:85112837466
VL - 31
SP - 3182
EP - 3196
JO - IEEE Transactions on Circuits and Systems for Video Technology
JF - IEEE Transactions on Circuits and Systems for Video Technology
SN - 1051-8215
IS - 8
M1 - 9247134
ER -