TY - GEN
T1 - Perceptual image quality assessment using a normalized Laplacian pyramid
AU - Laparra, Valero
AU - Ballé, Johannes
AU - Berardino, Alexander
AU - Simoncelli, Eero P.
N1 - Funding Information:
This work was supported by the Howard Hughes Medical Institute, and the APOSTD/2014/095 Generalitat Valenciana grant (Spain).
PY - 2016
Y1 - 2016
N2 - We present an image quality metric based on the transformations associated with the early visual system: local luminance subtraction and local gain control. Images are decomposed using a Laplacian pyramid, which subtracts a local estimate of the mean luminance at multiple scales. Each pyramid coefficient is then divided by a local estimate of amplitude (a weighted sum of absolute values of neighbors), where the weights are optimized for prediction of amplitude using (undistorted) images from a separate database. We define the quality of a distorted image, relative to its undistorted original, as the root mean squared error in this "normalized Laplacian" domain. We show that both the luminance subtraction and amplitude division stages lead to significant reductions in redundancy relative to the original image pixels. We also show that the resulting quality metric provides a better account of human perceptual judgements than either MS-SSIM or a recently published gain-control metric based on oriented filters.
AB - We present an image quality metric based on the transformations associated with the early visual system: local luminance subtraction and local gain control. Images are decomposed using a Laplacian pyramid, which subtracts a local estimate of the mean luminance at multiple scales. Each pyramid coefficient is then divided by a local estimate of amplitude (a weighted sum of absolute values of neighbors), where the weights are optimized for prediction of amplitude using (undistorted) images from a separate database. We define the quality of a distorted image, relative to its undistorted original, as the root mean squared error in this "normalized Laplacian" domain. We show that both the luminance subtraction and amplitude division stages lead to significant reductions in redundancy relative to the original image pixels. We also show that the resulting quality metric provides a better account of human perceptual judgements than either MS-SSIM or a recently published gain-control metric based on oriented filters.
UR - http://www.scopus.com/inward/record.url?scp=85011092689&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85011092689&partnerID=8YFLogxK
U2 - 10.2352/ISSN.2470-1173.2016.16HVEI-103
DO - 10.2352/ISSN.2470-1173.2016.16HVEI-103
M3 - Conference contribution
AN - SCOPUS:85011092689
T3 - Human Vision and Electronic Imaging 2016, HVEI 2016
SP - 43
EP - 48
BT - Human Vision and Electronic Imaging 2016, HVEI 2016
A2 - Pappas, Thrasyvoulos N.
A2 - de Ridder, Huib
A2 - Rogowitz, Bernice E.
PB - Society for Imaging Science and Technology
T2 - Human Vision and Electronic Imaging 2016, HVEI 2016
Y2 - 14 February 2016 through 18 February 2016
ER -