Blind Image Quality Assessment by Learning from Multiple Annotators

Kede Ma, Xuelin Liu, Yuming Fang, Eero P. Simoncelli

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Models for image quality assessment (IQA) are generally optimized and tested by comparison with human ratings, which are expensive to obtain. Here, we develop a blind IQA (BIQA) model, and a method of training it without human ratings. We first generate a large number of corrupted image pairs, and use a set of existing IQA models to identify which image of each pair has higher quality. We then train a convolutional neural network to estimate perceived image quality along with its uncertainty, optimizing for consistency with these binary labels. The reliability of each IQA annotator is also estimated during training. Experiments demonstrate that our model outperforms state-of-the-art BIQA models in terms of correlation with human ratings on existing databases, as well as in the group maximum differentiation (gMAD) competition.
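
The training objective described in the abstract can be pictured concretely. Below is a minimal PyTorch sketch, assuming a Thurstone-style probability model: the network predicts a quality mean and an uncertainty per image, the probability that one image of a pair is better follows the Gaussian CDF of their standardized score difference, and each annotator's binary labels are weighted by a reliability term. All names here (QualityNet, pairwise_nll, the reliability weights) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a pairwise BIQA training step with uncertainty and
# per-annotator reliability weighting. Assumes a Thurstone-style model;
# architecture and names are placeholders, not the paper's code.

import torch
import torch.nn as nn

class QualityNet(nn.Module):
    """Tiny CNN mapping an image to a quality mean and uncertainty."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)  # -> [mu, log_sigma]

    def forward(self, x):
        mu, log_sigma = self.head(self.features(x)).unbind(dim=-1)
        return mu, log_sigma.exp()

def pairwise_nll(mu_a, sigma_a, mu_b, sigma_b, labels, reliability):
    """Negative log-likelihood of binary 'A beats B' labels.

    labels:      (batch, M) in {0, 1}, one column per IQA annotator.
    reliability: (M,) weights in (0, 1], one per annotator.
    """
    normal = torch.distributions.Normal(0.0, 1.0)
    # Probability that image A has higher quality than image B.
    z = (mu_a - mu_b) / torch.sqrt(sigma_a ** 2 + sigma_b ** 2 + 1e-8)
    p = normal.cdf(z).clamp(1e-6, 1 - 1e-6).unsqueeze(-1)  # (batch, 1)
    nll = -(labels * p.log() + (1 - labels) * (1 - p).log())
    return (reliability * nll).mean()

# Usage: score both images of each pair, then penalize disagreement with
# the binary annotations, down-weighting unreliable annotators.
net = QualityNet()
img_a, img_b = torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4, 6)).float()  # 6 pseudo-annotators
reliability = torch.full((6,), 0.9)           # placeholder reliabilities
mu_a, sig_a = net(img_a)
mu_b, sig_b = net(img_b)
loss = pairwise_nll(mu_a, sig_a, mu_b, sig_b, labels, reliability)
loss.backward()
```

In the paper, annotator reliabilities are estimated jointly during training; here they are fixed placeholders to keep the sketch short.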

Original language: English (US)
Title of host publication: 2019 IEEE International Conference on Image Processing, ICIP 2019 - Proceedings
Publisher: IEEE Computer Society
Pages: 2344-2348
Number of pages: 5
ISBN (Electronic): 9781538662496
DOIs
State: Published - Sep 2019
Event: 26th IEEE International Conference on Image Processing, ICIP 2019 - Taipei, Taiwan, Province of China
Duration: Sep 22 2019 – Sep 25 2019

Publication series

Name: Proceedings - International Conference on Image Processing, ICIP
Volume: 2019-September
ISSN (Print): 1522-4880

Conference

Conference: 26th IEEE International Conference on Image Processing, ICIP 2019
Country/Territory: Taiwan, Province of China
City: Taipei
Period: 9/22/19 – 9/25/19

Keywords

  • Blind image quality assessment
  • convolutional neural networks
  • gMAD competition

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Signal Processing
