Human Perception-based Image Enhancement Using a Deep Generative Model

  • Amir Nazemi
  • Shima Kamyab
  • Zohreh Azimifar
  • Paul Fieguth

Abstract

In this paper we propose a deep model for perceptual image enhancement based on generative modeling. The proposed framework is inspired by the Conditional Variational AutoEncoder (CVAE), a well-known deep generative structure. Generative models provide efficient regularizers that control the output distribution using information from the input data, which leads to accurate and visually plausible results with few parameters. Additionally, we propose to use an image quality assessment (IQA) network to determine the best result among those obtained by the implemented CVAEs. The proposed CVAE structure models the histogram vectors of the different color channels and parameters of the image data (i.e., the networks do not operate directly on pixel values). This configuration makes the proposed framework capable of handling images of different sizes. Qualitative and numerical evaluations on a related dataset, compared to the state of the art, indicate the superiority of the proposed framework in improving image quality and content.
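The two ideas in the abstract — a fixed-length histogram representation that makes the model size-independent, and selecting the best candidate by a quality score — can be illustrated with a minimal sketch. This is not the authors' implementation: the bin count and the entropy-based score (standing in for the learned IQA network) are assumptions for illustration only.

```python
import numpy as np

def channel_histograms(image, bins=64):
    """Per-channel normalized histograms, concatenated into one vector.
    This mirrors the histogram-vector input described in the abstract;
    the bin count of 64 is an assumption, not the paper's setting."""
    hists = []
    for c in range(image.shape[-1]):
        h, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        hists.append(h / h.sum())
    # Fixed-length vector regardless of the input image's spatial size.
    return np.concatenate(hists)

def entropy_score(image):
    """Hypothetical stand-in for the learned IQA network: Shannon
    entropy of the concatenated channel histograms."""
    p = channel_histograms(image)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def pick_best(candidates, quality_score):
    """Select the candidate with the highest quality score, mirroring
    the role the abstract assigns to the IQA network."""
    return max(candidates, key=quality_score)
```

Because the histogram vector has the same length for any image resolution, a model consuming it never needs resizing or cropping, which is the property the abstract highlights.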

Published
2018-12-24
How to Cite
Nazemi, A., Kamyab, S., Azimifar, Z., & Fieguth, P. (2018). Human Perception-based Image Enhancement Using a Deep Generative Model. Journal of Computational Vision and Imaging Systems, 4(1), 3. Retrieved from https://openjournals.uwaterloo.ca/index.php/vsl/article/view/337