
Image Inpainting with Gradient Attention

Publication date: 2018

Schedae Informaticae, 2018, Volume 27, pp. 81 - 91

https://doi.org/10.4467/20838476SI.18.007.10412

Authors

Michał Sadowski
Faculty of Mathematics and Computer Science, Jagiellonian University, Krakow, Poland
Aleksandra Grzegorczyk
Samsung R&D Institute Poland

Abstract

We present a novel modification of the context encoder loss function that yields more accurate and plausible inpainting. To suppress the common inconsistency in shapes and edges between the inpainted region and its context, we introduce a gradient attention component into the loss function: the mean absolute error is computed not only for the input and output images, but also for their derivatives. The model therefore concentrates on areas with larger gradients, which are crucial for accurate reconstruction. The positive effect on inpainting results is observed for both fully-connected and fully-convolutional models, tested on the MNIST and CelebA datasets.
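The idea in the abstract can be sketched as follows: a pixel-wise mean absolute error is combined with a mean absolute error on finite-difference image derivatives, so regions with strong gradients (edges, shape boundaries) weigh more in training. This is an illustrative sketch only; the function name `gradient_attention_loss`, the finite-difference derivative scheme, and the weighting factor `lam` are assumptions, not the paper's exact formulation.

```python
import numpy as np

def image_gradients(img):
    """Simple finite-difference derivatives along rows and columns."""
    gy = np.diff(img, axis=0)  # vertical derivative, shape (H-1, W)
    gx = np.diff(img, axis=1)  # horizontal derivative, shape (H, W-1)
    return gy, gx

def gradient_attention_loss(output, target, lam=1.0):
    """MAE on pixel values plus MAE on image derivatives.

    `lam` weights the gradient term; its name and default value are
    illustrative choices, not taken from the paper.
    """
    pixel_l1 = np.mean(np.abs(output - target))
    oy, ox = image_gradients(output)
    ty, tx = image_gradients(target)
    grad_l1 = np.mean(np.abs(oy - ty)) + np.mean(np.abs(ox - tx))
    return pixel_l1 + lam * grad_l1
```

In a training setup, this scalar would replace the plain reconstruction term of the context encoder loss, leaving any adversarial component unchanged.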


Information

Article type: Original article

Article status: Open

Licence: CC BY-NC-ND

Percentage share of authors:

Michał Sadowski (Author) - 50%
Aleksandra Grzegorczyk (Author) - 50%

Publication languages:

English

