arXiv:2105.02201

PD-GAN: Probabilistic Diverse GAN for Image Inpainting

Published on May 5, 2021

Abstract

We propose PD-GAN, a probabilistic diverse GAN for image inpainting. Given an input image with arbitrary hole regions, PD-GAN produces multiple inpainting results with diverse and visually realistic content. PD-GAN is built upon a vanilla GAN that generates images from random noise. During image generation, we modulate the deep features of the input random noise from coarse to fine by injecting an initially restored image and the hole regions at multiple scales. We argue that during hole filling, pixels near the hole boundary should be more deterministic (i.e., trust the context and the initially restored image with higher probability, creating a natural inpainting boundary), while pixels near the center of the hole should enjoy more degrees of freedom (i.e., depend more on the random noise to enhance diversity). To this end, we propose spatially probabilistic diversity normalization (SPDNorm) inside the modulation to model the probability of generating a pixel conditioned on the context information. SPDNorm dynamically balances realism and diversity inside the hole region, making the generated content more diverse toward the hole center and more consistent with the neighboring image content toward the hole boundary. Meanwhile, we propose a perceptual diversity loss to further empower PD-GAN for diverse content generation. Experiments on benchmark datasets, including CelebA-HQ, Places2, and Paris Street View, indicate that PD-GAN is effective for diverse and visually realistic image restoration.
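
For intuition, here is a minimal sketch in PyTorch of what an SPDNorm-style modulated layer could look like. All names (SPDNormSketch, hard_probability_map) are hypothetical, and the hand-crafted probability map below (geometric decay over successive dilations of the known region) is one plausible realization of the "more deterministic near the boundary, freer near the center" idea from the abstract, not the paper's exact formulation.

```python
# Illustrative SPDNorm-style layer (assumptions labeled, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def hard_probability_map(mask, iterations=4, decay=0.5):
    """Context-trust probability: high near the hole boundary, low at the
    hole center. mask is (N, 1, H, W) with 1 = known pixel, 0 = hole."""
    prob = mask.clone()
    current = mask.clone()
    weight = 1.0
    for _ in range(iterations):
        weight *= decay
        # Max-pooling a binary mask dilates the known region into the hole.
        dilated = F.max_pool2d(current, kernel_size=3, stride=1, padding=1)
        prob = prob + weight * (dilated - current)  # ring just reached
        current = dilated
    return prob.clamp(0.0, 1.0)

class SPDNormSketch(nn.Module):
    """SPADE-style normalization whose conditional modulation is gated by a
    spatial probability map: the prior (initially restored image) dominates
    near the hole boundary, while noise-driven features pass through largely
    unmodulated toward the hole center."""

    def __init__(self, channels, prior_channels=3, hidden=64):
        super().__init__()
        self.norm = nn.BatchNorm2d(channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(prior_channels, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, feat, prior_image, mask):
        # Resize the prior image and mask to the feature resolution.
        prior = F.interpolate(prior_image, size=feat.shape[2:],
                              mode='bilinear', align_corners=False)
        mask = F.interpolate(mask, size=feat.shape[2:], mode='nearest')
        p = hard_probability_map(mask)  # (N, 1, H, W) in [0, 1]
        h = self.shared(prior)
        gamma, beta = self.gamma(h), self.beta(h)
        # p -> 1 near the boundary: modulation by the prior dominates.
        # p -> 0 at the hole center: features stay noise-driven (diverse).
        return self.norm(feat) * (1 + p * gamma) + p * beta

# Example: modulate a 64-channel feature map at 32x32 resolution.
feat = torch.randn(1, 64, 32, 32)
prior = torch.randn(1, 3, 256, 256)   # stand-in for the restored image
mask = torch.ones(1, 1, 256, 256)
mask[:, :, 96:160, 96:160] = 0        # square hole
out = SPDNormSketch(channels=64)(feat, prior, mask)
```

Gating the modulation by p is what trades realism against diversity spatially: where p is near 1 the generated content is pinned to the context, and where p is near 0 the random noise is free to vary across samples.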
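A similarly hedged sketch of a perceptual diversity loss follows. The abstract only states that such a loss encourages diverse content, so the VGG-19 feature layer, the hole masking, and the reciprocal form below are assumptions chosen to make the idea concrete: minimizing the loss pushes two outputs generated from different noise codes apart in feature space inside the hole.

```python
# Hypothetical perceptual diversity loss sketch (PyTorch + torchvision).
# Assumptions: VGG-19 features as the perceptual space, L1 feature distance
# restricted to the hole, and a reciprocal so that minimizing the loss
# maximizes diversity. ImageNet input normalization is omitted for brevity.
import torch
import torch.nn.functional as F
import torchvision.models as models

class PerceptualDiversityLossSketch(torch.nn.Module):
    def __init__(self, layer_index=16, eps=1e-5):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
        self.features = vgg[:layer_index].eval()  # frozen feature extractor
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.eps = eps

    def forward(self, out1, out2, hole_mask):
        # out1, out2: two inpainted results from different noise codes,
        # each (N, 3, H, W). hole_mask: (N, 1, H, W), 1 inside the hole.
        f1, f2 = self.features(out1), self.features(out2)
        m = F.interpolate(hole_mask, size=f1.shape[2:], mode='nearest')
        dist = (m * (f1 - f2).abs()).mean()  # masked feature distance
        return 1.0 / (dist + self.eps)
```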
