Varying k-Lipschitz Constraint for Generative Adversarial Networks

Author

  • Kanglin Liu
Abstract

Generative Adversarial Networks (GANs) are powerful generative models, but they suffer from training instability. The recently proposed Wasserstein GAN with gradient penalty (WGAN-GP) makes progress toward stable training. The gradient penalty serves to enforce a Lipschitz constraint on the discriminator. Further investigation shows, however, that the gradient penalty may restrict the capacity of the discriminator. As a replacement, we introduce a varying k-Lipschitz constraint. The proposed varying k-Lipschitz constraint yields better image quality and significantly faster training when tested on GAN architectures. In addition, we introduce an effective convergence measure that correlates well with image quality.
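The page does not reproduce the paper's schedule for varying k, but for orientation the sketch below shows how a gradient penalty toward a target Lipschitz constant k is commonly implemented in PyTorch; the names critic, real and fake, and the image-shaped interpolation coefficients are illustrative assumptions, not the author's code. With k = 1 this reduces to the standard WGAN-GP penalty; varying k during training is the modification the abstract refers to.

    import torch

    def gradient_penalty(critic, real, fake, k=1.0):
        """Penalize deviation of the critic's gradient norm from a target k.

        k = 1 gives the usual WGAN-GP penalty; a schedule that varies k
        over training would be supplied by the training loop (not shown).
        """
        batch_size = real.size(0)
        # Random points on the line segments between real and generated samples.
        eps = torch.rand(batch_size, 1, 1, 1, device=real.device)
        interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)

        scores = critic(interp)
        grads = torch.autograd.grad(
            outputs=scores,
            inputs=interp,
            grad_outputs=torch.ones_like(scores),
            create_graph=True,
        )[0]
        grad_norm = grads.view(batch_size, -1).norm(2, dim=1)
        return ((grad_norm - k) ** 2).mean()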


Related articles

Improvement of generative adversarial networks for automatic text-to-image generation

This research concerns the use of deep learning tools and image processing technology for the automatic generation of images from text. Previous research has used a single sentence to produce an image. In this research, a memory-based hierarchical model is presented that uses three different descriptions, given in the form of sentences, to produce and refine the image. The proposed ...


On the regularization of Wasserstein GANs

Since their invention, generative adversarial networks (GANs) have become a popular approach for learning to model a distribution of real (unlabeled) data. Convergence problems during training are overcome by Wasserstein GANs which minimize the distance between the model and the empirical distribution in terms of a different metric, but thereby introduce a Lipschitz constraint into the optimiza...
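For context, the "different metric" mentioned above is the Wasserstein-1 distance. In its Kantorovich-Rubinstein dual form, which WGANs optimize, the Lipschitz constraint appears as the constraint on the critic f (standard WGAN notation, with P_r the data distribution and P_g the model distribution):

    W_1(P_r, P_g) = \sup_{\|f\|_L \le 1} \mathbb{E}_{x \sim P_r}[f(x)] - \mathbb{E}_{x \sim P_g}[f(x)]

The supremum runs over all 1-Lipschitz functions, which is why WGAN variants must enforce a Lipschitz constraint on the critic, whether by weight clipping, gradient penalty, or other regularizers.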


Automatic Colorization of Grayscale Images Using Generative Adversarial Networks

Automatic colorization of grayscale images poses a unique challenge in Information Retrieval. The goal of this field is to colorize images which have lost some color channels (such as the RGB channels, or the AB channels in the LAB color space) while only the brightness channel remains available, which is usually the case for a vast array of old photos and portraits. Having the ability to coloriz...
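As a concrete illustration of the setup described above (not code from the cited work), the snippet below uses scikit-image to convert an RGB image to the LAB color space and keep only the L (brightness) channel, which is the input a colorization model would have to work from.

    import numpy as np
    from skimage import color

    def to_brightness_only(rgb_image: np.ndarray) -> np.ndarray:
        """Drop the AB chrominance channels, keeping only L (lightness).

        rgb_image: H x W x 3 float array with values in [0, 1].
        Returns an H x W array; a colorization model would have to
        predict the missing AB channels from this input alone.
        """
        lab = color.rgb2lab(rgb_image)  # L in [0, 100], A and B roughly in [-128, 127]
        return lab[..., 0]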


Improved Training of Wasserstein GANs

Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the cri...
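For comparison with the gradient penalty sketched earlier, the weight clipping this snippet refers to is typically implemented as below; this is a minimal sketch assuming a PyTorch critic, with c = 0.01 being the clip value used in the original WGAN paper.

    import torch

    def clip_critic_weights(critic: torch.nn.Module, c: float = 0.01) -> None:
        # Clamp every critic parameter to [-c, c] after each critic update.
        # This crudely bounds the critic's Lipschitz constant and is the
        # practice that the gradient penalty was proposed to replace.
        with torch.no_grad():
            for p in critic.parameters():
                p.clamp_(-c, c)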


Gradient descent GAN optimization is locally stable

An increasingly popular class of generative models: models that "understand" ...



Journal title:

Volume   Issue

Pages  -

Publication date: 2018