Less pain, more gain: A simple method for VAE training with less of that KL-vanishing agony

There is growing interest in exploring the use of variational auto-encoders (VAEs), a class of deep latent-variable models, for text generation. Compared to the standard RNN-based language model, which generates sentences one word at a time without the explicit guidance of a global sentence representation, a VAE is designed to learn a probabilistic representation of global sentence features.
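The "KL-vanishing" problem in the title refers to the tendency of the KL term in the VAE objective to collapse to zero when training a text VAE, so that the decoder ignores the latent code. One common way to frame the training loss is as a reconstruction term plus a weighted KL term, where the weight is annealed over training. Below is a minimal sketch of that setup, assuming PyTorch; the `kl_weight` schedule, its parameters, and the function names are illustrative placeholders, not the implementation described in the post.

```python
import torch
import torch.nn.functional as F

def kl_weight(step, cycle_len=10000, ramp_frac=0.5):
    """Illustrative cyclical schedule: the KL weight ramps from 0 to 1 over
    the first half of each cycle, then stays at 1 for the rest of the cycle.
    (A purely monotonic ramp is the simpler, more traditional alternative.)"""
    pos = (step % cycle_len) / cycle_len
    return min(1.0, pos / ramp_frac)

def vae_loss(logits, targets, mu, logvar, step):
    # Reconstruction term: per-token cross-entropy of the decoder's predictions.
    rec = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1))
    # KL(q(z|x) || N(0, I)) for a diagonal-Gaussian posterior with mean `mu`
    # and log-variance `logvar`.
    kl = -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))
    # Weight the KL term by the current schedule value.
    beta = kl_weight(step)
    return rec + beta * kl
```

In this sketch, when `beta` is small the model is encouraged to actually use the latent variable before the full KL penalty kicks in; the specific schedule shown here is only one of several ways to do that.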