
@harveyslash
Created April 17, 2018 02:59
Text generation models typically learn the distribution of a text and
use this learned information to generate new text. By tuning various
hyperparameters, it is possible to control the tradeoff between how far
the generated text deviates from the learned distribution and how
robust it remains.
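The abstract does not name a specific hyperparameter; a common example of such a knob is the sampling temperature. A minimal sketch, assuming the model produces unnormalized logits over a vocabulary:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index from an array of unnormalized logits.

    Low temperature stays close to the learned distribution (more
    robust text); high temperature deviates from it (more novel,
    less coherent text).
    """
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)
```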
I present a way to extend this ability of generative models to learn
multiple distributions, in order to generate a style of text
that is distinct from each of the source distributions.
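The abstract does not specify how the learned distributions are combined. One natural realization, sketched under the assumption that each style model exposes a next-token probability vector, is a convex mixture; the hypothetical weight alpha also anticipates the per-style contribution control discussed next:

```python
def mix_styles(probs_a, probs_b, alpha=0.5):
    """Blend the next-token distributions of two style models.

    probs_a and probs_b are numpy probability vectors over the same
    vocabulary. alpha in [0, 1] is a hypothetical mixing weight:
    1.0 reproduces style A, 0.0 reproduces style B, and intermediate
    values yield text belonging to neither source distribution alone.
    """
    mixed = alpha * probs_a + (1.0 - alpha) * probs_b
    return mixed / mixed.sum()
```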
I also discuss ways to regularize the presented model using Generative
Adversarial Networks (GANs), and to govern the contribution of each
particular style of text.
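The abstract mentions GAN-based regularization without detail; below is a minimal sketch of one standard formulation, not the author's method, assuming the generator's RNN yields a 128-dimensional feature vector per generated sequence (all names and sizes here are hypothetical):

```python
import torch
import torch.nn as nn

# Hypothetical discriminator: scores whether a sequence
# representation looks like the target blended style.
discriminator = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)

bce = nn.BCEWithLogitsLoss()

def regularized_loss(lm_loss, fake_features, adv_weight=0.1):
    """Language-model loss plus an adversarial term that rewards
    the generator for producing text the discriminator accepts."""
    real_labels = torch.ones(fake_features.size(0), 1)
    adv_loss = bce(discriminator(fake_features), real_labels)
    return lm_loss + adv_weight * adv_loss
```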
Models that generate novel texts are hard to evaluate. I
also present a way to evaluate the texts generated by the model.
Recurrent Neural Networks (RNNs) have been shown to perform well on
sequences with long-term dependencies.
I employ RNNs as the text generation model and adversarial networks as
the regularization mechanism.
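Putting the pieces together, a sketch of the autoregressive generation loop, assuming a hypothetical rnn_step(token, hidden) that wraps the trained RNN and returns next-token logits plus an updated hidden state; it reuses sample_with_temperature from the first sketch:

```python
def generate(rnn_step, start_token, hidden, length, temperature=1.0):
    """Autoregressive RNN generation: feed each sampled token back in.

    For the blended model, rnn_step could apply the style-mixing step
    above before returning logits.
    """
    tokens = [start_token]
    for _ in range(length):
        logits, hidden = rnn_step(tokens[-1], hidden)
        tokens.append(sample_with_temperature(logits, temperature))
    return tokens
```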