from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load the fine-tuned GPT-2 checkpoint (AutoModelForCausalLM replaces the
# deprecated AutoModelWithLMHead for causal language models like GPT-2)
tokenizer = AutoTokenizer.from_pretrained("./storage/gpt2-motivational_v6")
model = AutoModelForCausalLM.from_pretrained("./storage/gpt2-motivational_v6")
gpt2_finetune = pipeline("text-generation", model=model, tokenizer=tokenizer)

# gen_kwargs holds generation options such as max_length,
# beam-search settings, top_p, top_k, etc.
gen_text = gpt2_finetune(seed, **gen_kwargs)
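For illustration, here is one possible `gen_kwargs` dictionary. The keys are standard `transformers` `generate()` parameters; the specific values and the `seed` prompt are made up for this sketch, not taken from the original gist.

```python
# Hypothetical sampling configuration; keys map to transformers' generate() args
gen_kwargs = {
    "max_length": 60,            # stop after 60 tokens (prompt included)
    "num_return_sequences": 3,   # return three candidate completions
    "do_sample": True,           # sample instead of greedy decoding
    "top_k": 50,                 # keep only the 50 most likely next tokens
    "top_p": 0.95,               # nucleus sampling: smallest set covering 95% mass
}
seed = "Believe in yourself and"  # example prompt

# Each element of the pipeline's output is a dict like
# {"generated_text": "..."}, so the completions can be read with:
# texts = [out["generated_text"] for out in gpt2_finetune(seed, **gen_kwargs)]
```

Passing the options as keyword arguments (`**gen_kwargs`) forwards them to the pipeline's underlying `model.generate()` call.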