
@MichelNivard
Created February 17, 2023 07:14
import torch
from transformers import GPTJForCausalLM, AutoTokenizer

# Load GPT-J 6B in half precision to reduce memory use, plus its tokenizer.
model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
prompt = (
"Monday Diary Entry: Had a busy weekend and started the day tired and empty; the good weather helps lift the mood. Worked from home and spent (too much) time learning about language models. Had 2 or 3 productive calls; tired and probably still a bit sick today, which put me in a somewhat somber mood. Had a long bath, which maybe helped?\n"
"Tuesday Diary Entry: Today I did not get a lot of work done as A was sick and didn't go to daycare; R and I took shifts away from remote work to provide care, tiring and at times boring. Had a very engaging and productive call about a project I love, so while still tired the spirits were lifted quite a bit.\n"
"Wednesday Diary Entry: Long day with productive interactions. Didn't sleep a lot as the kids woke up a lot; clearly both have some sort of winter cold or flu? The day got stressful as a work emergency pushed back R's arrival at home, so I had to rush out to grab the kids. Kept pushing back dinner, so the kids got hungry; lots to do still in a short evening, which made me stressed to the point of being uncomfortable.\n"
"Thursday Diary Entry: Slept well but short, felt rested and went into the office. Felt relaxed catching up with several people and talked through PhD student projects. Went to get S and hung out with her, which was fun. Felt like I let myself down a bit when I caught myself absentmindedly scrolling social media while I should be hanging out with S.\n"
)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
input_ids.shape[1]  # number of prompt tokens (len(input_ids) would return the batch size, 1)
gen_tokens = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.9,
    max_new_tokens=250,  # cap on newly generated tokens; max_length would count the (longer) prompt too
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
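The decoded `gen_text` includes the original prompt followed by the model's continuation. A minimal sketch of isolating just the new diary text (the helper name is my own, not part of the gist, and it assumes the tokenize/decode round trip reproduces the prompt verbatim, which can fail for unusual whitespace or special characters):

```python
def extract_continuation(decoded: str, prompt: str) -> str:
    # Slice off the prompt; fall back to the full string if the
    # decoded output does not reproduce the prompt exactly.
    return decoded[len(prompt):] if decoded.startswith(prompt) else decoded

# Hypothetical example (not real model output):
continuation = extract_continuation(
    "Thursday Diary Entry: Slept well. Friday Diary Entry: Felt rested.",
    "Thursday Diary Entry: Slept well.",
)
print(continuation)  # " Friday Diary Entry: Felt rested."
```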