@Quantisan
Created April 11, 2023 22:03
Motiva show and tell: problem solving with language models
# pip install torch transformers
from transformers import pipeline
generate = pipeline(model="declare-lab/flan-alpaca-xl")  # first call downloads the model weights (~11 GB)
prompt = "Write an email about an alpaca that likes flan"
generate(prompt, max_length=128, do_sample=True)
# do_sample=True samples tokens stochastically, so each call returns a different completion.
# Three sample runs:
'''
[{'generated_text': "Dear [Possible Adjective], I hope this email finds you well and happy! I'm writing to let you know that I have [Alpaca Name] who loves to eat flan! He is very playful and I think you'll agree; he'll happily run over to your kitchen window and happily eat his food. So, if you would like to join us in our adventures together, he'll love to share some of his treats, too! Sincerely, [Your Name]"}]
[{'generated_text': 'Dear Alpaca, I am writing to tell you that our alpaca is a big fan of flan. She always seems to be in the middle of the kitchen when her humans are around, so I am making sure to provide her with the most delicious flan I can find. Please let me know if you are looking for flan that is absolutely delicious or would like to know about our favorite brand. Sincerely, [Your Name]'}]
[{'generated_text': "We at Happy Earth are excited to announce that our beloved alpaca, Tiptoe, is a big fan of flan. Tiptoe is a gentle giant and has a sweet tooth, so whenever he sees a flan-themed dessert on the menu, he starts sniffing it out and salivating. We can't wait to share our delicious alpaca-friendly recipes with you. Till then, Happy Earth wishes you a delicious day! Cheers,"}]
'''
# https://huggingface.co/declare-lab/flan-alpaca-xl
#
# declare-lab/flan-alpaca-xl # 11 GB
# declare-lab/flan-alpaca-base # 1GB
#
#
# Base model: FLAN-T5 (itself an instruction-fine-tuned T5)
# Further fine-tuned on the Alpaca instruction dataset
# Parameter counts: xl = 3B, base = 220M
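# As a rough rule of thumb, the download sizes above track parameter count times
# bytes per weight (fp32 = 4 bytes). A hypothetical helper to sanity-check this
# (estimate_model_gb is my own name, not part of transformers):

```python
def estimate_model_gb(n_params: float, bytes_per_param: int = 4) -> float:
    """Rough fp32 disk/memory footprint in GB for a model with n_params weights."""
    return n_params * bytes_per_param / 1e9

# xl = 3B params -> ~12 GB at fp32, close to the 11 GB download noted above
print(estimate_model_gb(3e9))    # 12.0
# base = 220M params -> ~0.88 GB, matching the ~1 GB figure
print(estimate_model_gb(220e6))  # 0.88
```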
prompt = "Answer the following in yes/no. Is this following sentence about cheese: where is my cheese"
generate(prompt, max_length=16)
# [{'generated_text': 'Yes'}]
prompt = "Answer the following in yes/no. Is this following sentence about cheese: how is your day?"
generate(prompt, max_length=16)
# [{'generated_text': 'No'}]
prompt = "Answer the following in yes/no. Is this following sentence about cheese: let's not be cheesy for a moment."
generate(prompt, max_length=16)
# [{'generated_text': 'No'}]
import time
start = time.time()
prompt = "Answer the following in yes/no. Is this following sentence about cheese: where is my cheese"
generate(prompt, max_length=16)
print("elapsed seconds:", time.time() - start)
# elapsed seconds: 5.069699048995972
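# The start/stop pattern above can be wrapped in a small helper. This is a sketch
# of my own (timed is not a transformers or stdlib function); time.perf_counter()
# is a monotonic clock, so it suits interval timing better than time.time():

```python
import time

def timed(fn, *args, **kwargs):
    """Call fn(*args, **kwargs) and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# usage sketch: result, secs = timed(generate, prompt, max_length=16)
```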
# https://huggingface.co/tasks
# zero-shot-classification
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
start = time.time()
classifier("where is my cheese", ["cheese"], multi_label=False)
# {'sequence': 'where is my cheese', 'labels': ['cheese'], 'scores': [0.992667555809021]}
print("elapsed seconds:", time.time() - start)
# elapsed seconds: 0.15996122360229492
classifier("I can't find my cheese! Please help!", ["cheese", "urgent", "request"], multi_label=True)
# {'sequence': "I can't find my cheese! Please help!", 'labels': ['urgent', 'cheese', 'request'], 'scores': [0.9989798069000244, 0.994159996509552, 0.9373180270195007]}
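# With multi_label=True each label is scored independently, so a simple threshold
# turns these scores into the same kind of yes/no decision the prompts above asked
# for. A minimal sketch (labels_above and the cutoff values are my own assumptions,
# not transformers defaults):

```python
def labels_above(result: dict, threshold: float = 0.5) -> list:
    """Keep labels whose independent multi-label score clears the threshold."""
    return [label for label, score in zip(result["labels"], result["scores"])
            if score >= threshold]

# scores taken from the classifier output above
result = {"sequence": "I can't find my cheese! Please help!",
          "labels": ["urgent", "cheese", "request"],
          "scores": [0.9990, 0.9942, 0.9373]}
print(labels_above(result, threshold=0.95))  # ['urgent', 'cheese']
```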