@alexcg1
Created May 24, 2023 11:02
StableLM Executor
from docarray import Document, DocumentArray
from jina import Executor, requests
from transformers import AutoModelForCausalLM, AutoTokenizer


class StableLM(Executor):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # Load the StableLM tokenizer and weights from the Hugging Face Hub
        self.tokenizer = AutoTokenizer.from_pretrained(
            'StabilityAI/stablelm-base-alpha-3b'
        )
        self.model = AutoModelForCausalLM.from_pretrained(
            'StabilityAI/stablelm-base-alpha-3b'
        )
        # Run the model in half precision on the GPU
        self.model.half().cuda()

    @requests
    def generate(self, docs: DocumentArray, **kwargs):
        # Generate a completion for each Document, one at a time
        for doc in docs:
            self._generate(doc)

    def _generate(self, doc: Document, **kwargs):
        # The prompt is carried in the Document's tags
        prompt = doc.tags['prompt']
        inputs = self.tokenizer(prompt, return_tensors='pt').to('cuda')
        tokens = self.model.generate(
            **inputs, max_new_tokens=64, temperature=0.7, do_sample=True
        )
        # Store the decoded completion back on the Document
        output = self.tokenizer.decode(tokens[0], skip_special_tokens=True)
        doc.text = output
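
For context, a minimal usage sketch (not part of the original gist), assuming Jina 3.x's Flow API: serve the Executor locally and send a prompt through a Document's tags.

    from docarray import Document
    from jina import Flow

    # Serve the Executor in a local Flow and query it once.
    # The prompt text here is an arbitrary example.
    f = Flow().add(uses=StableLM)
    with f:
        doc = Document(tags={'prompt': 'What is the meaning of life?'})
        response = f.post(on='/', inputs=doc)
        print(response[0].text)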
samsja commented May 24, 2023

Ideally, you should use Hugging Face's batched generation to process all of the docs at the same time instead of the for loop. It would make more efficient use of the GPU.
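
A minimal sketch of that suggestion, as a drop-in replacement for the generate/_generate pair above: tokenize all prompts as one padded batch and call model.generate once per request. The left-padding and pad-token handling are assumptions on my part; decoder-only models like StableLM generally need left padding for batched generation, and the tokenizer may not define a pad token out of the box.

    @requests
    def generate(self, docs: DocumentArray, **kwargs):
        prompts = [doc.tags['prompt'] for doc in docs]
        # Assumption: decoder-only models need left padding for batched
        # generation, and this tokenizer may lack a pad token, so fall
        # back to the EOS token.
        self.tokenizer.padding_side = 'left'
        if self.tokenizer.pad_token is None:
            self.tokenizer.pad_token = self.tokenizer.eos_token
        inputs = self.tokenizer(
            prompts, return_tensors='pt', padding=True
        ).to('cuda')
        # One forward pass over the whole batch instead of a Python loop
        tokens = self.model.generate(
            **inputs,
            max_new_tokens=64,
            temperature=0.7,
            do_sample=True,
            pad_token_id=self.tokenizer.pad_token_id,
        )
        outputs = self.tokenizer.batch_decode(tokens, skip_special_tokens=True)
        for doc, output in zip(docs, outputs):
            doc.text = output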
