This gist assumes that:
- you are already using OpenAI services via the OpenAI API (i.e. your code does `from openai import OpenAI`)
- you want to migrate to an Azure-based workflow
- somebody else has already set up the Azure endpoint for you
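Under these assumptions, the core change is swapping the `OpenAI` client for `AzureOpenAI` (available in `openai>=1.0`). A minimal sketch — the endpoint URL, API key, and API version below are placeholders that depend on how your Azure resource was provisioned:

```python
def make_azure_client():
    """Build an Azure-backed client instead of a plain OpenAI one.

    The endpoint, key, and api_version are placeholder values;
    ask whoever set up the Azure resource for the real ones.
    """
    from openai import AzureOpenAI  # drop-in sibling of OpenAI, openai>=1.0

    return AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com/",
        api_key="<your-azure-api-key>",
        api_version="2024-02-01",
    )
```

One gotcha: with Azure, the `model` argument you pass to `client.chat.completions.create(...)` refers to your *deployment* name, not the underlying model name.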
This gist shares a little workflow and script for a task that most people using university HPCs for NLP research will need to do: downloading and storing HuggingFace models for use on compute nodes.
What this workflow is for:
- You cannot call `AutoModel.from_pretrained('model/name')` at run time, because compute nodes are not connected to the internet.
- Calling `AutoModel.from_pretrained()` on the head node is impractical, because the model is too large to be loaded there.
- You cannot keep models in the default `~/.cache/` directory, because you only get 10GB of storage on `/home`.
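The workflow above can be sketched as a small script: download once on the head node (which has internet access) into bulk storage, then load from disk on the compute nodes. This is an illustrative sketch, assuming `huggingface_hub` and `transformers` are installed and that `$SCRATCH` points at your cluster's large filesystem — both the variable name and the directory layout are assumptions; adjust them for your site:

```python
import os
from pathlib import Path

# Assumed location on the cluster's bulk storage; most HPC sites expose
# something like $SCRATCH or a project directory instead of /home.
MODEL_DIR = Path(os.environ.get("SCRATCH", ".")) / "hf_models"


def download_model(repo_id: str) -> Path:
    """Run on the HEAD node: fetch all model files into MODEL_DIR."""
    from huggingface_hub import snapshot_download  # imported lazily

    target = MODEL_DIR / repo_id.replace("/", "--")
    snapshot_download(repo_id=repo_id, local_dir=target)
    return target


def load_model(repo_id: str):
    """Run on a COMPUTE node: load from disk, never touch the network."""
    from transformers import AutoModel, AutoTokenizer

    path = MODEL_DIR / repo_id.replace("/", "--")
    model = AutoModel.from_pretrained(path, local_files_only=True)
    tokenizer = AutoTokenizer.from_pretrained(path, local_files_only=True)
    return model, tokenizer
```

Passing `local_files_only=True` makes the failure mode explicit: if a file is missing from disk, loading errors out immediately instead of hanging while trying to reach the Hub from an offline node.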