# Created August 14, 2022 17:10
# Benchmark Hugging Face image-classification pipeline throughput across batch sizes.
from transformers import ViTFeatureExtractor, ViTForImageClassification
from transformers import pipeline
from tqdm import tqdm  # progress bar used in the benchmark loop below
import torch

device = "cuda:0" if torch.cuda.is_available() else "cpu"

feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224')
model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224')
model = model.to(device)  # move the model to the chosen device

# pipeline() takes a device index: 0 = first GPU, -1 = CPU
pipe = pipeline("image-classification", model=model, feature_extractor=feature_extractor,
                device=0 if torch.cuda.is_available() else -1)
# `dataset` is assumed to be defined elsewhere: any sized iterable of inputs
# the pipeline accepts (PIL images, file paths, or URLs).
for batch_size in [1, 8, 32, 64, 128, 256, 512, 1024]:
    print("-" * 30)
    print(f"Streaming batch_size={batch_size}")
    for out in tqdm(pipe(dataset, batch_size=batch_size), total=len(dataset)):
        pass  # consume the generator so inference actually runs
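The loop above iterates over a `dataset` that the gist never defines. As a minimal stand-in (the file names below are hypothetical, not from the gist), a plain Python list is enough to drive the benchmark, since the image-classification pipeline accepts image file paths or URLs as well as PIL images, and `tqdm(total=len(dataset))` only requires a sized iterable:

```python
# Hypothetical stand-in for the undefined `dataset`: a list of image file paths.
# Any sized iterable of pipeline-compatible inputs would work the same way.
dataset = [f"images/sample_{i:03d}.jpg" for i in range(8)]
print(len(dataset))
```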