@charlieoneill11
Created May 1, 2022 06:29
from transformers import Trainer, TrainingArguments

batch_size = 64
# `offensive_encoded` (the tokenised tweet_eval-offensive dataset) and
# `model_ckpt` (the pretrained checkpoint name) are assumed to be defined earlier.
logging_steps = len(offensive_encoded["train"]) // batch_size
model_name = f"{model_ckpt}-finetuned-tweet_eval-offensive"

training_args = TrainingArguments(output_dir=model_name,
                                  num_train_epochs=2,
                                  learning_rate=2e-5,
                                  per_device_train_batch_size=batch_size,
                                  per_device_eval_batch_size=batch_size,
                                  weight_decay=0.01,
                                  evaluation_strategy="epoch",   # evaluate at the end of each epoch
                                  disable_tqdm=False,
                                  logging_steps=logging_steps,   # roughly one log per epoch
                                  push_to_hub=True,              # push checkpoints to the Hugging Face Hub
                                  log_level="error")
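# The snippet imports Trainer but stops at the arguments. A minimal sketch of the
# next step, assuming `model`, `tokenizer`, and a `compute_metrics` function are
# defined elsewhere in the surrounding notebook (they are not part of this gist):

trainer = Trainer(model=model,                      # e.g. a sequence-classification model loaded from model_ckpt
                  args=training_args,
                  compute_metrics=compute_metrics,  # e.g. accuracy/F1 over the eval predictions
                  train_dataset=offensive_encoded["train"],
                  eval_dataset=offensive_encoded["validation"],
                  tokenizer=tokenizer)
trainer.train()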