@pythonlessons
Created September 4, 2023 15:04
transformers_training
# Imports assumed for this snippet (adjust the paths to your project):
from keras.callbacks import EarlyStopping, ModelCheckpoint, TensorBoard, ReduceLROnPlateau
# WarmupCosineDecay, Model2onnx and EncDecSplitCallback are custom callbacks
# (e.g. from the mltu package); import them from wherever they are defined.

# Define callbacks
warmupCosineDecay = WarmupCosineDecay(
    lr_after_warmup=configs.lr_after_warmup,
    final_lr=configs.final_lr,
    warmup_epochs=configs.warmup_epochs,
    decay_epochs=configs.decay_epochs,
    initial_lr=configs.init_lr,
)
# Stop training when val_masked_accuracy has not improved for 5 epochs
earlystopper = EarlyStopping(monitor="val_masked_accuracy", patience=5, verbose=1, mode="max")
# Keep only the best model (highest val_masked_accuracy) on disk
checkpoint = ModelCheckpoint(f"{configs.model_path}/model.h5", monitor="val_masked_accuracy", verbose=1, save_best_only=True, mode="max", save_weights_only=False)
# Log training metrics for TensorBoard
tb_callback = TensorBoard(f"{configs.model_path}/logs")
# Shrink the learning rate by 10% when progress plateaus for 2 epochs
reduceLROnPlat = ReduceLROnPlateau(monitor="val_masked_accuracy", factor=0.9, min_delta=1e-10, patience=2, verbose=1, mode="max")
# Export the trained model to ONNX, embedding tokenizer/detokenizer metadata
model2onnx = Model2onnx(f"{configs.model_path}/model.h5", metadata={"tokenizer": tokenizer.dict(), "detokenizer": detokenizer.dict()}, save_on_epoch_end=False)
# Split the saved model into separate encoder and decoder ONNX models
encDecSplitCallback = EncDecSplitCallback(configs.model_path, encoder_metadata={"tokenizer": tokenizer.dict()}, decoder_metadata={"detokenizer": detokenizer.dict()})
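The WarmupCosineDecay callback ramps the learning rate from initial_lr up to lr_after_warmup over the warmup epochs, then decays it toward final_lr. A minimal, framework-free sketch of such a schedule (the exact curve the callback implements may differ; the default values here are purely illustrative):

```python
import math

def warmup_cosine_decay(epoch, initial_lr=1e-8, lr_after_warmup=1e-3,
                        final_lr=1e-6, warmup_epochs=2, decay_epochs=18):
    """Illustrative per-epoch learning rate: linear warmup, then cosine decay."""
    if epoch < warmup_epochs:
        # Linear ramp from initial_lr up to lr_after_warmup
        return initial_lr + (lr_after_warmup - initial_lr) * epoch / warmup_epochs
    # Cosine decay from lr_after_warmup down to final_lr, clamped at final_lr
    progress = min((epoch - warmup_epochs) / decay_epochs, 1.0)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return final_lr + (lr_after_warmup - final_lr) * cosine
```

A function like this could be wired into training via keras.callbacks.LearningRateScheduler; the callbacks above would then all be collected into a single list and passed to model.fit via its callbacks argument.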