myleott/test.md (Secret)
@myleott · Last active January 16, 2021 21:55

roberta_base:

CUDA_VISIBLE_DEVICES=0 python train.py --task dummy_masked_lm --arch roberta_base --criterion masked_lm --batch-size 8 --optimizer adam --lr 0.0001 --log-format json --log-interval 1 --max-update 5 --disable-validation --no-save

before:
2021-01-16 08:49:30 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.698", "ppl": "53156.6", "wps": "0", "ups": "0", "wpb": "4096", "bsz": "8", "num_updates": "1", "lr": "0.0001", "gnorm": "7.642", "train_wall": "1", "wall": "1"}
2021-01-16 08:49:30 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "13.554", "ppl": "12029.1", "wps": "14389", "ups": "3.51", "wpb": "4096", "bsz": "8", "num_updates": "2", "lr": "0.0001", "gnorm": "6.199", "train_wall": "0", "wall": "1"}
2021-01-16 08:49:31 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "12.179", "ppl": "4635.71", "wps": "16237", "ups": "3.96", "wpb": "4096", "bsz": "8", "num_updates": "3", "lr": "0.0001", "gnorm": "8.377", "train_wall": "0", "wall": "1"}
2021-01-16 08:49:31 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "10.251", "ppl": "1218.87", "wps": "16210.8", "ups": "3.96", "wpb": "4096", "bsz": "8", "num_updates": "4", "lr": "0.0001", "gnorm": "9.622", "train_wall": "0", "wall": "2"}
2021-01-16 08:49:31 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "8.778", "ppl": "438.86", "wps": "16277.8", "ups": "3.97", "wpb": "4096", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "9.697", "train_wall": "0", "wall": "2"}

after:
2021-01-16 08:49:54 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.698", "ppl": "53156.6", "wps": "0", "ups": "0", "wpb": "4096", "bsz": "8", "num_updates": "1", "lr": "0.0001", "gnorm": "7.642", "train_wall": "1", "wall": "1"}
2021-01-16 08:49:55 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "13.554", "ppl": "12029.1", "wps": "14263.5", "ups": "3.48", "wpb": "4096", "bsz": "8", "num_updates": "2", "lr": "0.0001", "gnorm": "6.199", "train_wall": "0", "wall": "1"}
2021-01-16 08:49:55 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "12.179", "ppl": "4635.71", "wps": "15499", "ups": "3.78", "wpb": "4096", "bsz": "8", "num_updates": "3", "lr": "0.0001", "gnorm": "8.377", "train_wall": "0", "wall": "1"}
2021-01-16 08:49:55 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "10.251", "ppl": "1218.87", "wps": "15469.2", "ups": "3.78", "wpb": "4096", "bsz": "8", "num_updates": "4", "lr": "0.0001", "gnorm": "9.622", "train_wall": "0", "wall": "2"}
2021-01-16 08:49:56 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "8.778", "ppl": "438.86", "wps": "15454.2", "ups": "3.77", "wpb": "4096", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "9.697", "train_wall": "0", "wall": "2"}
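Rather than eyeballing the logs, the before/after runs can be compared programmatically. A minimal sketch, assuming only the `--log-format json` layout shown above (logger prefix, then a JSON payload with string-valued metrics); the two sample lines are abbreviated copies of the logs:

```python
import json

def parse_metrics(log_lines, key="loss"):
    """Extract one metric from fairseq --log-format json train_inner lines."""
    values = []
    for line in log_lines:
        # The JSON payload starts at the first '{' after the logger prefix.
        payload = json.loads(line[line.index("{"):])
        values.append(float(payload[key]))
    return values

before = [
    '2021-01-16 08:49:30 | INFO | train_inner | {"loss": "15.698", "num_updates": "1"}',
    '2021-01-16 08:49:30 | INFO | train_inner | {"loss": "13.554", "num_updates": "2"}',
]
after = [
    '2021-01-16 08:49:54 | INFO | train_inner | {"loss": "15.698", "num_updates": "1"}',
    '2021-01-16 08:49:55 | INFO | train_inner | {"loss": "13.554", "num_updates": "2"}',
]
assert parse_metrics(before) == parse_metrics(after)  # losses match exactly
```

For `roberta_base` the losses, perplexities, and gradient norms are bit-for-bit identical across all five updates; only throughput (`wps`/`ups`) differs, as expected.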

roberta_base with --encoder-normalize-before (results are expected to differ here, since this option was previously ignored):

CUDA_VISIBLE_DEVICES=0 python train.py --task dummy_masked_lm --arch roberta_base --encoder-normalize-before --criterion masked_lm --batch-size 8 --optimizer adam --lr 0.0001 --log-format json --log-interval 1 --max-update 5 --disable-validation --no-save

before:
2021-01-16 11:00:19 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.698", "ppl": "53156.6", "wps": "0", "ups": "0", "wpb": "4096", "bsz": "8", "num_updates": "1", "lr": "0.0001", "gnorm": "7.642", "train_wall": "1", "wall": "1"}
2021-01-16 11:00:19 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "13.554", "ppl": "12029.1", "wps": "14014.7", "ups": "3.42", "wpb": "4096", "bsz": "8", "num_updates": "2", "lr": "0.0001", "gnorm": "6.199", "train_wall": "0", "wall": "1"}
2021-01-16 11:00:19 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "12.179", "ppl": "4635.71", "wps": "16203.7", "ups": "3.96", "wpb": "4096", "bsz": "8", "num_updates": "3", "lr": "0.0001", "gnorm": "8.377", "train_wall": "0", "wall": "1"}
2021-01-16 11:00:19 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "10.251", "ppl": "1218.87", "wps": "16145.2", "ups": "3.94", "wpb": "4096", "bsz": "8", "num_updates": "4", "lr": "0.0001", "gnorm": "9.622", "train_wall": "0", "wall": "2"}
2021-01-16 11:00:20 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "8.778", "ppl": "438.86", "wps": "16205", "ups": "3.96", "wpb": "4096", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "9.697", "train_wall": "0", "wall": "2"}

after:
2021-01-16 11:00:13 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.736", "ppl": "54564.2", "wps": "0", "ups": "0", "wpb": "4096", "bsz": "8", "num_updates": "1", "lr": "0.0001", "gnorm": "6.999", "train_wall": "1", "wall": "1"}
2021-01-16 11:00:13 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "13.381", "ppl": "10666.4", "wps": "13894.2", "ups": "3.39", "wpb": "4096", "bsz": "8", "num_updates": "2", "lr": "0.0001", "gnorm": "5.755", "train_wall": "0", "wall": "1"}
2021-01-16 11:00:13 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "11.728", "ppl": "3392.39", "wps": "15165.4", "ups": "3.7", "wpb": "4096", "bsz": "8", "num_updates": "3", "lr": "0.0001", "gnorm": "6.572", "train_wall": "0", "wall": "1"}
2021-01-16 11:00:14 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "9.958", "ppl": "994.83", "wps": "15130.8", "ups": "3.69", "wpb": "4096", "bsz": "8", "num_updates": "4", "lr": "0.0001", "gnorm": "7.464", "train_wall": "0", "wall": "2"}
2021-01-16 11:00:14 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "8.286", "ppl": "312.05", "wps": "15068.4", "ups": "3.68", "wpb": "4096", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "6.715", "train_wall": "0", "wall": "2"}
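Although the numbers change in this run, they remain internally consistent: fairseq logs perplexity as `2 ** loss` (base-2 perplexity), so each new loss should still match its new `ppl`. A quick check on the first "after" step, assuming that convention:

```python
# fairseq reports ppl as 2 ** loss; the logged loss is rounded to 3 decimals,
# so allow a small relative tolerance when checking against the logged ppl.
loss, ppl = 15.736, 54564.2
rel_err = abs(2 ** loss - ppl) / ppl
assert rel_err < 1e-3  # consistent up to rounding of the logged loss
```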

linformer_base:

CUDA_VISIBLE_DEVICES=1 python train.py --task dummy_masked_lm --arch linformer_roberta_base --user-dir examples/linformer/linformer_src/ --criterion masked_lm --batch-size 8 --optimizer adam --lr 0.0001 --log-format json --log-interval 1 --max-update 5 --disable-validation --no-save

before:
2021-01-16 09:21:21 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.875", "ppl": "60086.5", "wps": "0", "ups": "0", "wpb": "4096", "bsz": "8", "num_updates": "1", "lr": "0.0001", "gnorm": "7.323", "train_wall": "1", "wall": "1"}
2021-01-16 09:21:22 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "13.377", "ppl": "10641.8", "wps": "17606.9", "ups": "4.3", "wpb": "4096", "bsz": "8", "num_updates": "2", "lr": "0.0001", "gnorm": "6.61", "train_wall": "0", "wall": "1"}
2021-01-16 09:21:22 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "11.46", "ppl": "2816.24", "wps": "19365.8", "ups": "4.73", "wpb": "4096", "bsz": "8", "num_updates": "3", "lr": "0.0001", "gnorm": "8.146", "train_wall": "0", "wall": "1"}
2021-01-16 09:21:22 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "9.548", "ppl": "748.62", "wps": "19311.4", "ups": "4.71", "wpb": "4096", "bsz": "8", "num_updates": "4", "lr": "0.0001", "gnorm": "7.788", "train_wall": "0", "wall": "1"}
2021-01-16 09:21:22 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "8.021", "ppl": "259.73", "wps": "19375.6", "ups": "4.73", "wpb": "4096", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "6.534", "train_wall": "0", "wall": "2"}

after:
2021-01-16 09:40:53 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.875", "ppl": "60086.5", "wps": "0", "ups": "0", "wpb": "4096", "bsz": "8", "num_updates": "1", "lr": "0.0001", "gnorm": "7.323", "train_wall": "1", "wall": "1"}
2021-01-16 09:40:53 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "13.377", "ppl": "10641.8", "wps": "18136", "ups": "4.43", "wpb": "4096", "bsz": "8", "num_updates": "2", "lr": "0.0001", "gnorm": "6.61", "train_wall": "0", "wall": "1"}
2021-01-16 09:40:53 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "11.46", "ppl": "2816.24", "wps": "19303.6", "ups": "4.71", "wpb": "4096", "bsz": "8", "num_updates": "3", "lr": "0.0001", "gnorm": "8.146", "train_wall": "0", "wall": "1"}
2021-01-16 09:40:53 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "9.548", "ppl": "748.62", "wps": "19190.4", "ups": "4.69", "wpb": "4096", "bsz": "8", "num_updates": "4", "lr": "0.0001", "gnorm": "7.788", "train_wall": "0", "wall": "1"}
2021-01-16 09:40:54 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "8.021", "ppl": "259.73", "wps": "19244", "ups": "4.7", "wpb": "4096", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "6.534", "train_wall": "0", "wall": "2"}

linformer_base with --shared-kv-compressed=1:

CUDA_VISIBLE_DEVICES=0 python train.py --task dummy_masked_lm --arch linformer_roberta_base --user-dir examples/linformer/linformer_src/ --criterion masked_lm --batch-size 8 --optimizer adam --lr 0.0001 --log-format json --log-interval 1 --max-update 5 --disable-validation --no-save --shared-kv-compressed=1

before: 
2021-01-16 13:27:20 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.958", "ppl": "63643.3", "wps": "0", "ups": "0", "wpb": "4096", "bsz": "8", "num_updates": "1", "lr": "0.0001", "gnorm": "7.542", "train_wall": "1", "wall": "1"}
2021-01-16 13:27:20 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "13.234", "ppl": "9634.15", "wps": "16708.6", "ups": "4.08", "wpb": "4096", "bsz": "8", "num_updates": "2", "lr": "0.0001", "gnorm": "6.89", "train_wall": "0", "wall": "1"}
2021-01-16 13:27:20 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "11.269", "ppl": "2467.69", "wps": "19378.3", "ups": "4.73", "wpb": "4096", "bsz": "8", "num_updates": "3", "lr": "0.0001", "gnorm": "8.214", "train_wall": "0", "wall": "1"}
2021-01-16 13:27:20 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "9.302", "ppl": "631.35", "wps": "19460.9", "ups": "4.75", "wpb": "4096", "bsz": "8", "num_updates": "4", "lr": "0.0001", "gnorm": "7.748", "train_wall": "0", "wall": "2"}
2021-01-16 13:27:21 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "7.811", "ppl": "224.54", "wps": "19378.8", "ups": "4.73", "wpb": "4096", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "6.699", "train_wall": "0", "wall": "2"}

after:
2021-01-16 13:27:19 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.958", "ppl": "63643.3", "wps": "0", "ups": "0", "wpb": "4096", "bsz": "8", "num_updates": "1", "lr": "0.0001", "gnorm": "7.542", "train_wall": "1", "wall": "1"}
2021-01-16 13:27:19 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "13.234", "ppl": "9634.15", "wps": "17832.9", "ups": "4.35", "wpb": "4096", "bsz": "8", "num_updates": "2", "lr": "0.0001", "gnorm": "6.89", "train_wall": "0", "wall": "1"}
2021-01-16 13:27:19 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "11.269", "ppl": "2467.69", "wps": "19288.4", "ups": "4.71", "wpb": "4096", "bsz": "8", "num_updates": "3", "lr": "0.0001", "gnorm": "8.214", "train_wall": "0", "wall": "1"}
2021-01-16 13:27:19 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "9.302", "ppl": "631.35", "wps": "19263.4", "ups": "4.7", "wpb": "4096", "bsz": "8", "num_updates": "4", "lr": "0.0001", "gnorm": "7.748", "train_wall": "0", "wall": "2"}
2021-01-16 13:27:19 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "7.811", "ppl": "224.54", "wps": "19385.4", "ups": "4.73", "wpb": "4096", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "6.699", "train_wall": "0", "wall": "2"}

linformer_base with --shared-kv-compressed=1 --shared-layer-kv-compressed=1:

CUDA_VISIBLE_DEVICES=0 python train.py --task dummy_masked_lm --arch linformer_roberta_base --user-dir examples/linformer/linformer_src/ --criterion masked_lm --batch-size 8 --optimizer adam --lr 0.0001 --log-format json --log-interval 1 --max-update 5 --disable-validation --no-save --shared-kv-compressed=1 --shared-layer-kv-compressed=1

before:
2021-01-16 13:52:03 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.819", "ppl": "57789.5", "wps": "0", "ups": "0", "wpb": "4096", "bsz": "8", "num_updates": "1", "lr": "0.0001", "gnorm": "7.409", "train_wall": "1", "wall": "1"}
2021-01-16 13:52:03 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "13.274", "ppl": "9906.08", "wps": "17493.7", "ups": "4.27", "wpb": "4096", "bsz": "8", "num_updates": "2", "lr": "0.0001", "gnorm": "7", "train_wall": "0", "wall": "1"}
2021-01-16 13:52:03 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "11.164", "ppl": "2294.95", "wps": "19354.1", "ups": "4.73", "wpb": "4096", "bsz": "8", "num_updates": "3", "lr": "0.0001", "gnorm": "7.733", "train_wall": "0", "wall": "1"}
2021-01-16 13:52:03 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "9.287", "ppl": "624.53", "wps": "19380.9", "ups": "4.73", "wpb": "4096", "bsz": "8", "num_updates": "4", "lr": "0.0001", "gnorm": "7.729", "train_wall": "0", "wall": "1"}
2021-01-16 13:52:03 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "7.742", "ppl": "214.07", "wps": "19494.6", "ups": "4.76", "wpb": "4096", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "6.061", "train_wall": "0", "wall": "2"}

after:
2021-01-16 13:52:11 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.819", "ppl": "57789.5", "wps": "0", "ups": "0", "wpb": "4096", "bsz": "8", "num_updates": "1", "lr": "0.0001", "gnorm": "7.409", "train_wall": "1", "wall": "1"}
2021-01-16 13:52:11 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "13.274", "ppl": "9906.08", "wps": "18454.2", "ups": "4.51", "wpb": "4096", "bsz": "8", "num_updates": "2", "lr": "0.0001", "gnorm": "7", "train_wall": "0", "wall": "1"}
2021-01-16 13:52:11 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "11.164", "ppl": "2294.95", "wps": "19341.2", "ups": "4.72", "wpb": "4096", "bsz": "8", "num_updates": "3", "lr": "0.0001", "gnorm": "7.733", "train_wall": "0", "wall": "1"}
2021-01-16 13:52:11 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "9.287", "ppl": "624.53", "wps": "19389.1", "ups": "4.73", "wpb": "4096", "bsz": "8", "num_updates": "4", "lr": "0.0001", "gnorm": "7.729", "train_wall": "0", "wall": "1"}
2021-01-16 13:52:12 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "7.742", "ppl": "214.07", "wps": "19344.5", "ups": "4.72", "wpb": "4096", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "6.061", "train_wall": "0", "wall": "2"}

model_parallel_roberta (note: the default behavior has changed; the old behavior is preserved under --arch=model_parallel_roberta_v1):

before:
CUDA_VISIBLE_DEVICES=0,1 python train.py --task dummy_masked_lm --arch model_parallel_roberta_base --model-parallel-size 2 --criterion masked_lm --batch-size 8 --optimizer adam --lr 0.0001 --log-format json --log-interval 1 --max-update 5 --disable-validation --no-save
2021-01-16 11:04:16 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.616", "ppl": "50233.8", "wps": "0", "ups": "0", "wpb": "4096", "bsz": "8", "num_updates": "1", "lr": "0.0001", "gnorm": "4.429", "train_wall": "1", "wall": "3"}
2021-01-16 11:04:16 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.382", "ppl": "42701", "wps": "17780.4", "ups": "4.34", "wpb": "4096", "bsz": "8", "num_updates": "2", "lr": "0.0001", "gnorm": "3.413", "train_wall": "0", "wall": "4"}
2021-01-16 11:04:16 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.134", "ppl": "35957.9", "wps": "22432.9", "ups": "5.48", "wpb": "4096", "bsz": "8", "num_updates": "3", "lr": "0.0001", "gnorm": "3.24", "train_wall": "0", "wall": "4"}
2021-01-16 11:04:16 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "14.956", "ppl": "31790.2", "wps": "22062.9", "ups": "5.39", "wpb": "4096", "bsz": "8", "num_updates": "4", "lr": "0.0001", "gnorm": "3.229", "train_wall": "0", "wall": "4"}
2021-01-16 11:04:17 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "14.792", "ppl": "28360.9", "wps": "22416.1", "ups": "5.47", "wpb": "4096", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "3.232", "train_wall": "0", "wall": "4"}

after (using "_v1" architecture, which skips the final layer norm):
CUDA_VISIBLE_DEVICES=0,1 python train.py --task dummy_masked_lm --arch model_parallel_roberta_v1 --model-parallel-size 2 --user-dir examples/linformer/linformer_src/ --criterion masked_lm --batch-size 8 --optimizer adam --lr 0.0001 --log-format json --log-interval 1 --max-update 5 --disable-validation --no-save
2021-01-16 11:40:46 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.616", "ppl": "50233.8", "wps": "0", "ups": "0", "wpb": "4096", "bsz": "8", "num_updates": "1", "lr": "0.0001", "gnorm": "4.429", "train_wall": "1", "wall": "3"}
2021-01-16 11:40:46 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.382", "ppl": "42701", "wps": "15929.1", "ups": "3.89", "wpb": "4096", "bsz": "8", "num_updates": "2", "lr": "0.0001", "gnorm": "3.413", "train_wall": "0", "wall": "4"}
2021-01-16 11:40:46 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.134", "ppl": "35957.9", "wps": "18722.5", "ups": "4.57", "wpb": "4096", "bsz": "8", "num_updates": "3", "lr": "0.0001", "gnorm": "3.24", "train_wall": "0", "wall": "4"}
2021-01-16 11:40:46 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "14.956", "ppl": "31790.2", "wps": "18787.9", "ups": "4.59", "wpb": "4096", "bsz": "8", "num_updates": "4", "lr": "0.0001", "gnorm": "3.229", "train_wall": "0", "wall": "4"}
2021-01-16 11:40:47 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "14.792", "ppl": "28360.9", "wps": "18337.5", "ups": "4.48", "wpb": "4096", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "3.232", "train_wall": "0", "wall": "4"}

after (using new default architecture, which keeps the final layer norm):
CUDA_VISIBLE_DEVICES=0,1 python train.py --task dummy_masked_lm --arch model_parallel_roberta --model-parallel-size 2 --user-dir examples/linformer/linformer_src/ --criterion masked_lm --batch-size 8 --optimizer adam --lr 0.0001 --log-format json --log-interval 1 --max-update 5 --disable-validation --no-save
2021-01-16 11:41:26 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.616", "ppl": "50232.8", "wps": "0", "ups": "0", "wpb": "4096", "bsz": "8", "num_updates": "1", "lr": "0.0001", "gnorm": "4.423", "train_wall": "1", "wall": "4"}
2021-01-16 11:41:26 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.37", "ppl": "42343.2", "wps": "15413.7", "ups": "3.76", "wpb": "4096", "bsz": "8", "num_updates": "2", "lr": "0.0001", "gnorm": "3.387", "train_wall": "0", "wall": "4"}
2021-01-16 11:41:26 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "15.122", "ppl": "35665", "wps": "18649.7", "ups": "4.55", "wpb": "4096", "bsz": "8", "num_updates": "3", "lr": "0.0001", "gnorm": "3.24", "train_wall": "0", "wall": "4"}
2021-01-16 11:41:26 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "14.942", "ppl": "31478.5", "wps": "18503.9", "ups": "4.52", "wpb": "4096", "bsz": "8", "num_updates": "4", "lr": "0.0001", "gnorm": "3.232", "train_wall": "0", "wall": "4"}
2021-01-16 11:41:27 | INFO | train_inner | {"epoch": 1, "update": 0.0, "loss": "14.772", "ppl": "27984.3", "wps": "19196.1", "ups": "4.69", "wpb": "4096", "bsz": "8", "num_updates": "5", "lr": "0.0001", "gnorm": "3.233", "train_wall": "0", "wall": "4"}
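The gap between the old and new default behavior can be quantified by diffing the logged losses of the two "after" runs (values copied from the logs above):

```python
# Losses at updates 1..5: model_parallel_roberta_v1 (skips the final layer
# norm) vs. the new default model_parallel_roberta (keeps it).
v1_loss      = [15.616, 15.382, 15.134, 14.956, 14.792]
default_loss = [15.616, 15.370, 15.122, 14.942, 14.772]
deltas = [abs(a - b) for a, b in zip(v1_loss, default_loss)]
# Identical at the first update up to log precision (the ppl values, 50233.8
# vs 50232.8, show a tiny difference even there), then a small but growing gap
# once the extra layer norm starts influencing the updates.
assert deltas[0] == 0.0
assert 0 < max(deltas) < 0.05
```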