[2022-11-07 21:10:04,319] torch._dynamo.testing: [WARNING] High loss value alert - 10.97. Can result in unstable gradients.
cuda train AllenaiLongformerBase [2022-11-07 21:10:04,744] torch._dynamo.testing: [WARNING] High loss value alert - 10.97. Can result in unstable gradients.
Traceback (most recent call last):
  File "benchmarks/dynamo/huggingface.py", line 529, in <module>
    main(HuggingfaceRunner())
  File "/scratch/whc/work/pytorch/benchmarks/dynamo/common.py", line 1579, in main
    return maybe_fresh_cache(run, args.cold_start_latency and args.only)(
  File "/scratch/whc/work/pytorch/benchmarks/dynamo/common.py", line 775, in inner
    return fn(*args, **kwargs)
  File "/scratch/whc/work/pytorch/benchmarks/dynamo/common.py", line 1905, in run
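The "High loss value alert - 10.97" warnings above are consistent with an untrained or early-training language model rather than a numerical failure: a model predicting near-uniformly over V classes has cross-entropy ln(V). Assuming AllenaiLongformerBase uses the RoBERTa vocabulary of 50,265 tokens (an assumption about this benchmark, not stated in the log), that baseline is right where the alert fires:

```python
import math

# Cross-entropy of a uniform prediction over V classes is ln(V).
# Assumption: the model's vocabulary is RoBERTa's (V = 50265); the
# reported loss of 10.97 sits near this "random model" baseline.
vocab_size = 50265
uniform_loss = math.log(vocab_size)
print(f"{uniform_loss:.3f}")  # 10.825
```

So a loss near 11 at this point in training is expected, though the warning is still worth heeding for gradient stability.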
cuda train BERT_pytorch [2022-10-17 17:29:01,707] torchdynamo.symbolic_convert: [WARNING] Graph break: call_function in skip_files /scratch/whc2/work/torchdynamo/torchdynamo/utils.py from user code at File "benchmarks/torchbench.py", line 331, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
[2022-10-17 17:29:01,814] torchdynamo.symbolic_convert: [WARNING] Graph break: call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {} from user code at File "benchmarks/torchbench.py", line 332, in <graph break in forward_and_backward_pass>
    mod.zero_grad(True)
[2022-10-17 17:29:01,832] torchdynamo.symbolic_convert: [WARNING] Graph break: Dynamic slicing not supported from user code at File "/scratch/whc2/work/benchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/bert.py", line 43, in forward
    x = self.embedding(x, segment_info)
  File "/scratch/whc2/work/benchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/bert.py", line
cuda train BERT_pytorch [2022-10-13 19:40:23,368] torchdynamo.symbolic_convert: [WARNING] Graph break: call_function in skip_files /scratch/whc2/work/torchdynamo/torchdynamo/utils.py from user code at File "benchmarks/torchbench.py", line 325, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
[2022-10-13 19:40:23,482] torchdynamo.symbolic_convert: [WARNING] Graph break: call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {} from user code at File "benchmarks/torchbench.py", line 326, in <graph break in forward_and_backward_pass>
    mod.zero_grad(True)
[2022-10-13 19:40:23,502] torchdynamo.symbolic_convert: [WARNING] Graph break: Dynamic slicing not supported from user code at File "/scratch/whc2/work/benchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/bert.py", line 43, in forward
    x = self.embedding(x, segment_info)
  File "/scratch/whc2/work/benchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/bert.py", line
cuda train BERT_pytorch torchdynamo.symbolic_convert: [WARNING] [2022-10-11 22:37:44,101] Graph break: call_function in skip_files /scratch/whc/work/torchdynamo/torchdynamo/utils.py from user code at File "benchmarks/torchbench.py", line 357, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
torchdynamo.symbolic_convert: [WARNING] [2022-10-11 22:37:44,202] Graph break: call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {} from user code at File "benchmarks/torchbench.py", line 358, in <graph break in forward_and_backward_pass>
    mod.zero_grad(True)
torchdynamo.symbolic_convert: [WARNING] [2022-10-11 22:37:44,220] Graph break: Dynamic slicing not supported from user code at File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/bert.py", line 43, in forward
    x = self.embedding(x, segment_info)
  File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/bert.py
cuda train BERT_pytorch torchdynamo.symbolic_convert: [WARNING] [2022-10-11 16:01:12,343] Graph break: call_function in skip_files /scratch/whc/work/torchdynamo/torchdynamo/utils.py from user code at File "benchmarks/torchbench.py", line 357, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
torchdynamo.symbolic_convert: [WARNING] [2022-10-11 16:01:12,444] Graph break: call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {} from user code at File "benchmarks/torchbench.py", line 358, in <graph break in forward_and_backward_pass>
    mod.zero_grad(True)
torchdynamo.symbolic_convert: [WARNING] [2022-10-11 16:01:12,461] Graph break: Dynamic slicing not supported from user code at File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/bert.py", line 43, in forward
    x = self.embedding(x, segment_info)
  File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/bert.py
cuda train BERT_pytorch torchdynamo.symbolic_convert: [WARNING] [2022-10-11 13:32:35,104] Graph break: call_function in skip_files /scratch/whc/work/torchdynamo/torchdynamo/utils.py from user code at File "benchmarks/torchbench.py", line 357, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
torchdynamo.symbolic_convert: [WARNING] [2022-10-11 13:32:35,210] Graph break: call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {} from user code at File "benchmarks/torchbench.py", line 358, in <graph break in forward_and_backward_pass>
    mod.zero_grad(True)
torchdynamo.symbolic_convert: [WARNING] [2022-10-11 13:32:35,228] Graph break: Dynamic slicing not supported from user code at File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/bert.py", line 43, in forward
    x = self.embedding(x, segment_info)
  File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/bert.py
cuda train BERT_pytorch torchdynamo.symbolic_convert: [WARNING] [2022-10-10 17:04:03,197] Graph break: call_function in skip_files /scratch/whc/work/torchdynamo/torchdynamo/utils.py from user code at File "benchmarks/torchbench.py", line 357, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
torchdynamo.symbolic_convert: [WARNING] [2022-10-10 17:04:03,299] Graph break: call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {} from user code at File "benchmarks/torchbench.py", line 358, in <graph break in forward_and_backward_pass>
    mod.zero_grad(True)
torchdynamo.symbolic_convert: [WARNING] [2022-10-10 17:04:03,317] Graph break: Dynamic slicing not supported from user code at File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/bert.py", line 43, in forward
    x = self.embedding(x, segment_info)
  File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/bert.py
cuda train BERT_pytorch torchdynamo.symbolic_convert: [WARNING] Graph break: call_function in skip_files /scratch/whc/work/torchdynamo/torchdynamo/utils.py from user code at File "benchmarks/torchbench.py", line 341, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {} from user code at File "benchmarks/torchbench.py", line 342, in <graph break in forward_and_backward_pass>
    mod.zero_grad(True)
torchdynamo.symbolic_convert: [WARNING] Graph break: Dynamic slicing not supported from user code at File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/bert.py", line 43, in forward
    x = self.embedding(x, segment_info)
  File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/bert.py", line 32, in forward
    x = self.token(sequence) + self.position(sequence)
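The "Dynamic slicing not supported" break is reported at the embedding call; in typical BERT implementations the positional embedding then slices a precomputed table by the runtime sequence length, which the tracer cannot fold into a single static graph. A minimal plain-Python sketch of that pattern (the names `positional_slice` and `pe` are illustrative stand-ins, not from the benchmark source):

```python
def positional_slice(pe, x):
    """Return the positional entries matching the input length.

    pe: precomputed positional table (a list standing in for a tensor buffer).
    x:  input token sequence; len(x) is only known at call time, so
        `pe[:len(x)]` is a data-dependent (dynamic) slice -- the kind of
        bound TorchDynamo flagged as unsupported in the log above.
    """
    return pe[:len(x)]

pe = list(range(512))            # table sized for the maximum sequence length
tokens = [101, 2023, 2003, 102]  # a 4-token input
print(positional_slice(pe, tokens))  # [0, 1, 2, 3]
```

Because the slice bound varies per input, the tracer falls back to eager execution at this point, splitting the model into multiple compiled graphs.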