
[2022-11-07 21:10:04,319] torch._dynamo.testing: [WARNING] High loss value alert - 10.97. Can result in unstable gradients.
cuda train AllenaiLongformerBase [2022-11-07 21:10:04,744] torch._dynamo.testing: [WARNING] High loss value alert - 10.97. Can result in unstable gradients.
Traceback (most recent call last):
  File "benchmarks/dynamo/huggingface.py", line 529, in <module>
    main(HuggingfaceRunner())
  File "/scratch/whc/work/pytorch/benchmarks/dynamo/common.py", line 1579, in main
    return maybe_fresh_cache(run, args.cold_start_latency and args.only)(
  File "/scratch/whc/work/pytorch/benchmarks/dynamo/common.py", line 775, in inner
    return fn(*args, **kwargs)
  File "/scratch/whc/work/pytorch/benchmarks/dynamo/common.py", line 1905, in run
wconstab / BERT_pytorch.log
Created October 17, 2022 21:13
Torchbench logs auto upload
cuda train BERT_pytorch [2022-10-17 17:29:01,707] torchdynamo.symbolic_convert: [WARNING] Graph break: call_function in skip_files /scratch/whc2/work/torchdynamo/torchdynamo/utils.py from user code at File "benchmarks/torchbench.py", line 331, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
[2022-10-17 17:29:01,814] torchdynamo.symbolic_convert: [WARNING] Graph break: call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {} from user code at File "benchmarks/torchbench.py", line 332, in <graph break in forward_and_backward_pass>
    mod.zero_grad(True)
[2022-10-17 17:29:01,832] torchdynamo.symbolic_convert: [WARNING] Graph break: Dynamic slicing not supported from user code at File "/scratch/whc2/work/benchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/bert.py", line 43, in forward
    x = self.embedding(x, segment_info)
  File "/scratch/whc2/work/benchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/bert.py", line
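The first two graph breaks in each run come from the torchbench training-step harness rather than the model itself. Below is a minimal sketch of that step's shape; the names and signatures are illustrative (the real code lives in benchmarks/torchbench.py), and only PyTorch is assumed:

```python
import torch

def forward_and_backward_pass(mod, inputs, loss_fn):
    # Sketch of the torchbench training step referenced in the logs:
    # clone inputs, zero gradients, then run forward + backward.
    cloned_inputs = [t.clone() for t in inputs]  # clone_inputs() broke the graph
    mod.zero_grad(True)  # zero_grad(set_to_none=True), the call_method that broke
    out = mod(*cloned_inputs)
    loss = loss_fn(out)
    loss.backward()
    return loss
```

Each break forces Dynamo to compile the surrounding code as separate graphs, which is why the same three warnings recur on every run.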
wconstab / BERT_pytorch.log
Created October 13, 2022 20:19
Torchbench logs auto upload
cuda train BERT_pytorch [2022-10-13 19:40:23,368] torchdynamo.symbolic_convert: [WARNING] Graph break: call_function in skip_files /scratch/whc2/work/torchdynamo/torchdynamo/utils.py from user code at File "benchmarks/torchbench.py", line 325, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
[2022-10-13 19:40:23,482] torchdynamo.symbolic_convert: [WARNING] Graph break: call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {} from user code at File "benchmarks/torchbench.py", line 326, in <graph break in forward_and_backward_pass>
    mod.zero_grad(True)
[2022-10-13 19:40:23,502] torchdynamo.symbolic_convert: [WARNING] Graph break: Dynamic slicing not supported from user code at File "/scratch/whc2/work/benchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/bert.py", line 43, in forward
    x = self.embedding(x, segment_info)
  File "/scratch/whc2/work/benchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/bert.py", line
wconstab / BERT_pytorch.log
Created October 11, 2022 22:56
Torchbench logs auto upload
cuda train BERT_pytorch torchdynamo.symbolic_convert: [WARNING] [2022-10-11 22:37:44,101] Graph break: call_function in skip_files /scratch/whc/work/torchdynamo/torchdynamo/utils.py from user code at File "benchmarks/torchbench.py", line 357, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
torchdynamo.symbolic_convert: [WARNING] [2022-10-11 22:37:44,202] Graph break: call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {} from user code at File "benchmarks/torchbench.py", line 358, in <graph break in forward_and_backward_pass>
    mod.zero_grad(True)
torchdynamo.symbolic_convert: [WARNING] [2022-10-11 22:37:44,220] Graph break: Dynamic slicing not supported from user code at File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/bert.py", line 43, in forward
    x = self.embedding(x, segment_info)
  File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/bert.py
wconstab / BERT_pytorch.log
Created October 11, 2022 16:13
Torchbench logs auto upload
cuda train BERT_pytorch torchdynamo.symbolic_convert: [WARNING] [2022-10-11 16:01:12,343] Graph break: call_function in skip_files /scratch/whc/work/torchdynamo/torchdynamo/utils.py from user code at File "benchmarks/torchbench.py", line 357, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
torchdynamo.symbolic_convert: [WARNING] [2022-10-11 16:01:12,444] Graph break: call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {} from user code at File "benchmarks/torchbench.py", line 358, in <graph break in forward_and_backward_pass>
    mod.zero_grad(True)
torchdynamo.symbolic_convert: [WARNING] [2022-10-11 16:01:12,461] Graph break: Dynamic slicing not supported from user code at File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/bert.py", line 43, in forward
    x = self.embedding(x, segment_info)
  File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/bert.py
wconstab / BERT_pytorch.log
Created October 11, 2022 15:45
Torchbench logs auto upload
cuda train BERT_pytorch torchdynamo.symbolic_convert: [WARNING] [2022-10-11 13:32:35,104] Graph break: call_function in skip_files /scratch/whc/work/torchdynamo/torchdynamo/utils.py from user code at File "benchmarks/torchbench.py", line 357, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
torchdynamo.symbolic_convert: [WARNING] [2022-10-11 13:32:35,210] Graph break: call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {} from user code at File "benchmarks/torchbench.py", line 358, in <graph break in forward_and_backward_pass>
    mod.zero_grad(True)
torchdynamo.symbolic_convert: [WARNING] [2022-10-11 13:32:35,228] Graph break: Dynamic slicing not supported from user code at File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/bert.py", line 43, in forward
    x = self.embedding(x, segment_info)
  File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/bert.py
wconstab / BERT_pytorch.log
Created October 10, 2022 22:20
Torchbench logs auto upload
cuda train BERT_pytorch torchdynamo.symbolic_convert: [WARNING] [2022-10-10 17:04:03,197] Graph break: call_function in skip_files /scratch/whc/work/torchdynamo/torchdynamo/utils.py from user code at File "benchmarks/torchbench.py", line 357, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
torchdynamo.symbolic_convert: [WARNING] [2022-10-10 17:04:03,299] Graph break: call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {} from user code at File "benchmarks/torchbench.py", line 358, in <graph break in forward_and_backward_pass>
    mod.zero_grad(True)
torchdynamo.symbolic_convert: [WARNING] [2022-10-10 17:04:03,317] Graph break: Dynamic slicing not supported from user code at File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/bert.py", line 43, in forward
    x = self.embedding(x, segment_info)
  File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/bert.py
wconstab / BERT_pytorch.log
Created October 7, 2022 18:44
Torchbench logs auto upload
cuda train BERT_pytorch torchdynamo.symbolic_convert: [WARNING] Graph break: call_function in skip_files /scratch/whc/work/torchdynamo/torchdynamo/utils.py from user code at File "benchmarks/torchbench.py", line 341, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {} from user code at File "benchmarks/torchbench.py", line 342, in <graph break in forward_and_backward_pass>
    mod.zero_grad(True)
torchdynamo.symbolic_convert: [WARNING] Graph break: Dynamic slicing not supported from user code at File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/bert.py", line 43, in forward
    x = self.embedding(x, segment_info)
  File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/bert.py", line 32, in forward
    x = self.token(sequence) + self.position(sequence)
wconstab / BERT_pytorch.log
Created October 7, 2022 03:28
Torchbench logs auto upload
cuda train BERT_pytorch torchdynamo.symbolic_convert: [WARNING] Graph break: call_function in skip_files /scratch/whc/work/torchdynamo/torchdynamo/utils.py from user code at File "benchmarks/torchbench.py", line 341, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {} from user code at File "benchmarks/torchbench.py", line 342, in <graph break in forward_and_backward_pass>
    mod.zero_grad(True)
torchdynamo.symbolic_convert: [WARNING] Graph break: Dynamic slicing not supported from user code at File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/bert.py", line 43, in forward
    x = self.embedding(x, segment_info)
  File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/bert.py", line 32, in forward
    x = self.token(sequence) + self.position(sequence)
wconstab / BERT_pytorch.log
Created October 4, 2022 14:25
Torchbench logs auto upload
cuda train BERT_pytorch torchdynamo.symbolic_convert: [WARNING] Graph break: call_function in skip_files /scratch/whc/work/torchdynamo/torchdynamo/utils.py from user code at File "benchmarks/torchbench.py", line 341, in forward_and_backward_pass
    cloned_inputs = clone_inputs(inputs)
torchdynamo.symbolic_convert: [WARNING] Graph break: call_method NNModuleVariable() zero_grad [ConstantVariable(bool)] {} from user code at File "benchmarks/torchbench.py", line 342, in <graph break in forward_and_backward_pass>
    mod.zero_grad(True)
torchdynamo.symbolic_convert: [WARNING] Graph break: Dynamic slicing not supported from user code at File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/bert.py", line 43, in forward
    x = self.embedding(x, segment_info)
  File "/scratch/whc/work/torchbenchmark/torchbenchmark/models/BERT_pytorch/bert_pytorch/model/embedding/bert.py", line 32, in forward
    x = self.token(sequence) + self.position(sequence)
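Every run above ends at the same "Dynamic slicing not supported" break: BERT_pytorch's embedding slices a precomputed positional table by the input's runtime sequence length. A minimal sketch of that pattern (shapes and names are illustrative, not the benchmark's exact code; only PyTorch is assumed):

```python
import torch

def positional_embedding(x, pe):
    # Slice the precomputed positional table by the runtime sequence length.
    # This data-dependent slice is what TorchDynamo at the time reported as
    # "Dynamic slicing not supported", splitting the graph at this point.
    return pe[:, :x.size(1)]

x = torch.zeros(8, 20, dtype=torch.long)  # (batch, seq_len) token ids
pe = torch.randn(1, 512, 768)             # (1, max_len, hidden) position table
out = positional_embedding(x, pe)         # shape (1, 20, 768)
```

Because the slice length varies with the batch, the model recompiles around this point rather than tracing through it, which is why the break appears identically in every snapshot.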