@Icemole
Created June 26, 2023 13:23
PyTorch: 2.0.1+cu117 (e9ebda29d87ce0916ab08c06ab26fd3766a870e5) (<site-package> in /home/nbeneitez/.venv/returnn_pytorch/lib/python3.10/site-packages/torch)
/home/nbeneitez/Documentos/work/repos/returnn_pytorch/returnn/tensor/_tensor_extra.py:327: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if raw_shape[i] != self.batch_shape[i]:
/home/nbeneitez/Documentos/work/repos/returnn_pytorch/returnn/frontend/_backend.py:136: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
assert all(dim is None or dim == existing_shape[i] for i, dim in enumerate(shape))
WARNING:root:Cannot infer dynamic size for dim Dim{'time:var-unk:output'[B?]} from output 'output' Tensor{'raw_tensor', [B?,'time:var-unk:output'[B?],F'feature:output'(2)]}. Using Tensor{'full', [B?], dtype='int32'} as fallback.
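
The two TracerWarnings above appear to come from shape checks whose comparison result is a tensor that gets forced into a Python boolean during tracing, so whichever branch the example input takes is frozen into the exported graph. A minimal, generic repro of the same pattern (not the RETURNN code itself) is:

import torch

def f(x: torch.Tensor) -> torch.Tensor:
    # x.sum() > 0 is a 0-dim tensor; `if` converts it to a Python bool,
    # so the branch chosen for the example input is baked into the trace.
    if x.sum() > 0:  # emits the same TracerWarning
        return x * 2
    return x

torch.jit.trace(f, torch.randn(3, 9))
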
Exported graph: graph(%data : Float(*, *, 9, strides=[144, 9, 1], requires_grad=0, device=cpu),
%classes:size0 : Int(requires_grad=0, device=cpu),
%model.layers.0.weight : Float(50, 9, 5, strides=[45, 5, 1], requires_grad=1, device=cpu),
%model.layers.0.bias : Float(50, strides=[1], requires_grad=1, device=cpu),
%model.layers.2.weight : Float(100, 50, 5, strides=[250, 5, 1], requires_grad=1, device=cpu),
%model.layers.2.bias : Float(100, strides=[1], requires_grad=1, device=cpu),
%model.layers.4.weight : Float(2, 100, 5, strides=[500, 5, 1], requires_grad=1, device=cpu),
%model.layers.4.bias : Float(2, strides=[1], requires_grad=1, device=cpu)):
%model/Transpose_output_0 : Float(*, 9, *, strides=[144, 1, 9], requires_grad=0, device=cpu) = onnx::Transpose[perm=[0, 2, 1], onnx_name="model/Transpose"](%data), scope: __returnn_config__.Model::model # demos/demo-torch.config:51:0
%model/layers/layers.0/Conv_output_0 : Float(*, 50, *, strides=[800, 16, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1], group=1, kernel_shape=[5], pads=[2, 2], strides=[1], onnx_name="model/layers/layers.0/Conv"](%model/Transpose_output_0, %model.layers.0.weight, %model.layers.0.bias), scope: __returnn_config__.Model::model/torch.nn.modules.container.Sequential::layers/torch.nn.modules.conv.Conv1d::layers.0 # /home/nbeneitez/.venv/returnn_pytorch/lib/python3.10/site-packages/torch/nn/modules/conv.py:309:0
%model/layers/layers.1/Relu_output_0 : Float(*, 50, *, strides=[800, 16, 1], requires_grad=1, device=cpu) = onnx::Relu[onnx_name="model/layers/layers.1/Relu"](%model/layers/layers.0/Conv_output_0), scope: __returnn_config__.Model::model/torch.nn.modules.container.Sequential::layers/torch.nn.modules.activation.ReLU::layers.1 # /home/nbeneitez/.venv/returnn_pytorch/lib/python3.10/site-packages/torch/nn/functional.py:1457:0
%model/layers/layers.2/Conv_output_0 : Float(*, 100, *, strides=[1600, 16, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1], group=1, kernel_shape=[5], pads=[2, 2], strides=[1], onnx_name="model/layers/layers.2/Conv"](%model/layers/layers.1/Relu_output_0, %model.layers.2.weight, %model.layers.2.bias), scope: __returnn_config__.Model::model/torch.nn.modules.container.Sequential::layers/torch.nn.modules.conv.Conv1d::layers.2 # /home/nbeneitez/.venv/returnn_pytorch/lib/python3.10/site-packages/torch/nn/modules/conv.py:309:0
%model/layers/layers.3/Relu_output_0 : Float(*, 100, *, strides=[1600, 16, 1], requires_grad=1, device=cpu) = onnx::Relu[onnx_name="model/layers/layers.3/Relu"](%model/layers/layers.2/Conv_output_0), scope: __returnn_config__.Model::model/torch.nn.modules.container.Sequential::layers/torch.nn.modules.activation.ReLU::layers.3 # /home/nbeneitez/.venv/returnn_pytorch/lib/python3.10/site-packages/torch/nn/functional.py:1457:0
%model/layers/layers.4/Conv_output_0 : Float(*, 2, *, strides=[32, 16, 1], requires_grad=0, device=cpu) = onnx::Conv[dilations=[1], group=1, kernel_shape=[5], pads=[2, 2], strides=[1], onnx_name="model/layers/layers.4/Conv"](%model/layers/layers.3/Relu_output_0, %model.layers.4.weight, %model.layers.4.bias), scope: __returnn_config__.Model::model/torch.nn.modules.container.Sequential::layers/torch.nn.modules.conv.Conv1d::layers.4 # /home/nbeneitez/.venv/returnn_pytorch/lib/python3.10/site-packages/torch/nn/modules/conv.py:309:0
%output : Float(*, *, 2, strides=[32, 1, 16], requires_grad=1, device=cpu) = onnx::Transpose[perm=[0, 2, 1], onnx_name="model/Transpose_1"](%model/layers/layers.4/Conv_output_0), scope: __returnn_config__.Model::model # demos/demo-torch.config:53:0
%onnx::Gather_19 : Long(3, strides=[1], device=cpu) = onnx::Shape(%output) # /home/nbeneitez/Documentos/work/repos/returnn_pytorch/returnn/torch/frontend/_backend.py:113:0
%onnx::Gather_20 : Long(device=cpu) = onnx::Constant[value={1}]() # /home/nbeneitez/Documentos/work/repos/returnn_pytorch/returnn/torch/frontend/_backend.py:113:0
%onnx::Cast_21 : Long(device=cpu) = onnx::Gather[axis=0](%onnx::Gather_19, %onnx::Gather_20) # /home/nbeneitez/Documentos/work/repos/returnn_pytorch/returnn/torch/frontend/_backend.py:113:0
%onnx::Add_22 : Int(requires_grad=0, device=cpu) = onnx::Cast[to=6](%onnx::Cast_21) # /home/nbeneitez/Documentos/work/repos/returnn_pytorch/returnn/torch/frontend/_backend.py:182:0
%onnx::Unsqueeze_23 : Long(requires_grad=0, device=cpu) = onnx::Cast[to=7](%classes:size0) # /home/nbeneitez/Documentos/work/repos/returnn_pytorch/returnn/torch/frontend/_backend.py:678:0
%onnx::Unsqueeze_24 : Long(1, strides=[1], device=cpu) = onnx::Constant[value={0}]()
%onnx::Concat_25 : Long(1, strides=[1], device=cpu) = onnx::Unsqueeze(%onnx::Unsqueeze_23, %onnx::Unsqueeze_24)
%onnx::ConstantOfShape_26 : Long(1, strides=[1], device=cpu) = onnx::Concat[axis=0](%onnx::Concat_25) # /home/nbeneitez/Documentos/work/repos/returnn_pytorch/returnn/torch/frontend/_backend.py:681:0
%onnx::Add_27 : Int(*, device=cpu) = onnx::ConstantOfShape[value={0}](%onnx::ConstantOfShape_26) # /home/nbeneitez/Documentos/work/repos/returnn_pytorch/returnn/torch/frontend/_backend.py:681:0
%output:size1 : Int(*, strides=[1], requires_grad=0, device=cpu) = onnx::Add(%onnx::Add_27, %onnx::Add_22) # /home/nbeneitez/Documentos/work/repos/returnn_pytorch/returnn/torch/frontend/_backend.py:681:0
%output:size0 : Int(requires_grad=0, device=cpu) = onnx::Identity(%classes:size0)
return (%output, %output:size0, %output:size1)
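
For reference, the graph above corresponds to a model along these lines, reconstructed purely from the node attributes (channels 9 -> 50 -> 100 -> 2, kernel_shape=[5], pads=[2, 2], ReLU in between, and a transpose on either side of the conv stack); this is a sketch, not the actual demo-torch.config source:

import torch
from torch import nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv1d(9, 50, kernel_size=5, padding=2),    # model.layers.0
            nn.ReLU(),                                     # model.layers.1
            nn.Conv1d(50, 100, kernel_size=5, padding=2),  # model.layers.2
            nn.ReLU(),                                     # model.layers.3
            nn.Conv1d(100, 2, kernel_size=5, padding=2),   # model.layers.4
        )

    def forward(self, data: torch.Tensor) -> torch.Tensor:
        # %data is (batch, time, 9); Conv1d expects (batch, channels, time),
        # hence the Transpose[perm=[0, 2, 1]] before and after the conv stack.
        x = data.transpose(1, 2)   # model/Transpose
        x = self.layers(x)
        return x.transpose(1, 2)   # model/Transpose_1 -> %output, (batch, time, 2)
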
============= Diagnostic Run torch.onnx.export version 2.0.1+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
Process finished with exit code 0
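
The wildcard batch/time dims in the graph (Float(*, *, 9, ...) in, Float(*, *, 2, ...) out) and the extra %classes:size0 / %output:size0 / %output:size1 values come from RETURNN's ONNX export tooling passing sequence lengths alongside the data. A plain torch.onnx.export call with dynamic axes, sketched below with assumed names ("model.onnx", arbitrary dummy sizes), produces the same kind of dynamic-shape graph for the model sketch above, minus the size tensors:

import torch

model = Model()                      # model sketch from above
dummy_data = torch.randn(3, 16, 9)   # (batch, time, features)

torch.onnx.export(
    model,
    (dummy_data,),
    "model.onnx",
    input_names=["data"],
    output_names=["output"],
    dynamic_axes={
        "data": {0: "batch", 1: "time"},
        "output": {0: "batch", 1: "time"},
    },
)
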