@wonjoolee95 · gist f47a50fdfc72ca585694ce1ce9290c14 · created January 5, 2023 02:56
(base) jenkins@26d7adccbc26:/workspace/pytorch/xla$ TORCH_SHOW_DISPATCH_TRACE=1 python test/dynamo/test_dynamo.py -k test_simple_model
/opt/conda/lib/python3.7/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libc10_cuda.so: cannot open shared object file: No such file or directory
warn(f"Failed to load image Python extension: {e}")
2023-01-05 02:54:58.628640: W 458813 tensorflow/tsl/platform/default/dso_loader.cc:66] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
2023-01-05 02:54:58.628690: W 458813 tensorflow/compiler/xla/stream_executor/cuda/cuda_driver.cc:265] failed call to cuInit: UNKNOWN ERROR (303)
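
The [call] / [redispatch] / [callBoxed] lines below are produced by TORCH_SHOW_DISPATCH_TRACE=1 and show, for each aten op, which dispatch key ends up handling it. For orientation, here is a minimal sketch of the kind of workload that would produce this trace, reconstructed from the aten::cos / aten::sin / aten::add.Tensor calls, the CPU-to-XLA copies, and the aten::allclose check near the end; the function body, names, and shapes are assumptions, not the actual test/dynamo/test_dynamo.py source.

# Hypothetical reconstruction of the traced workload; names and shapes are assumed.
import torch
import torch_xla.core.xla_model as xm

def fn_simple(x, y):
    a = torch.cos(x)   # aten::cos in the trace (CPU block, then AutogradXLA -> Functionalize -> XLA)
    b = torch.sin(y)   # aten::sin
    return a + b       # aten::add.Tensor

device = xm.xla_device()
x_cpu, y_cpu = torch.tensor(100.0), torch.tensor(200.0)
x_xla, y_xla = x_cpu.to(device), y_cpu.to(device)   # the aten::_to_copy -> XLA blocks

cpu_res = fn_simple(x_cpu, y_cpu)                   # the CPU cos/sin/add block
xla_res = fn_simple(x_xla, y_xla)                   # the XLA cos/sin/add block
torch.allclose(cpu_res, xla_res.cpu())              # the aten::allclose block near the end
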
[call] op=[aten::empty.memory_format], key=[BackendSelect]
[redispatch] op=[aten::empty.memory_format], key=[CPU]
[call] op=[aten::to.device], key=[CPU]
[call] op=[aten::lift_fresh], key=[CPU]
[call] op=[aten::detach_], key=[AutogradCPU]
[call] op=[aten::empty.memory_format], key=[BackendSelect]
[redispatch] op=[aten::empty.memory_format], key=[CPU]
[call] op=[aten::to.device], key=[CPU]
[call] op=[aten::lift_fresh], key=[CPU]
[call] op=[aten::detach_], key=[AutogradCPU]
[call] op=[aten::to.dtype_layout], key=[AutogradCPU]
[call] op=[aten::_to_copy], key=[AutogradCPU]
[redispatch] op=[aten::_to_copy], key=[BackendSelect]
[redispatch] op=[aten::_to_copy], key=[XLA]
[call] op=[aten::to.dtype_layout], key=[BackendSelect]
[redispatch] op=[aten::to.dtype_layout], key=[CPU]
[call] op=[aten::item], key=[CPU]
[call] op=[aten::_local_scalar_dense], key=[CPU]
[call] op=[aten::to.dtype_layout], key=[BackendSelect]
[redispatch] op=[aten::to.dtype_layout], key=[CPU]
[call] op=[aten::_to_copy], key=[BackendSelect]
[redispatch] op=[aten::_to_copy], key=[CPU]
[call] op=[aten::empty_strided], key=[BackendSelect]
[redispatch] op=[aten::empty_strided], key=[CPU]
[call] op=[aten::copy_], key=[CPU]
[call] op=[aten::to.dtype_layout], key=[AutogradCPU]
[call] op=[aten::_to_copy], key=[AutogradCPU]
[redispatch] op=[aten::_to_copy], key=[BackendSelect]
[redispatch] op=[aten::_to_copy], key=[XLA]
[call] op=[aten::to.dtype_layout], key=[BackendSelect]
[redispatch] op=[aten::to.dtype_layout], key=[CPU]
[call] op=[aten::item], key=[CPU]
[call] op=[aten::_local_scalar_dense], key=[CPU]
[call] op=[aten::to.dtype_layout], key=[BackendSelect]
[redispatch] op=[aten::to.dtype_layout], key=[CPU]
[call] op=[aten::_to_copy], key=[BackendSelect]
[redispatch] op=[aten::_to_copy], key=[CPU]
[call] op=[aten::empty_strided], key=[BackendSelect]
[redispatch] op=[aten::empty_strided], key=[CPU]
[call] op=[aten::copy_], key=[CPU]
[call] op=[aten::cos], key=[AutogradCPU]
[redispatch] op=[aten::cos], key=[CPU]
[call] op=[aten::sin], key=[AutogradCPU]
[redispatch] op=[aten::sin], key=[CPU]
[call] op=[aten::add.Tensor], key=[AutogradCPU]
[redispatch] op=[aten::add.Tensor], key=[CPU]
[call] op=[aten::clone], key=[AutogradCPU]
[redispatch] op=[aten::clone], key=[CPU]
[call] op=[aten::empty_strided], key=[BackendSelect]
[redispatch] op=[aten::empty_strided], key=[CPU]
[call] op=[aten::copy_], key=[CPU]
[2023-01-05 02:54:58,798] torch._dynamo.utils: [WARNING] Unsupported: meta converter nyi with fake tensor propagation.
[call] op=[aten::clone], key=[AutogradCPU]
[redispatch] op=[aten::clone], key=[CPU]
[call] op=[aten::empty_strided], key=[BackendSelect]
[redispatch] op=[aten::empty_strided], key=[CPU]
[call] op=[aten::copy_], key=[CPU]
[2023-01-05 02:54:58,800] torch._dynamo.utils: [WARNING] Unsupported: meta converter nyi with fake tensor propagation.
[call] op=[aten::cos], key=[AutogradXLA]
[redispatch] op=[aten::cos], key=[Functionalize]
[callBoxed] op=[aten::cos], key=[XLA]
[call] op=[aten::sin], key=[AutogradXLA]
[redispatch] op=[aten::sin], key=[Functionalize]
[callBoxed] op=[aten::sin], key=[XLA]
[call] op=[aten::add.Tensor], key=[AutogradXLA]
[redispatch] op=[aten::add.Tensor], key=[Functionalize]
[callBoxed] op=[aten::add.Tensor], key=[XLA]
[call] op=[aten::result_type.Tensor], key=[XLA]
[call] op=[aten::result_type.Tensor], key=[XLA]
[call] op=[aten::to.dtype_layout], key=[AutogradXLA]
[call] op=[aten::_to_copy], key=[AutogradXLA]
[redispatch] op=[aten::_to_copy], key=[Functionalize]
[call] op=[aten::_to_copy], key=[BackendSelect]
[redispatch] op=[aten::_to_copy], key=[XLA]
[call] op=[aten::empty.memory_format], key=[BackendSelect]
[redispatch] op=[aten::empty.memory_format], key=[CPU]
[call] op=[aten::to.dtype_layout], key=[BackendSelect]
[redispatch] op=[aten::to.dtype_layout], key=[CPU]
[call] op=[aten::_to_copy], key=[BackendSelect]
[redispatch] op=[aten::_to_copy], key=[CPU]
[call] op=[aten::empty_strided], key=[BackendSelect]
[redispatch] op=[aten::empty_strided], key=[CPU]
[call] op=[aten::copy_], key=[CPU]
[call] op=[aten::allclose], key=[AutogradCPU]
[redispatch] op=[aten::allclose], key=[CPU]
[call] op=[aten::isclose], key=[CPU]
[call] op=[aten::eq.Tensor], key=[CPU]
[call] op=[aten::mul.Scalar], key=[CPU]
[call] op=[aten::mul.Tensor], key=[CPU]
[call] op=[aten::to.dtype], key=[CPU]
[call] op=[aten::_to_copy], key=[BackendSelect]
[redispatch] op=[aten::_to_copy], key=[CPU]
[call] op=[aten::empty_strided], key=[BackendSelect]
[redispatch] op=[aten::empty_strided], key=[CPU]
[call] op=[aten::copy_], key=[CPU]
[call] op=[aten::abs], key=[CPU]
[call] op=[aten::empty.memory_format], key=[BackendSelect]
[redispatch] op=[aten::empty.memory_format], key=[CPU]
[call] op=[aten::abs.out], key=[CPU]
[call] op=[aten::add.Scalar], key=[CPU]
[call] op=[aten::add.Tensor], key=[CPU]
[call] op=[aten::to.dtype], key=[CPU]
[call] op=[aten::_to_copy], key=[BackendSelect]
[redispatch] op=[aten::_to_copy], key=[CPU]
[call] op=[aten::empty_strided], key=[BackendSelect]
[redispatch] op=[aten::empty_strided], key=[CPU]
[call] op=[aten::copy_], key=[CPU]
[call] op=[aten::sub.Tensor], key=[CPU]
[call] op=[aten::abs], key=[CPU]
[call] op=[aten::empty.memory_format], key=[BackendSelect]
[redispatch] op=[aten::empty.memory_format], key=[CPU]
[call] op=[aten::abs.out], key=[CPU]
[call] op=[aten::isfinite], key=[CPU]
[call] op=[aten::eq.Tensor], key=[CPU]
[call] op=[aten::abs], key=[CPU]
[call] op=[aten::empty.memory_format], key=[BackendSelect]
[redispatch] op=[aten::empty.memory_format], key=[CPU]
[call] op=[aten::abs.out], key=[CPU]
[call] op=[aten::ne.Scalar], key=[CPU]
[call] op=[aten::to.dtype], key=[CPU]
[call] op=[aten::_to_copy], key=[BackendSelect]
[redispatch] op=[aten::_to_copy], key=[CPU]
[call] op=[aten::empty_strided], key=[BackendSelect]
[redispatch] op=[aten::empty_strided], key=[CPU]
[call] op=[aten::copy_], key=[CPU]
[call] op=[aten::mul.Tensor], key=[CPU]
[call] op=[aten::le.Tensor], key=[CPU]
[call] op=[aten::__iand__.Tensor], key=[CPU]
[call] op=[aten::bitwise_and_.Tensor], key=[CPU]
[call] op=[aten::__ior__.Tensor], key=[CPU]
[call] op=[aten::bitwise_or_.Tensor], key=[CPU]
[call] op=[aten::all], key=[CPU]
[call] op=[aten::view_as], key=[CPU]
[call] op=[aten::view], key=[CPU]
[call] op=[aten::to.dtype], key=[CPU]
[call] op=[aten::copy_], key=[CPU]
[call] op=[aten::item], key=[CPU]
[call] op=[aten::_local_scalar_dense], key=[CPU]
WONJOO: []
[call] op=[aten::cos], key=[AutogradXLA]
[redispatch] op=[aten::cos], key=[Functionalize]
[callBoxed] op=[aten::cos], key=[XLA]
[call] op=[aten::sin], key=[AutogradXLA]
[redispatch] op=[aten::sin], key=[Functionalize]
[callBoxed] op=[aten::sin], key=[XLA]
[call] op=[aten::add.Tensor], key=[AutogradXLA]
[redispatch] op=[aten::add.Tensor], key=[Functionalize]
[callBoxed] op=[aten::add.Tensor], key=[XLA]
[call] op=[aten::result_type.Tensor], key=[XLA]
[call] op=[aten::result_type.Tensor], key=[XLA]
F
======================================================================
FAIL: test_simple_model (__main__.DynamoBasicTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test/dynamo/test_dynamo.py", line 44, in test_simple_model
    self.assertNotIn('xla::add', met.counter_names())
AssertionError: 'xla::add' unexpectedly found in ['CreateXlaTensor', 'DestroyLtcTensor', 'DestroyXlaTensor', 'xla::add', 'xla::cos', 'xla::sin']

----------------------------------------------------------------------
Ran 1 test in 0.232s

FAILED (failures=1)
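
The failing check is the one quoted in the traceback: the test expects that, after the dynamo-compiled run, no per-op 'xla::add' counter remains, yet the metrics report still lists xla::add, xla::cos, and xla::sin. A minimal sketch of how such counters can be inspected with the torch_xla metrics API follows; the eager workload shown is an assumption for illustration, not the test itself.

# Sketch: inspecting torch_xla op counters after an eager run (illustrative only).
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met

device = xm.xla_device()
x = torch.randn(3, 3, device=device)
y = torch.randn(3, 3, device=device)

res = torch.cos(x) + torch.sin(y)   # eager XLA path records per-op lowering counters
xm.mark_step()                      # force execution of the pending graph

print(met.counter_names())          # e.g. [..., 'xla::add', 'xla::cos', 'xla::sin']
# The test asserts these per-op counters are absent after the dynamo-bridged run:
#     self.assertNotIn('xla::add', met.counter_names())
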