@boegel · Created November 29, 2021 20:25
(partial) EasyBuild log for failed build of /tmp/eb-yrxa9t7t/files_pr14460/p/PyTorch/PyTorch-1.10.0-foss-2021a.eb (PR(s) #14460)
test_view_tensor_vsplit_meta_float64 (__main__.TestViewOpsMETA) ... skipped "onlyOnCPUAndCUDA: doesn't run on meta"
test_view_tensor_vsplit_meta_int16 (__main__.TestViewOpsMETA) ... skipped "onlyOnCPUAndCUDA: doesn't run on meta"
test_view_tensor_vsplit_meta_int32 (__main__.TestViewOpsMETA) ... skipped "onlyOnCPUAndCUDA: doesn't run on meta"
test_view_tensor_vsplit_meta_int64 (__main__.TestViewOpsMETA) ... skipped "onlyOnCPUAndCUDA: doesn't run on meta"
test_view_tensor_vsplit_meta_int8 (__main__.TestViewOpsMETA) ... skipped "onlyOnCPUAndCUDA: doesn't run on meta"
test_view_tensor_vsplit_meta_uint8 (__main__.TestViewOpsMETA) ... skipped "onlyOnCPUAndCUDA: doesn't run on meta"
test_view_view_meta (__main__.TestViewOpsMETA) ... skipped "not implemented: Could not run 'aten::isnan' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::isnan' is only available for these backends: [CPU, SparseCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].

CPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]
SparseCPU: registered at aten/src/ATen/RegisterSparseCPU.cpp:958 [kernel]
BackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]
Named: fallthrough registered at ../aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]
Conjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]
Negative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]
ADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]
AutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]
AutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]
AutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]
AutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]
AutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]
AutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]
AutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]
AutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]
AutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]
AutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]
AutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]
Tracer: registered at ../torch/csrc/autograd/generated/TraceType_1.cpp:10664 [kernel]
UNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]
Autocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]
Batched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]
VmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]"
----------------------------------------------------------------------
Ran 456 tests in 6.804s
OK (skipped=212)
Running test_vmap ... [2021-11-29 21:17:56.277049]
Executing ['/user/gent/400/vsc40023/eb_arcaninescratch/CO7/skylake-ib/software/Python/3.9.5-GCCcore-10.3.0/bin/python', 'test_vmap.py', '-v'] ... [2021-11-29 21:17:56.277137]
test_accepts_nested_inputs (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:375: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
out = vmap(lambda z: z[0] + z[1])((x, y))
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:377: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
out = vmap(lambda z: z[0] + z[1], in_dims=(0,))((x, y))
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:379: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
out = vmap(lambda z: z[0] + z[1], in_dims=((0, 0),))((x, y))
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:382: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
out = vmap(lambda z: z[0] + z[1])([x, y])
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:384: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
out = vmap(lambda z: z[0] + z[1], in_dims=(0,))([x, y])
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:386: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
out = vmap(lambda z: z[0] + z[1], in_dims=([0, 0],))([x, y])
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:389: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
out = vmap(lambda z: z['x'] + z['y'])({'x': x, 'y': y})
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:391: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
out = vmap(lambda z: z['x'] + z['y'], in_dims=(0,))({'x': x, 'y': y})
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:393: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
out = vmap(lambda z: z['x'] + z['y'], in_dims=({'x': 0, 'y': 0},))({'x': x, 'y': y})
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:397: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
out_fn = vmap(lambda z: z['x'][0] + z['x'][1][0] + z['y'][0] + z['y'][1])
ok
test_backward_unsupported_interaction (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:747: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(backward_on_vmapped_tensor)(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:753: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(backward_with_vmapped_grad)(x, grad)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:759: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(completely_unrelated_backward)(y)
ok
test_batched_gradient_basic (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:791: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
jacobian = vmap(vjp_mul)(batched_v)
ok
test_constant_function (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:62: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
output = vmap(lambda x: torch.tensor(3.14))(torch.ones(3))
ok
test_different_map_dim_size_raises (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:40: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(torch.mul)(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:42: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda z: z[0] + z[1], in_dims=((0, 0),))((x, y))
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:44: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda z: z['x'] + z['y'], in_dims=({'x': 0, 'y': 0},))({'x': x, 'y': y})
ok
test_fallback_atan2 (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:553: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(op, (2, 0))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:559: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(vmap(op), (2, 0))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:565: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(vmap(vmap(op)))(x, y)
ok
test_fallback_does_not_warn_by_default (__main__.TestVmapAPI) ... ok
test_fallback_masked_fill (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:581: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(torch.index_add, (0, None, None, 0))(x, dim, index, values)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:581: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(torch.index_add, (0, None, None, 0))(x, dim, index, values)
ok
test_fallback_multiple_returns (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:599: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(torch.var_mean)(tensor)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:605: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(vmap(torch.var_mean))(tensor)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:611: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(vmap(vmap(torch.var_mean)))(tensor)
ok
test_fallback_warns_when_warnings_are_enabled (__main__.TestVmapAPI) ... ok
test_fallback_with_undefined_grad (__main__.TestVmapAPI) ... ok
test_fallback_zero_dim (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:524: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(op, (0, None))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:526: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(op, (None, 0))(y, x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:528: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(op)(x, x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:533: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(op, (0, None))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:535: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(op, (None, 0))(y, x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:537: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(op)(x, x)
ok
test_func_with_no_inputs (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:56: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(foo)()
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:59: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(bar)()
ok
test_functools_partial (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:797: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(functools.partial(torch.mul, x))(y)
ok
test_grad_unsupported_interaction (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:772: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(output_to_grad_is_vmapped)(input_tensor)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:780: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(input_to_grad_is_vmapped)(input_tensor)
ok
test_in_dim_not_in_tensor_err_msg (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:462: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(foo)(torch.randn([]))
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:464: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(foo, in_dims=(0,))(torch.randn([]))
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:466: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(foo, in_dims=(-1,))(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:468: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(foo, in_dims=(2,))(y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:470: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda z: z[0] + z[1], in_dims=([3, 0],))([x, y])
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:472: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(foo, in_dims=(0,))(torch.randn(2, 3))
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:473: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(foo, in_dims=(1,))(torch.randn(2, 3))
ok
test_in_dims_wrong_type_err_msg (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:406: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(torch.mul, [0, 0])(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:408: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(torch.mul, set({0, 0}))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:410: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(torch.mul, 'lol')(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:412: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda z: z[0] + z[1], in_dims=[0, 0])([x, y])
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:414: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(torch.mul, (0, 0))(x, y)
ok
test_inplace_fallback_nary_different_levels (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:706: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(op, in_dims=(0, None))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:712: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(vmap(op, in_dims=(0, None)))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:720: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(op, in_dims=(None, 0))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:725: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(vmap(op, in_dims=(0, None)), in_dims=(None, 0))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:730: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(vmap(op, in_dims=(0, None)), in_dims=(None, 1))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:735: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(vmap(op, in_dims=(None, 0)))(x, y)
ok
test_inplace_fallback_nary_same_levels (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:671: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(op, (2, 0))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:679: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(vmap(op), (2, 0))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:687: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(vmap(vmap(op)))(x, y)
ok
test_inplace_fallback_unary (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:630: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(op)(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:637: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(op, out_dims=(1,))(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:644: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(vmap(op))(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:651: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(vmap(vmap(op)))(x)
ok
test_integer_in_dim_but_not_tensor_input_err_msg (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:445: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(torch.sum)(x, 0)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:447: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(torch.sum, (0, 0))(x, 0)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:449: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda z: z[0] + z[1], in_dims=([0, 0],))([x, 1])
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:451: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(torch.sum, (0, None))(x, 0)
ok
test_multiple_inputs (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:77: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
output = vmap(torch.mul)(x, y)
ok
test_multiple_out_dims (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:219: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(foo, out_dims=(0, 1))(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:222: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(bar, out_dims=(-1, 0, 1, 2))(x, y)
ok
test_multiple_outputs (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:85: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
outputs = vmap(foo)(x)
ok
test_multiple_outputs_error_cases (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:105: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(returns_tuple_of_tensors)(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:110: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(returns_list_of_two_tensors)(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:112: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(returns_list_of_one_tensor)(x)
ok
test_nested_non_default_in_dims (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:336: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(vmap(vmap(torch.mul), (1, 0)), (1, 2))(x, y)
ok
test_nested_out_dims (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:235: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(lambda y: vmap(lambda x: x, out_dims=1)(y))(y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:240: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(lambda y: vmap(lambda x: x, out_dims=1)(y), out_dims=1)(y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:245: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(lambda y: vmap(lambda x: x, out_dims=-1)(y), out_dims=-1)(y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:252: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(lambda y: vmap(lambda x: x * y, out_dims=1)(x), out_dims=-1)(y)
ok
test_nested_with_different_map_dim (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:126: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
output = vmap(lambda x: vmap(lambda y: x * y)(y))(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:131: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
output = vmap(lambda x: vmap(lambda y: vmap(lambda z: x * y * z)(z))(y))(x)
ok
test_nested_with_same_map_dim (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:117: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
output = vmap(vmap(torch.mul))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:120: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
output = vmap(vmap(vmap(torch.mul)))(x, y)
ok
test_nn_module (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:803: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(model)(tensor)
ok
test_non_default_in_dims_out_dims (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:343: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(lambda x: x, in_dims=1, out_dims=1)(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:348: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(lambda x: x, in_dims=2, out_dims=1)(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:357: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(foo, in_dims=1, out_dims=1)(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:361: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(foo, in_dims=2, out_dims=1)(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:366: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(vmap(foo, 1, 1), 1, 1)(x)
ok
test_non_tensor_output_raises (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:27: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
output = vmap(lambda x: 3.14)(torch.ones(3))
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:33: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(multiple_outputs)(torch.ones(3))
ok
test_non_zero_in_dims (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:309: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
output = vmap(lambda x: x, (1,))(tensor)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:315: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
output = vmap(torch.mul, (0, 1))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:317: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
output = vmap(torch.mul, (1, 0))(x, y)
ok
test_none_in_dims (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:325: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
output = vmap(torch.mul, (0, None))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:330: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
output = vmap(torch.mul, (0, None))(x, 2)
ok
test_nonzero_out_dims (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:170: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(lambda x: x, out_dims=1)(tensor)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:176: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(lambda x: x, out_dims=2)(tensor)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:182: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(lambda x: x, out_dims=-1)(tensor)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:189: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(lambda x, y: (x, y), out_dims=2)(tensor, other)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:197: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(lambda x: x, out_dims=2)(tensor)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:205: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(foo, out_dims=1)(x, y)
ok
test_noop_in_inner_vmap (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:138: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
output = vmap(lambda x: vmap(lambda y: x)(y))(x)
ok
test_not_enough_in_dims_err_msg (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:422: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(torch.mul, (0,))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:424: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(torch.mul, (0, 0, 0))(x, y)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:426: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda z: z[0] + z[1], in_dims=([0],))([x, y])
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:428: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda z: z[0] + z[1], in_dims=((0, 0),))([x, y])
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:430: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(torch.mul, (0, 0))(x, y)
ok
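(test_not_enough_in_dims_err_msg pins down the arity rule for in_dims. A minimal sketch of the constraint, with hypothetical inputs:

    import torch

    x = torch.randn(2, 3)
    y = torch.randn(2, 3)
    # A tuple in_dims needs exactly one entry per positional input:
    out = torch.vmap(torch.mul, in_dims=(0, 0))(x, y)   # accepted
    # torch.vmap(torch.mul, in_dims=(0,))(x, y)         # raises: 2 inputs, 1 in_dim
)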
test_out_dim_out_of_bounds_err_msg (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:301: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda x: x, out_dims=3)(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:303: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda x: x, out_dims=-4)(x)
ok
test_out_dims_and_num_outputs_mismatch_err_msg (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:284: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda x: x, out_dims=(0, 0))(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:286: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda x: (x, x, x), out_dims=(0, 0, 0, 0))(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:290: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda x: (x, x), out_dims=(0,))(x)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:292: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda x: (x, x, x), out_dims=(0, 0))(x)
ok
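(The mismatch tests above check the corresponding rule for out_dims: one int per output of the mapped function, and, per the type-check test a few records below, only ints or tuples of ints are accepted at all. A minimal sketch, again with hypothetical inputs:

    import torch

    x = torch.randn(2, 3)
    # A tuple out_dims needs one entry per output of the mapped function:
    a, b = torch.vmap(lambda t: (t, t.sin()), out_dims=(0, 0))(x)   # accepted
    # torch.vmap(lambda t: (t, t.sin()), out_dims=(0, 0, 0))(x)     # raises
)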
test_out_dims_edge_case (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:262: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
expected = vmap(foo, out_dims=1)(tensor)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:263: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(foo, out_dims=(1,))(tensor)
ok
test_out_dims_must_be_int_or_tuple_of_int_err_msg (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:270: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda x: x, out_dims='lol')(tensor)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:272: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda x: x, out_dims=('lol',))(tensor)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:274: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda x: x, out_dims=None)(tensor)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:276: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda x: x, out_dims=(None,))(tensor)
ok
test_single_input (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:71: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
output = vmap(square)(x)
ok
test_unsupported_op_err_msg (__main__.TestVmapAPI) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:149: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(torch.ravel)(tensor)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:155: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(out_op)(tensor, tensor)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:160: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(lambda t: torch.atleast_1d([t]))(tensor)
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:165: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(torch.Tensor.item)(tensor)
ok
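(test_unsupported_op_err_msg collects operations vmap rejects outright in this build: torch.ravel on a batched input, out= variants, ops taking lists of tensors, and Tensor.item. A minimal sketch of the distinction, with a hypothetical input:

    import torch

    t = torch.randn(2, 3)
    out = torch.vmap(torch.sum)(t)       # fine: one tensor per example -> shape (2,)
    # torch.vmap(torch.Tensor.item)(t)   # raises: item() returns a Python number,
    #                                    # which vmap cannot batch
)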
test_add_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_binary_cross_entropy_cpu (__main__.TestVmapBatchedGradientCPU) ... /tmp/eb-yrxa9t7t/tmpkx1miuzy/lib/python3.9/site-packages/torch/nn/functional.py:1806: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:881: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(op, in_dims, out_dims)(*inputs)
ok
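(The first warning in this record comes from the test's own use of the deprecated functional alias, and the fix it recommends is a one-line substitution. A minimal sketch:

    import torch

    x = torch.randn(4)
    # torch.nn.functional.sigmoid(x)   # deprecated alias that triggers the warning
    y = torch.sigmoid(x)               # the replacement the warning recommends
)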
test_diagonal_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_div_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_expand_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_index_cpu (__main__.TestVmapBatchedGradientCPU) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:881: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(op, in_dims, out_dims)(*inputs)
ok
test_inplace_manyview_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_inplace_on_view_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_lgamma_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_log1p_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_log_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_logsumexp_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_max_cpu (__main__.TestVmapBatchedGradientCPU) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:881: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(op, in_dims, out_dims)(*inputs)
ok
test_median_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_min_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_mul_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_permute_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_reshape_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_select_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_sigmoid_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_slice_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_stack_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_sub_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_symeig_cpu (__main__.TestVmapBatchedGradientCPU) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:2418: UserWarning: torch.symeig is deprecated in favor of torch.linalg.eigh and will be removed in a future PyTorch release.
The default behavior has changed from using the upper triangular portion of the matrix by default to using the lower triangular portion.
L, _ = torch.symeig(A, upper=upper)
should be replaced with
L = torch.linalg.eigvalsh(A, UPLO='U' if upper else 'L')
and
L, V = torch.symeig(A, eigenvectors=True)
should be replaced with
L, V = torch.linalg.eigh(A, UPLO='U' if upper else 'L') (Triggered internally at ../aten/src/ATen/native/BatchLinearAlgebra.cpp:2487.)
return torch.symeig(x, eigenvectors=True)[0]
/tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:881: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(op, in_dims, out_dims)(*inputs)
ok
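(The torch.symeig deprecation warning above spells out its own migration; as a runnable sketch of exactly that substitution, with a symmetric input as symeig requires:

    import torch

    A = torch.randn(3, 3)
    A = A + A.T                                   # make it symmetric
    # Old: L, _ = torch.symeig(A, upper=False)
    evals = torch.linalg.eigvalsh(A, UPLO='L')    # eigenvalues only
    # Old: L, V = torch.symeig(A, eigenvectors=True, upper=False)
    evals2, evecs = torch.linalg.eigh(A, UPLO='L')
)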
test_threshold_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_trace_cpu (__main__.TestVmapBatchedGradientCPU) ... ok
test_unrelated_output_cpu (__main__.TestVmapBatchedGradientCPU) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:2478: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(vjp)(gy)
ok
test_unrelated_output_multiple_grad_cpu (__main__.TestVmapBatchedGradientCPU) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:2493: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(vjp)(gy)
ok
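(The test_unrelated_output* records exercise the batched-gradient pattern visible in the quoted lines: vmap over a vector-Jacobian product. A minimal sketch of that pattern, with hypothetical shapes:

    import torch

    x = torch.randn(3, requires_grad=True)
    y = x.sin()

    def vjp(v):
        # one vector-Jacobian product; retain_graph so the graph survives reuse
        return torch.autograd.grad(y, x, v, retain_graph=True)[0]

    gy = torch.eye(3)                  # three basis vectors
    jac_rows = torch.vmap(vjp)(gy)     # batched VJPs: the Jacobian of sin at x
)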
test_vmap_fallback_check (__main__.TestVmapBatchedGradientCPU) ... ok
test_vmap_fallback_check_ok (__main__.TestVmapBatchedGradientCPU) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:963: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(op_using_fallback)(torch.rand(3))
ok
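(test_vmap_fallback_check_ok confirms the slow-path fallback still works; the private toggle quoted in every warning above is how to see which ops take it. A minimal sketch, debug aid only, as the warning itself says:

    import torch

    # Makes the for-loop fallback emit a per-operator performance warning:
    torch._C._debug_only_display_vmap_fallback_warnings(True)
    # sin has a batching rule, so this should not hit the fallback:
    torch.vmap(torch.sin)(torch.randn(3))
)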
test_add_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::clamp_min.out' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::clamp_min.out' is only available for these backends: [CPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: fallthrough registered at ../aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: registered at ../torch/csrc/autograd/generated/ADInplaceOrViewType_1.cpp:2399 [kernel]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
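(Every skip in this block has the same shape: the operator named in the message has no kernel registered for the Meta backend, so the test cannot run on meta tensors. Meta tensors carry shape and dtype but no storage; a minimal sketch:

    import torch

    t = torch.empty(2, 3, device='meta')   # metadata only, no data
    print(t.shape, t.dtype, t.device)      # torch.Size([2, 3]) torch.float32 meta
    # An op runs on meta tensors only if a Meta kernel (or a fallback) is
    # registered; 'aten::clamp_min.out' above has none in this build.
)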
test_binary_cross_entropy_meta (__main__.TestVmapBatchedGradientMETA) ... /tmp/eb-yrxa9t7t/tmpkx1miuzy/lib/python3.9/site-packages/torch/nn/functional.py:1806: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
skipped "not implemented: Could not run 'aten::binary_cross_entropy' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::binary_cross_entropy' is only available for these backends: [CPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: registered at ../aten/src/ATen/autocast_mode.cpp:470 [kernel]\nAutocast: registered at ../aten/src/ATen/autocast_mode.cpp:309 [kernel]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_diagonal_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::_cat' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_cat' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nQuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:1068 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_div_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::clamp_min.out' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::clamp_min.out' is only available for these backends: [CPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: fallthrough registered at ../aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: registered at ../torch/csrc/autograd/generated/ADInplaceOrViewType_1.cpp:2399 [kernel]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_expand_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::_cat' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_cat' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nQuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:1068 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_index_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::index.Tensor' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::index.Tensor' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nQuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:1068 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_1.cpp:9548 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_1.cpp:10664 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_inplace_manyview_meta (__main__.TestVmapBatchedGradientMETA) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:881: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(op, in_dims, out_dims)(*inputs)
ok
test_inplace_on_view_meta (__main__.TestVmapBatchedGradientMETA) ... ok
test_lgamma_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::_cat' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_cat' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nQuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:1068 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_log1p_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::clamp_min.out' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::clamp_min.out' is only available for these backends: [CPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: fallthrough registered at ../aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: registered at ../torch/csrc/autograd/generated/ADInplaceOrViewType_1.cpp:2399 [kernel]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_log_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::clamp_min.out' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::clamp_min.out' is only available for these backends: [CPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: fallthrough registered at ../aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: registered at ../torch/csrc/autograd/generated/ADInplaceOrViewType_1.cpp:2399 [kernel]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_logsumexp_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::clamp_min.out' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::clamp_min.out' is only available for these backends: [CPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: fallthrough registered at ../aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: registered at ../torch/csrc/autograd/generated/ADInplaceOrViewType_1.cpp:2399 [kernel]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_max_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::max' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::max' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nQuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:1068 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: fallthrough registered at ../aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:8878 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:8878 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:8878 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:8878 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:8878 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:8878 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:8878 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:8878 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:8878 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:8878 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:8878 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_0.cpp:8878 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_0.cpp:10257 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_median_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::median' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::median' is only available for these backends: [CPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: fallthrough registered at ../aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_4.cpp:9308 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_min_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::min' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::min' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nQuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:1068 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: fallthrough registered at ../aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_mul_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::clamp_min.out' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::clamp_min.out' is only available for these backends: [CPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: fallthrough registered at ../aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: registered at ../torch/csrc/autograd/generated/ADInplaceOrViewType_1.cpp:2399 [kernel]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_permute_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::_cat' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_cat' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nQuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:1068 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_reshape_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::_cat' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_cat' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nQuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:1068 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_select_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::_cat' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_cat' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nQuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:1068 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_sigmoid_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::_cat' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_cat' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nQuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:1068 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_slice_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::_cat' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_cat' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nQuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:1068 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_stack_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::_cat' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_cat' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nQuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:1068 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_sub_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::clamp_min.out' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::clamp_min.out' is only available for these backends: [CPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: fallthrough registered at ../aten/src/ATen/core/NamedRegistrations.cpp:11 [kernel]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: registered at ../torch/csrc/autograd/generated/ADInplaceOrViewType_1.cpp:2399 [kernel]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_symeig_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::_symeig_helper' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_symeig_helper' is only available for these backends: [CPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_2.cpp:10491 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_2.cpp:11425 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_threshold_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::_cat' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::_cat' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nQuantizedCPU: registered at aten/src/ATen/RegisterQuantizedCPU.cpp:1068 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_3.cpp:10141 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_3.cpp:11560 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:466 [backend fallback]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_trace_meta (__main__.TestVmapBatchedGradientMETA) ... skipped "not implemented: Could not run 'aten::trace' with arguments from the 'Meta' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::trace' is only available for these backends: [CPU, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].\n\nCPU: registered at aten/src/ATen/RegisterCPU.cpp:18433 [kernel]\nBackendSelect: fallthrough registered at ../aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback]\nPython: registered at ../aten/src/ATen/core/PythonFallbackKernel.cpp:47 [backend fallback]\nNamed: registered at ../aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback]\nConjugate: registered at ../aten/src/ATen/ConjugateFallback.cpp:18 [backend fallback]\nNegative: registered at ../aten/src/ATen/native/NegateFallback.cpp:18 [backend fallback]\nADInplaceOrView: fallthrough registered at ../aten/src/ATen/core/VariableFallbackKernel.cpp:64 [backend fallback]\nAutogradOther: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradCPU: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradCUDA: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradXLA: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradLazy: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradXPU: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradMLC: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradHPU: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradNestedTensor: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradPrivateUse1: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradPrivateUse2: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nAutogradPrivateUse3: registered at ../torch/csrc/autograd/generated/VariableType_4.cpp:8932 [autograd kernel]\nTracer: registered at ../torch/csrc/autograd/generated/TraceType_4.cpp:9308 [kernel]\nUNKNOWN_TENSOR_TYPE_ID: registered at ../aten/src/ATen/autocast_mode.cpp:470 [kernel]\nAutocast: fallthrough registered at ../aten/src/ATen/autocast_mode.cpp:305 [backend fallback]\nBatched: registered at ../aten/src/ATen/BatchingRegistrations.cpp:1020 [kernel]\nVmapMode: fallthrough registered at ../aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback]\n"
test_unrelated_output_meta (__main__.TestVmapBatchedGradientMETA) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:2478: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(vjp)(gy)
skipped 'not implemented: Cannot copy out of meta tensor; no data!'
test_unrelated_output_multiple_grad_meta (__main__.TestVmapBatchedGradientMETA) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:2493: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
result = vmap(vjp)(gy)
skipped 'not implemented: Cannot copy out of meta tensor; no data!'
test_vmap_fallback_check (__main__.TestVmapBatchedGradientMETA) ... ok
test_vmap_fallback_check_ok (__main__.TestVmapBatchedGradientMETA) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:963: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(op_using_fallback)(torch.rand(3))
ok
test_T_numpy (__main__.TestVmapOperators) ... ok
test_as_strided (__main__.TestVmapOperators) ... ok
test_binary_pointwise_ops (__main__.TestVmapOperators) ... ok
test_bmm (__main__.TestVmapOperators) ... ok
test_cat (__main__.TestVmapOperators) ... ok
test_chunk (__main__.TestVmapOperators) ... ok
test_clamp (__main__.TestVmapOperators) ... ok
test_clone (__main__.TestVmapOperators) ... ok
test_comparison_ops (__main__.TestVmapOperators) ... ok
test_conj (__main__.TestVmapOperators) ... ok
test_contiguous (__main__.TestVmapOperators) ... ok
test_diagonal (__main__.TestVmapOperators) ... ok
test_dot (__main__.TestVmapOperators) ... ok
test_expand_as (__main__.TestVmapOperators) ... ok
test_fill_and_zero_inplace (__main__.TestVmapOperators) ... ok
test_imag (__main__.TestVmapOperators) ... ok
test_is_complex (__main__.TestVmapOperators) ... ok
test_is_contiguous (__main__.TestVmapOperators) ... ok
test_is_floating_point (__main__.TestVmapOperators) ... ok
test_mm (__main__.TestVmapOperators) ... ok
test_movedim (__main__.TestVmapOperators) ... ok
test_mv (__main__.TestVmapOperators) ... ok
test_narrow (__main__.TestVmapOperators) ... ok
test_new_empty (__main__.TestVmapOperators) ... ok
test_new_empty_strided (__main__.TestVmapOperators) ... ok
test_new_zeros (__main__.TestVmapOperators) ... ok
test_no_random_op_support (__main__.TestVmapOperators) ... ok
test_real (__main__.TestVmapOperators) ... ok
test_reshape (__main__.TestVmapOperators) ... ok
test_reshape_as (__main__.TestVmapOperators) ... ok
test_result_type (__main__.TestVmapOperators) ... ok
test_select (__main__.TestVmapOperators) ... ok
test_slice (__main__.TestVmapOperators) ... ok
test_split (__main__.TestVmapOperators) ... ok
test_squeeze (__main__.TestVmapOperators) ... ok
test_stack (__main__.TestVmapOperators) ... ok
test_stride (__main__.TestVmapOperators) ... ok
test_sum_dim (__main__.TestVmapOperators) ... ok
test_t (__main__.TestVmapOperators) ... ok
test_tensor_split (__main__.TestVmapOperators) ... ok
test_to (__main__.TestVmapOperators) ... ok
test_trace (__main__.TestVmapOperators) ... ok
test_transpose (__main__.TestVmapOperators) ... ok
test_unary_pointwise_ops (__main__.TestVmapOperators) ... ok
test_unbind (__main__.TestVmapOperators) ... ok
test_unfold (__main__.TestVmapOperators) ... ok
test_view (__main__.TestVmapOperators) ... ok
test_view_as (__main__.TestVmapOperators) ... ok
test_view_as_complex (__main__.TestVmapOperators) ... ok
test_view_as_real (__main__.TestVmapOperators) ... ok
test_vmap_fallback_check (__main__.TestVmapOperators) ... ok
test_vmap_fallback_check_ok (__main__.TestVmapOperators) ... /tmp/vsc40023/easybuild_build/PyTorch/1.10.0/foss-2021a/pytorch/test/test_vmap.py:963: UserWarning: torch.vmap is an experimental prototype that is subject to change and/or deletion. Please use at your own risk. There may be unexpected performance cliffs due to certain operators not being implemented. To see detailed performance warnings please use `torch._C._debug_only_display_vmap_fallback_warnings(True) before the call to `vmap`.
vmap(op_using_fallback)(torch.rand(3))
ok
----------------------------------------------------------------------
Ran 155 tests in 1.980s
OK (skipped=26)
Running test_vulkan ... [2021-11-29 21:17:59.562938]
Executing ['/user/gent/400/vsc40023/eb_arcaninescratch/CO7/skylake-ib/software/Python/3.9.5-GCCcore-10.3.0/bin/python', 'test_vulkan.py', '-v'] ... [2021-11-29 21:17:59.563025]
test_conv (__main__.TestVulkanRewritePass) ... skipped 'Vulkan backend must be available for these tests.'
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK (skipped=1)
Running test_xnnpack_integration ... [2021-11-29 21:18:00.736697]
Executing ['/user/gent/400/vsc40023/eb_arcaninescratch/CO7/skylake-ib/software/Python/3.9.5-GCCcore-10.3.0/bin/python', 'test_xnnpack_integration.py', '-v'] ... [2021-11-29 21:18:00.736781]
test_conv1d_basic (__main__.TestXNNPACKConv1dTransformPass) ... /tmp/eb-yrxa9t7t/tmpkx1miuzy/lib/python3.9/site-packages/torch/testing/_internal/common_utils.py:400: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at ../c10/core/TensorImpl.h:1405.)
return callable(*args, **kwargs)
ok
test_conv1d_with_relu_fc (__main__.TestXNNPACKConv1dTransformPass) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
test_conv2d (__main__.TestXNNPACKOps) ... ok
test_conv2d_transpose (__main__.TestXNNPACKOps) ... ok
test_linear (__main__.TestXNNPACKOps) ... ok
test_linear_1d_input (__main__.TestXNNPACKOps) ... ok
test_decomposed_linear (__main__.TestXNNPACKRewritePass) ... ok
test_linear (__main__.TestXNNPACKRewritePass) ... ok
test_combined_model (__main__.TestXNNPACKSerDes) ... ok
test_conv2d (__main__.TestXNNPACKSerDes) ... ok
test_conv2d_transpose (__main__.TestXNNPACKSerDes) ... ok
test_linear (__main__.TestXNNPACKSerDes) ... ok
----------------------------------------------------------------------
Ran 12 tests in 435.812s
OK (skipped=1)
distributed/algorithms/test_join failed!
distributed/rpc/test_tensorpipe_agent failed!
(at easybuild/easybuild-framework/easybuild/tools/run.py:618 in parse_cmd_output)
== 2021-11-29 21:25:19,299 build_log.py:265 INFO ... (took 2 hours 23 mins 55 secs)
== 2021-11-29 21:25:19,299 config.py:638 DEBUG software install path as specified by 'installpath' and 'subdir_software': /user/gent/400/vsc40023/eb_arcaninescratch/CO7/skylake-ib/software
== 2021-11-29 21:25:19,299 filetools.py:1971 INFO Removing lock /user/gent/400/vsc40023/eb_arcaninescratch/CO7/skylake-ib/software/.locks/_user_gent_400_vsc40023_eb_arcaninescratch_CO7_skylake-ib_software_PyTorch_1.10.0-foss-2021a.lock...
== 2021-11-29 21:25:19,302 filetools.py:380 INFO Path /user/gent/400/vsc40023/eb_arcaninescratch/CO7/skylake-ib/software/.locks/_user_gent_400_vsc40023_eb_arcaninescratch_CO7_skylake-ib_software_PyTorch_1.10.0-foss-2021a.lock successfully removed.
== 2021-11-29 21:25:19,302 filetools.py:1975 INFO Lock removed: /user/gent/400/vsc40023/eb_arcaninescratch/CO7/skylake-ib/software/.locks/_user_gent_400_vsc40023_eb_arcaninescratch_CO7_skylake-ib_software_PyTorch_1.10.0-foss-2021a.lock
== 2021-11-29 21:25:19,302 easyblock.py:3993 WARNING build failed (first 300 chars): cmd "export PYTHONPATH=/tmp/eb-yrxa9t7t/tmpkx1miuzy/lib/python3.9/site-packages:$PYTHONPATH && cd test && PYTHONUNBUFFERED=1 /user/gent/400/vsc40023/eb_arcaninescratch/CO7/skylake-ib/software/Python/3.9.5-GCCcore-10.3.0/bin/python run_test.py --continue-through-error --verbose -x distributed/elast
== 2021-11-29 21:25:19,302 easyblock.py:307 INFO Closing log for application name PyTorch version 1.10.0