(partial) EasyBuild log for failed build of /tmp/eb-ylqvi05p/files_pr19015/p/PyTorch/PyTorch-1.13.1-foss-2022a-CUDA-11.7.0.eb (PR(s) #19015)
test_index_copy_scalars_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_copy_scalars_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_copy_scalars_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_copy_scalars_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_copy_scalars_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_copy_scalars_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_copy_scalars_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_copy_scalars_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_fill_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_fill_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_fill_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_fill_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_fill_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_fill_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_fill_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_fill_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_fill_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_fill_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_fill_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_fill_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_fill_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_put_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_put_non_accumulate_deterministic_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amax_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... /tmp/eb-ylqvi05p/tmpcqbtokk3/lib/python3.10/site-packages/torch/testing/_deprecated.py:35: FutureWarning: torch.testing.make_non_contiguous() is deprecated since 1.12 and will be removed in 1.14. Depending on the use case there a different replacement options:
- If you are using `make_non_contiguous` in combination with a creation function to create a noncontiguous tensor with random values, use `torch.testing.make_tensor(..., noncontiguous=True)` instead.
- If you are using `make_non_contiguous` with a specific tensor, you can replace this call with `torch.repeat_interleave(input, 2, dim=-1)[..., ::2]`.
- If you are using `make_non_contiguous` in the PyTorch test suite, use `torch.testing._internal.common_utils.noncontiguous_like` instead.
warnings.warn(msg, FutureWarning)
/tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/test/test_torch.py:2967: UserWarning: index_reduce() is in beta and the API may change at any time. (Triggered internally at /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1008.)
dest.index_reduce_(dim, idx, src, reduce, include_self=include_self)
ok
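(Note, not part of the log: the FutureWarning above lists the supported replacements for the deprecated torch.testing.make_non_contiguous(). A minimal sketch of the first two options it mentions, for reference only:

    import torch
    from torch.testing import make_tensor

    # Option 1: create a noncontiguous tensor with random values directly.
    t = make_tensor((4, 4), dtype=torch.float32, device="cpu", noncontiguous=True)
    assert not t.is_contiguous()

    # Option 2: make an existing tensor noncontiguous, as the warning suggests.
    src = torch.randn(4, 4)
    nc = torch.repeat_interleave(src, 2, dim=-1)[..., ::2]
    assert not nc.is_contiguous() and torch.equal(nc, src)
)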
test_index_reduce_reduce_amax_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amax_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amax_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amax_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amax_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amax_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amax_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amax_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amin_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amin_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amin_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amin_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amin_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amin_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amin_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amin_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_amin_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_mean_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_mean_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_mean_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_mean_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_mean_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_mean_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_mean_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_mean_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_mean_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_prod_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_prod_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_prod_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_prod_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_prod_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_prod_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_prod_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_prod_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_reduce_reduce_prod_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_select_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_select_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_select_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_select_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_select_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_select_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_select_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_select_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_select_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_select_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_select_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_index_select_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_invalid_shapes_grid_sampler_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_is_set_to_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_is_signed_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_item_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_item_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_item_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_item_cpu_complex32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_item_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_item_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_item_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_item_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_item_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_item_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_item_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_item_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_item_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_large_cumprod_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_large_cumsum_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_log_normal_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_log_normal_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_log_normal_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_log_normal_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_logcumsumexp_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_lognormal_kstest_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_lognormal_kstest_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_lognormal_kstest_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_lognormal_kstest_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_bool_tensor_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_bfloat16_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_bfloat16_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_bool_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_bool_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_complex128_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_complex128_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_complex64_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_complex64_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_float16_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_float16_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_float32_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_float32_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_float64_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_float64_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_int16_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_int16_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_int32_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_int32_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_int64_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_int64_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_int8_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_int8_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_uint8_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_cpu_uint8_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_fill_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_scatter_bool_tensor_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_scatter_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_scatter_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_scatter_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_scatter_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_scatter_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_scatter_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_scatter_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_scatter_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_scatter_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_scatter_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_scatter_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_scatter_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_scatter_large_tensor_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_masked_scatter_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_masked_select_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/test/test_torch.py:3609: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1729.)
torch.masked_select(src, mask, out=dst3)
ok
test_masked_select_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/test/test_torch.py:3609: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1729.)
torch.masked_select(src, mask, out=dst3)
ok
test_masked_select_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/test/test_torch.py:3609: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1729.)
torch.masked_select(src, mask, out=dst3)
ok
test_masked_select_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/test/test_torch.py:3609: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1729.)
torch.masked_select(src, mask, out=dst3)
ok
test_masked_select_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/test/test_torch.py:3609: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1729.)
torch.masked_select(src, mask, out=dst3)
ok
test_masked_select_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/test/test_torch.py:3609: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1729.)
torch.masked_select(src, mask, out=dst3)
ok
test_masked_select_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/test/test_torch.py:3609: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1729.)
torch.masked_select(src, mask, out=dst3)
ok
test_masked_select_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/test/test_torch.py:3609: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1729.)
torch.masked_select(src, mask, out=dst3)
ok
test_masked_select_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/test/test_torch.py:3609: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1729.)
torch.masked_select(src, mask, out=dst3)
ok
test_masked_select_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/test/test_torch.py:3609: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1729.)
torch.masked_select(src, mask, out=dst3)
ok
test_masked_select_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/test/test_torch.py:3609: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1729.)
torch.masked_select(src, mask, out=dst3)
ok
test_masked_select_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/test/test_torch.py:3609: UserWarning: masked_select received a mask with dtype torch.uint8, this behavior is now deprecated,please use a mask with dtype torch.bool instead. (Triggered internally at /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/aten/src/ATen/native/TensorAdvancedIndexing.cpp:1729.)
torch.masked_select(src, mask, out=dst3)
ok
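(Note, not part of the log: the UserWarning repeated above is the deprecation of uint8 masks in masked_select. A minimal illustrative sketch of the suggested fix, passing a torch.bool mask instead; the tensors below are made up and not taken from the test source:

    import torch

    src = torch.arange(6, dtype=torch.float32)

    # Deprecated: a uint8 mask triggers the warning seen in the log.
    mask_u8 = torch.tensor([1, 0, 1, 0, 1, 0], dtype=torch.uint8)

    # Preferred: use a boolean mask (or convert an existing one with .bool()).
    mask = mask_u8.bool()
    out = torch.empty(0, dtype=src.dtype)
    torch.masked_select(src, mask, out=out)  # no dtype deprecation warning
)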
test_masked_select_discontiguous_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_memory_format_clone_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_memory_format_consistency_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_memory_format_cpu_and_cuda_ops_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_memory_format_empty_like_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_memory_format_factory_like_functions_preserve_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_memory_format_operators_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_memory_format_preserved_after_permute_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_memory_format_propagation_rules_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_memory_format_to_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_memory_format_type_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_memory_format_type_shortcuts_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_module_share_memory_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_multinomial_cpu_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_multinomial_cpu_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_multinomial_cpu_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_multinomial_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_multinomial_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_multinomial_deterministic_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_multinomial_deterministic_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_multinomial_deterministic_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_multinomial_device_constrain_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_multinomial_empty_w_replacement_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_multinomial_empty_wo_replacement_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_multinomial_gpu_device_constrain_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'fewer than 2 devices detected'
test_multinomial_rng_state_advance_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... skipped 'test is slow; run with PYTORCH_TEST_WITH_SLOW to enable test'
test_narrow_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_AdaptiveAvgPool2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_AdaptiveAvgPool3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_AdaptiveMaxPool2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_AvgPool3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_CTCLoss_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_EmbeddingBag_max_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_FractionalMaxPool2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_FractionalMaxPool3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_MaxPool3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_MaxUnpool1d_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'float16 not implemented on CPU'
test_nondeterministic_alert_MaxUnpool1d_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_MaxUnpool1d_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_MaxUnpool2d_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'float16 not implemented on CPU'
test_nondeterministic_alert_MaxUnpool2d_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_MaxUnpool2d_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_MaxUnpool3d_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... skipped 'float16 not implemented on CPU'
test_nondeterministic_alert_MaxUnpool3d_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_MaxUnpool3d_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_NLLLoss_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_ReflectionPad1d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_ReflectionPad2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_ReflectionPad3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_ReplicationPad1d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_ReplicationPad2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_ReplicationPad3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_bincount_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_cumsum_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_cumsum_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_cumsum_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_cumsum_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_cumsum_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_cumsum_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_cumsum_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_cumsum_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_cumsum_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_cumsum_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_grid_sample_2d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_grid_sample_3d_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_histc_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_interpolate_bicubic_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_interpolate_bilinear_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_interpolate_linear_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_interpolate_trilinear_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_kthvalue_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_median_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/test/test_torch.py:1628: UserWarning: An output with one or more elements was resized since it had shape [10], which does not match the required output shape []. This behavior is deprecated, and in a future PyTorch release outputs will not be resized unless they have zero elements. You can explicitly reuse an out tensor t by resizing it, inplace, to zero elements with t.resize_(0). (Triggered internally at /tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/aten/src/ATen/native/Resize.cpp:17.)
torch.median(a, 0, out=(result, indices))
ok
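(Note, not part of the log: the resize warning above comes from passing pre-sized out= tensors to torch.median when the reduction produces a 0-dim result. A minimal sketch of the pattern the message itself recommends, resizing the out tensors to zero elements before reuse; illustrative only:

    import torch

    a = torch.randn(10)
    result = torch.empty(10)                       # wrong shape for the 0-dim result
    indices = torch.empty(10, dtype=torch.long)

    # Resize to zero elements first, as the warning suggests, so the out=
    # tensors can be reused without the implicit-resize deprecation warning.
    result.resize_(0)
    indices.resize_(0)
    torch.median(a, 0, out=(result, indices))
)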
test_nondeterministic_alert_put_accumulate_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_nondeterministic_alert_put_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_normal_kstest_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_normal_kstest_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_normal_kstest_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_nullary_op_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_pairwise_distance_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_pdist_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_pdist_norm_large_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_pickle_gradscaler_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_pin_memory_from_constructor_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_put_accumulate_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_accumulate_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_accumulate_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_accumulate_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_accumulate_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_accumulate_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_accumulate_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_accumulate_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_accumulate_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_accumulate_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_accumulate_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_put_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_repeat_interleave_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_scalar_check_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_add_bool_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_add_non_unique_index_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_add_one_dim_deterministic_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_add_to_large_input_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_bool_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_multiply_unsupported_dtypes_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_scatter_reduce_multiply_unsupported_dtypes_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_scatter_reduce_non_unique_index_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_non_unique_index_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_non_unique_index_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_non_unique_index_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_non_unique_index_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_non_unique_index_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_non_unique_index_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_non_unique_index_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_non_unique_index_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_non_unique_index_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_non_unique_index_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_non_unique_index_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_operations_to_large_input_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_operations_to_large_input_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_operations_to_large_input_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_operations_to_large_input_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_operations_to_large_input_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_operations_to_large_input_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_operations_to_large_input_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_operations_to_large_input_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_operations_to_large_input_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_operations_to_large_input_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_operations_to_large_input_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_operations_to_large_input_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_scalar_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_scalar_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_scalar_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_scalar_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_scalar_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_scalar_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_scalar_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_scalar_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_scalar_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_scalar_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_scalar_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_reduce_scalar_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_to_large_input_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_scatter_zero_size_index_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_serialization_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_set_storage_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_set_storage_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_set_storage_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_set_storage_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_set_storage_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_set_storage_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_set_storage_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_set_storage_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_set_storage_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_set_storage_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_set_storage_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_set_storage_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_shift_mem_overlap_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_skip_xla_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_all_devices_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_storage_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_errors_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_errors_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_errors_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_errors_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_errors_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_errors_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_errors_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_errors_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_errors_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_errors_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_errors_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_errors_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_from_tensor_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_from_tensor_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_from_tensor_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_from_tensor_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_from_tensor_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_from_tensor_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_from_tensor_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_from_tensor_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_from_tensor_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_from_tensor_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_from_tensor_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_meta_from_tensor_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_setitem_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_setitem_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_setitem_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_setitem_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_setitem_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_setitem_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_setitem_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_setitem_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_setitem_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_setitem_cpu_qint32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_setitem_cpu_qint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_setitem_cpu_quint4x2 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_setitem_cpu_quint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_storage_setitem_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_strides_propagation_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_sync_warning_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'Only runs on cuda'
test_take_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_take_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_take_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_take_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_take_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_take_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_take_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_take_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_take_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_take_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_take_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_take_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_take_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_from_storage_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_from_storage_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_from_storage_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_from_storage_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_from_storage_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_from_storage_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_from_storage_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_from_storage_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_from_storage_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_from_storage_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_from_storage_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_from_storage_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_set_errors_multigpu_cpu (__main__.TestTorchDeviceTypeCPU) ... skipped 'fewer than 2 devices detected'
test_tensor_shape_empty_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_storage_type_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_storage_type_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_storage_type_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_storage_type_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_storage_type_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_storage_type_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_storage_type_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_storage_type_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_storage_type_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_storage_type_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_storage_type_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_tensor_storage_type_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_ternary_op_mem_overlap_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_typed_storage_meta_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_typed_storage_meta_cpu_bool (__main__.TestTorchDeviceTypeCPU) ... ok
test_typed_storage_meta_cpu_complex128 (__main__.TestTorchDeviceTypeCPU) ... ok
test_typed_storage_meta_cpu_complex64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_typed_storage_meta_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_typed_storage_meta_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_typed_storage_meta_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_typed_storage_meta_cpu_int16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_typed_storage_meta_cpu_int32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_typed_storage_meta_cpu_int64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_typed_storage_meta_cpu_int8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_typed_storage_meta_cpu_uint8 (__main__.TestTorchDeviceTypeCPU) ... ok
test_unfold_all_devices_and_dtypes_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_unfold_scalars_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_uniform_kstest_cpu_bfloat16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_uniform_kstest_cpu_float16 (__main__.TestTorchDeviceTypeCPU) ... ok
test_uniform_kstest_cpu_float32 (__main__.TestTorchDeviceTypeCPU) ... ok
test_uniform_kstest_cpu_float64 (__main__.TestTorchDeviceTypeCPU) ... ok
test_untyped_storage_meta_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_warn_always_caught_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_where_scalar_handcrafted_values_cpu (__main__.TestTorchDeviceTypeCPU) ... ok
test_cuda_vitals_gpu_only_cpu (__main__.TestVitalSignsCudaCPU) ... skipped 'Only runs on cuda'
----------------------------------------------------------------------
Ran 827 tests in 11.829s
OK (skipped=39)
[TORCH_VITAL] Dataloader.enabled True
[TORCH_VITAL] Dataloader.basic_unit_test TEST_VALUE_STRING
[TORCH_VITAL] CUDA.used true
##[endgroup]
FINISHED PRINTING LOG FILE of test_torch (/tmp/eb/PyTorch/1.13.1/foss-2022a-CUDA-11.7.0/pytorch-v1.13.1/test/test-reports/test_torch_k4hk0kpe)
distributed/elastic/multiprocessing/api_test failed!
test_ops_gradients failed!
== 2023-10-19 21:46:15,602 filetools.py:383 INFO Path /tmp/eb-ylqvi05p/tmpcqbtokk3 successfully removed.
== 2023-10-19 21:46:15,891 pytorch.py:303 WARNING Found 3 individual tests that exited with an error: test_binary_incorrect_entrypoint, test_binary_incorrect_entrypoint, test_binary_incorrect_entrypoint
Found 2 individual tests with failed assertions: test_fn_grad_linalg_det_singular_cpu_complex128, test_forward_mode_AD_linalg_det_singular_cpu_complex128
== 2023-10-19 21:46:16,896 pytorch.py:417 WARNING 2 test failures, 3 test errors (out of 87820):
distributed/elastic/multiprocessing/api_test (60 total tests, errors=3)
test_ops_gradients (2 failed, 3454 passed, 4032 skipped, 72 xfailed, 152 warnings, 4 rerun)
The PyTorch test suite is known to include some flaky tests, which may fail depending on the specifics of the system or the context in which they are run. For this PyTorch installation, EasyBuild allows up to 2 tests to fail. We recommend to double check that the failing tests listed above are known to be flaky, or do not affect your intended usage of PyTorch. In case of doubt, reach out to the EasyBuild community (via GitHub, Slack, or mailing list).
== 2023-10-19 21:46:16,924 build_log.py:171 ERROR EasyBuild crashed with an error (at easybuild/tools/build_log.py:111 in caller_info): Too many failed tests (5), maximum allowed is 2 (at easybuild/easyblocks/p/pytorch.py:420 in test_step)
== 2023-10-19 21:46:16,931 build_log.py:267 INFO ... (took 5 hours 11 mins 32 secs)
== 2023-10-19 21:46:16,931 filetools.py:2012 INFO Removing lock /sw/phoebe/2022a/software/.locks/_sw_phoebe_2022a_software_PyTorch_1.13.1-foss-2022a-CUDA-11.7.0.lock...
== 2023-10-19 21:46:16,935 filetools.py:383 INFO Path /sw/phoebe/2022a/software/.locks/_sw_phoebe_2022a_software_PyTorch_1.13.1-foss-2022a-CUDA-11.7.0.lock successfully removed.
== 2023-10-19 21:46:16,935 filetools.py:2016 INFO Lock removed: /sw/phoebe/2022a/software/.locks/_sw_phoebe_2022a_software_PyTorch_1.13.1-foss-2022a-CUDA-11.7.0.lock
== 2023-10-19 21:46:16,935 easyblock.py:4277 WARNING build failed (first 300 chars): Too many failed tests (5), maximum allowed is 2
== 2023-10-19 21:46:16,935 easyblock.py:328 INFO Closing log for application name PyTorch version 1.13.1
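(Note, not part of the log: the "maximum allowed is 2" limit in the error above is the test-failure threshold enforced by EasyBuild's PyTorch easyblock (easybuild/easyblocks/p/pytorch.py). A hedged sketch of how a local copy of the easyconfig might raise it, assuming the easyblock's max_failed_tests parameter; verify the parameter name against your EasyBuild version before relying on it:

    # Hypothetical addition to a local copy of
    # PyTorch-1.13.1-foss-2022a-CUDA-11.7.0.eb; everything else unchanged.
    name = 'PyTorch'
    version = '1.13.1'
    # ... remainder of the original easyconfig ...

    # Allow more of the known-flaky tests to fail before the build is aborted.
    max_failed_tests = 5
)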