============================= test session starts ==============================
platform linux -- Python 3.9.12, pytest-7.1.3, pluggy-1.0.0 -- /scratch/ezyang/work/env/bin/python
cachedir: .pytest_cache
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/scratch/ezyang/work/pytorch/.hypothesis/examples')
rootdir: /scratch/ezyang/work/pytorch, configfile: pytest.ini
plugins: benchmark-3.4.1, hydra-core-1.1.2, csv-3.0.0, hypothesis-6.56.4
collecting ... collected 1270 items / 658 deselected / 612 selected
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_H_cpu_float32 PASSED [  0%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_T_cpu_float32 PASSED [  0%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___getitem___cpu_float32 SKIPPED [  0%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___radd___cpu_float32 PASSED [  0%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___rdiv___cpu_float32 PASSED [  0%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___rmatmul___cpu_float32 FAILED [  0%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___rmod___cpu_float32 PASSED [  1%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___rmul___cpu_float32 PASSED [  1%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___rpow___cpu_float32 PASSED [  1%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___rsub___cpu_float32 PASSED [  1%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_abs_cpu_float32 PASSED [  1%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_acos_cpu_float32 PASSED [  1%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_acosh_cpu_float32 PASSED [  2%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_add_cpu_float32 PASSED [  2%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addbmm_cpu_float32 PASSED [  2%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addcdiv_cpu_float32 PASSED [  2%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addcmul_cpu_float32 PASSED [  2%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addmm_cpu_float32 PASSED [  2%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addmm_decomposed_cpu_float32 PASSED [  3%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addmv_cpu_float32 FAILED [  3%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addr_cpu_float32 FAILED [  3%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_all_cpu_float32 SKIPPED [  3%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_allclose_cpu_float32 SKIPPED [  3%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_amax_cpu_float32 FAILED [  3%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_amin_cpu_float32 FAILED [  4%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_aminmax_cpu_float32 SKIPPED [  4%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_angle_cpu_float32 PASSED [  4%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_any_cpu_float32 SKIPPED [  4%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_arange_cpu_float32 SKIPPED [  4%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_argmax_cpu_float32 SKIPPED [  4%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_argmin_cpu_float32 SKIPPED [  5%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_argsort_cpu_float32 SKIPPED [  5%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_argwhere_cpu_float32 SKIPPED [  5%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_as_strided_cpu_float32 PASSED [  5%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_as_strided_scatter_cpu_float32 SKIPPED [  5%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_asin_cpu_float32 PASSED [  5%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_asinh_cpu_float32 PASSED [  6%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_atan2_cpu_float32 PASSED [  6%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_atan_cpu_float32 PASSED [  6%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_atanh_cpu_float32 PASSED [  6%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_atleast_1d_cpu_float32 PASSED [  6%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_atleast_2d_cpu_float32 PASSED [  6%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_atleast_3d_cpu_float32 PASSED [  7%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_baddbmm_cpu_float32 FAILED [  7%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_bernoulli_cpu_float32 PASSED [  7%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_bfloat16_cpu_float32 PASSED [  7%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_block_diag_cpu_float32 FAILED [  7%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_bmm_cpu_float32 PASSED [  7%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_bool_cpu_float32 SKIPPED [  8%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_broadcast_shapes_cpu_float32 SKIPPED [  8%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_broadcast_tensors_cpu_float32 PASSED [  8%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_broadcast_to_cpu_float32 PASSED [  8%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_bucketize_cpu_float32 SKIPPED [  8%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_byte_cpu_float32 SKIPPED [  8%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cartesian_prod_cpu_float32 FAILED [  8%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cat_cpu_float32 PASSED [  9%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cdist_cpu_float32 FAILED [  9%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cdouble_cpu_float32 FAILED [  9%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ceil_cpu_float32 PASSED [  9%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cfloat_cpu_float32 FAILED [  9%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_chalf_cpu_float32 XFAIL [  9%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_char_cpu_float32 SKIPPED [ 10%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cholesky_cpu_float32 XFAIL [ 10%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cholesky_inverse_cpu_float32 FAILED [ 10%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cholesky_solve_cpu_float32 FAILED [ 10%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_chunk_cpu_float32 PASSED [ 10%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_clamp_cpu_float32 PASSED [ 10%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_clamp_max_cpu_float32 PASSED [ 11%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_clamp_min_cpu_float32 PASSED [ 11%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_clone_cpu_float32 PASSED [ 11%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_column_stack_cpu_float32 FAILED [ 11%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_combinations_cpu_float32 FAILED [ 11%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_complex_cpu_float32 FAILED [ 11%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_conj_cpu_float32 PASSED [ 12%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_conj_physical_cpu_float32 PASSED [ 12%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_constant_pad_nd_cpu_float32 PASSED [ 12%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_contiguous_cpu_float32 PASSED [ 12%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_copysign_cpu_float32 PASSED [ 12%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_corrcoef_cpu_float32 XFAIL [ 12%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cos_cpu_float32 PASSED [ 13%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cosh_cpu_float32 PASSED [ 13%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_count_nonzero_cpu_float32 SKIPPED [ 13%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cov_cpu_float32 XFAIL [ 13%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cross_cpu_float32 FAILED [ 13%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cummax_cpu_float32 FAILED [ 14%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cummin_cpu_float32 FAILED [ 14%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cumprod_cpu_float32 FAILED [ 14%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cumsum_cpu_float32 FAILED [ 14%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cumulative_trapezoid_cpu_float32 FAILED [ 14%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_deg2rad_cpu_float32 FAILED [ 14%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diag_cpu_float32 PASSED [ 14%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diag_embed_cpu_float32 PASSED [ 15%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diagflat_cpu_float32 PASSED [ 15%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diagonal_copy_cpu_float32 PASSED [ 15%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diagonal_cpu_float32 PASSED [ 15%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diagonal_scatter_cpu_float32 PASSED [ 15%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diff_cpu_float32 FAILED [ 15%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_digamma_cpu_float32 FAILED [ 16%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_dist_cpu_float32 FAILED [ 16%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_div_floor_rounding_cpu_float32 PASSED [ 16%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_div_no_rounding_mode_cpu_float32 PASSED [ 16%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_div_trunc_rounding_cpu_float32 PASSED [ 16%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_dot_cpu_float32 PASSED [ 16%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_double_cpu_float32 PASSED [ 16%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_dsplit_cpu_float32 FAILED [ 17%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_dstack_cpu_float32 PASSED [ 17%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_einsum_cpu_float32 PASSED [ 17%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_empty_cpu_float32 SKIPPED [ 17%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_empty_like_cpu_float32 SKIPPED [ 17%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_eq_cpu_float32 SKIPPED [ 17%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_equal_cpu_float32 SKIPPED [ 18%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_erf_cpu_float32 PASSED [ 18%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_erfc_cpu_float32 PASSED [ 18%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_erfinv_cpu_float32 PASSED [ 18%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_exp2_cpu_float32 PASSED [ 18%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_exp_cpu_float32 PASSED [ 18%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_expand_as_cpu_float32 PASSED [ 19%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_expand_cpu_float32 PASSED [ 19%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_expm1_cpu_float32 PASSED [ 19%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_eye_cpu_float32 SKIPPED [ 19%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_fft2_cpu_float32 FAILED [ 19%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_fft_cpu_float32 FAILED [ 19%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_fftn_cpu_float32 FAILED [ 20%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_fftshift_cpu_float32 FAILED [ 20%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_hfft2_cpu_float32 FAILED [ 20%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_hfft_cpu_float32 FAILED [ 20%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_hfftn_cpu_float32 FAILED [ 20%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ifft2_cpu_float32 FAILED [ 20%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ifft_cpu_float32 FAILED [ 21%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ifftn_cpu_float32 FAILED [ 21%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ifftshift_cpu_float32 FAILED [ 21%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ihfft2_cpu_float32 FAILED [ 21%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ihfft_cpu_float32 FAILED [ 21%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ihfftn_cpu_float32 FAILED [ 21%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_irfft2_cpu_float32 FAILED [ 22%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_irfft_cpu_float32 FAILED [ 22%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_irfftn_cpu_float32 FAILED [ 22%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_rfft2_cpu_float32 FAILED [ 22%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_rfft_cpu_float32 FAILED [ 22%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_rfftn_cpu_float32 FAILED [ 22%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fill_cpu_float32 PASSED [ 23%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_flatten_cpu_float32 PASSED [ 23%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_flip_cpu_float32 PASSED [ 23%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fliplr_cpu_float32 PASSED [ 23%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_flipud_cpu_float32 PASSED [ 23%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_float_cpu_float32 PASSED [ 23%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_float_power_cpu_float32 PASSED [ 24%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_floor_cpu_float32 PASSED [ 24%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_floor_divide_cpu_float32 SKIPPED [ 24%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fmax_cpu_float32 PASSED [ 24%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fmin_cpu_float32 PASSED [ 24%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fmod_cpu_float32 PASSED [ 24%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_frac_cpu_float32 PASSED [ 25%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_frexp_cpu_float32 FAILED [ 25%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_full_cpu_float32 SKIPPED [ 25%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_full_like_cpu_float32 SKIPPED [ 25%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_gather_cpu_float32 PASSED [ 25%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ge_cpu_float32 SKIPPED [ 25%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_geqrf_cpu_float32 SKIPPED [ 25%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_gradient_cpu_float32 FAILED [ 26%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_gt_cpu_float32 SKIPPED [ 26%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_half_cpu_float32 PASSED [ 26%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_heaviside_cpu_float32 SKIPPED [ 26%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_histc_cpu_float32 SKIPPED [ 26%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_histogram_cpu_float32 SKIPPED [ 26%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_histogramdd_cpu_float32 SKIPPED [ 27%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_hsplit_cpu_float32 FAILED [ 27%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_hstack_cpu_float32 PASSED [ 27%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_hypot_cpu_float32 PASSED [ 27%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_i0_cpu_float32 FAILED [ 27%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_igamma_cpu_float32 SKIPPED [ 27%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_igammac_cpu_float32 SKIPPED [ 28%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_index_add_cpu_float32 PASSED [ 28%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_index_copy_cpu_float32 PASSED [ 28%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_index_fill_cpu_float32 PASSED [ 28%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_index_put_cpu_float32 SKIPPED [ 28%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_index_reduce_cpu_float32 XFAIL [ 28%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_index_select_cpu_float32 PASSED [ 29%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_inner_cpu_float32 FAILED [ 29%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_int_cpu_float32 SKIPPED [ 29%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_isclose_cpu_float32 SKIPPED [ 29%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_isfinite_cpu_float32 SKIPPED [ 29%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_isin_cpu_float32 SKIPPED [ 29%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_isinf_cpu_float32 SKIPPED [ 30%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_isnan_cpu_float32 SKIPPED [ 30%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_isneginf_cpu_float32 SKIPPED [ 30%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_isposinf_cpu_float32 SKIPPED [ 30%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_isreal_cpu_float32 SKIPPED [ 30%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_jiterator_2inputs_2outputs_cpu_float32 SKIPPED [ 30%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_jiterator_4inputs_with_extra_args_cpu_float32 SKIPPED [ 31%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_jiterator_binary_cpu_float32 SKIPPED [ 31%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_jiterator_binary_return_by_ref_cpu_float32 SKIPPED [ 31%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_jiterator_unary_cpu_float32 SKIPPED [ 31%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_kron_cpu_float32 FAILED [ 31%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_kthvalue_cpu_float32 FAILED [ 31%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ldexp_cpu_float32 PASSED [ 32%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_le_cpu_float32 SKIPPED [ 32%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lerp_cpu_float32 PASSED [ 32%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lgamma_cpu_float32 PASSED [ 32%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_cholesky_cpu_float32 XFAIL [ 32%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_cholesky_ex_cpu_float32 FAILED [ 32%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_cond_cpu_float32 FAILED [ 33%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_cross_cpu_float32 FAILED [ 33%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_det_cpu_float32 FAILED [ 33%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_det_singular_cpu_float32 FAILED [ 33%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_eig_cpu_float32 XFAIL [ 33%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_eigh_cpu_float32 FAILED [ 33%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_eigvals_cpu_float32 FAILED [ 33%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_eigvalsh_cpu_float32 FAILED [ 34%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_householder_product_cpu_float32 SKIPPED [ 34%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_inv_cpu_float32 FAILED [ 34%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_inv_ex_cpu_float32 FAILED [ 34%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_ldl_factor_cpu_float32 SKIPPED [ 34%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_ldl_factor_ex_cpu_float32 SKIPPED [ 34%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_ldl_solve_cpu_float32 SKIPPED [ 35%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lstsq_cpu_float32 FAILED [ 35%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lstsq_grad_oriented_cpu_float32 FAILED [ 35%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lu_cpu_float32 FAILED [ 35%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lu_factor_cpu_float32 FAILED [ 35%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lu_factor_ex_cpu_float32 FAILED [ 35%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lu_solve_cpu_float32 SKIPPED [ 36%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_matrix_norm_cpu_float32 FAILED [ 36%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_matrix_power_cpu_float32 FAILED [ 36%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_matrix_rank_cpu_float32 SKIPPED [ 36%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_matrix_rank_hermitian_cpu_float32 SKIPPED [ 36%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_multi_dot_cpu_float32 FAILED [ 36%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_norm_cpu_float32 FAILED [ 37%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_norm_subgradients_at_zero_cpu_float32 FAILED [ 37%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_pinv_cpu_float32 FAILED [ 37%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_pinv_hermitian_cpu_float32 FAILED [ 37%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_pinv_singular_cpu_float32 SKIPPED [ 37%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_qr_cpu_float32 FAILED [ 37%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_slogdet_cpu_float32 FAILED [ 38%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_solve_cpu_float32 FAILED [ 38%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_solve_ex_cpu_float32 FAILED [ 38%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_solve_triangular_cpu_float32 FAILED [ 38%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_svd_cpu_float32 FAILED [ 38%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_svdvals_cpu_float32 FAILED [ 38%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_tensorinv_cpu_float32 FAILED [ 39%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_tensorsolve_cpu_float32 FAILED [ 39%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_vander_cpu_float32 FAILED [ 39%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_vecdot_cpu_float32 PASSED [ 39%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_vector_norm_cpu_float32 FAILED [ 39%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linspace_cpu_float32 SKIPPED [ 39%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_log10_cpu_float32 PASSED [ 40%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_log1p_cpu_float32 PASSED [ 40%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_log2_cpu_float32 PASSED [ 40%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_log_cpu_float32 PASSED [ 40%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_log_softmax_cpu_float32 PASSED [ 40%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_log_softmax_with_dtype_cpu_float32 PASSED [ 40%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logaddexp2_cpu_float32 FAILED [ 41%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logaddexp_cpu_float32 FAILED [ 41%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logcumsumexp_cpu_float32 FAILED [ 41%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logdet_cpu_float32 FAILED [ 41%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logical_and_cpu_float32 SKIPPED [ 41%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logical_not_cpu_float32 SKIPPED [ 41%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logical_or_cpu_float32 SKIPPED [ 41%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logical_xor_cpu_float32 SKIPPED [ 42%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logit_cpu_float32 PASSED [ 42%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logspace_cpu_float32 SKIPPED [ 42%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logsumexp_cpu_float32 FAILED [ 42%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_long_cpu_float32 SKIPPED [ 42%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lt_cpu_float32 SKIPPED [ 42%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lu_cpu_float32 FAILED [ 43%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lu_solve_cpu_float32 FAILED [ 43%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lu_unpack_cpu_float32 FAILED [ 43%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mH_cpu_float32 PASSED [ 43%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mT_cpu_float32 PASSED [ 43%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_amax_cpu_float32 FAILED [ 43%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_amin_cpu_float32 FAILED [ 44%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_argmax_cpu_float32 SKIPPED [ 44%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_argmin_cpu_float32 SKIPPED [ 44%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_cumprod_cpu_float32 FAILED [ 44%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_cumsum_cpu_float32 FAILED [ 44%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_fill_cpu_float32 FAILED [ 44%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_log_softmax_cpu_float32 PASSED [ 45%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_logaddexp_cpu_float32 FAILED [ 45%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_logsumexp_cpu_float32 FAILED [ 45%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_mean_cpu_float32 PASSED [ 45%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_median_cpu_float32 PASSED [ 45%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_norm_cpu_float32 PASSED [ 45%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_normalize_cpu_float32 PASSED [ 46%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_prod_cpu_float32 FAILED [ 46%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_scatter_cpu_float32 FAILED [ 46%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_select_cpu_float32 FAILED [ 46%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_softmax_cpu_float32 PASSED [ 46%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_softmin_cpu_float32 PASSED [ 46%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_std_cpu_float32 PASSED [ 47%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_sum_cpu_float32 PASSED [ 47%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_var_cpu_float32 PASSED [ 47%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_matmul_cpu_float32 FAILED [ 47%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_matrix_exp_cpu_float32 FAILED [ 47%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_max_binary_cpu_float32 PASSED [ 47%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_max_reduction_no_dim_cpu_float32 PASSED [ 48%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_max_reduction_with_dim_cpu_float32 PASSED [ 48%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_maximum_cpu_float32 PASSED [ 48%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mean_cpu_float32 PASSED [ 48%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_median_cpu_float32 FAILED [ 48%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_meshgrid_list_of_tensors_cpu_float32 FAILED [ 48%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_meshgrid_variadic_tensors_cpu_float32 FAILED [ 49%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_min_binary_cpu_float32 PASSED [ 49%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_min_reduction_no_dim_cpu_float32 PASSED [ 49%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_min_reduction_with_dim_cpu_float32 FAILED [ 49%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_minimum_cpu_float32 PASSED [ 49%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mm_cpu_float32 PASSED [ 49%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mode_cpu_float32 FAILED [ 50%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_movedim_cpu_float32 PASSED [ 50%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_msort_cpu_float32 PASSED [ 50%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mul_cpu_float32 PASSED [ 50%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_multinomial_cpu_float32 SKIPPED [ 50%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mv_cpu_float32 FAILED [ 50%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mvlgamma_mvlgamma_p_1_cpu_float32 FAILED [ 50%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mvlgamma_mvlgamma_p_3_cpu_float32 FAILED [ 51%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mvlgamma_mvlgamma_p_5_cpu_float32 FAILED [ 51%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nan_to_num_cpu_float32 FAILED [ 51%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nanmean_cpu_float32 PASSED [ 51%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nanmedian_cpu_float32 PASSED [ 51%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nanquantile_cpu_float32 XFAIL [ 51%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nansum_cpu_float32 PASSED [ 52%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_narrow_copy_cpu_float32 SKIPPED [ 52%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_narrow_cpu_float32 XFAIL [ 52%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_native_batch_norm_cpu_float32 PASSED [ 52%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_native_layer_norm_cpu_float32 PASSED [ 52%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ne_cpu_float32 SKIPPED [ 52%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_neg_cpu_float32 PASSED [ 53%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_new_empty_cpu_float32 SKIPPED [ 53%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_new_empty_strided_cpu_float32 SKIPPED [ 53%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_new_full_cpu_float32 SKIPPED [ 53%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_new_ones_cpu_float32 SKIPPED [ 53%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_new_zeros_cpu_float32 SKIPPED [ 53%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nextafter_cpu_float32 SKIPPED [ 54%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional__scaled_dot_product_attention_cpu_float32 FAILED [ 54%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_avg_pool1d_cpu_float32 PASSED [ 54%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_avg_pool2d_cpu_float32 PASSED [ 54%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_avg_pool3d_cpu_float32 FAILED [ 54%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool1d_cpu_float32 FAILED [ 54%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool2d_cpu_float32 FAILED [ 55%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool3d_cpu_float32 FAILED [ 55%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_avg_pool1d_cpu_float32 PASSED [ 55%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_avg_pool2d_cpu_float32 PASSED [ 55%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_avg_pool3d_cpu_float32 FAILED [ 55%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_batch_norm_cpu_float32 SKIPPED [ 55%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_bilinear_cpu_float32 FAILED [ 56%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_binary_cross_entropy_cpu_float32 FAILED [ 56%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_binary_cross_entropy_with_logits_cpu_float32 SKIPPED [ 56%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_celu_cpu_float32 PASSED [ 56%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_conv1d_cpu_float32 PASSED [ 56%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_conv2d_cpu_float32 PASSED [ 56%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_conv_transpose1d_cpu_float32 PASSED [ 57%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_conv_transpose2d_cpu_float32 PASSED [ 57%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_conv_transpose3d_cpu_float32 PASSED [ 57%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_cosine_embedding_loss_cpu_float32 FAILED [ 57%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_cosine_similarity_cpu_float32 FAILED [ 57%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_cross_entropy_cpu_float32 FAILED [ 57%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_ctc_loss_cpu_float32 FAILED [ 58%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_dropout2d_cpu_float32 PASSED [ 58%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_dropout3d_cpu_float32 PASSED [ 58%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_dropout_cpu_float32 PASSED [ 58%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_elu_cpu_float32 PASSED [ 58%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_embedding_bag_cpu_float32 FAILED [ 58%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_embedding_cpu_float32 PASSED [ 58%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_feature_alpha_dropout_with_train_cpu_float32 PASSED [ 59%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_feature_alpha_dropout_without_train_cpu_float32 PASSED [ 59%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_fractional_max_pool2d_cpu_float32 FAILED [ 59%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_fractional_max_pool3d_cpu_float32 FAILED [ 59%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_gaussian_nll_loss_cpu_float32 XFAIL [ 59%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_gelu_cpu_float32 PASSED [ 59%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_glu_cpu_float32 PASSED [ 60%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_grid_sample_cpu_float32 FAILED [ 60%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_group_norm_cpu_float32 FAILED [ 60%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_hardshrink_cpu_float32 PASSED [ 60%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_hardsigmoid_cpu_float32 PASSED [ 60%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_hardswish_cpu_float32 PASSED [ 60%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_hardtanh_cpu_float32 PASSED [ 61%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_hinge_embedding_loss_cpu_float32 FAILED [ 61%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_huber_loss_cpu_float32 PASSED [ 61%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_instance_norm_cpu_float32 FAILED [ 61%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_area_cpu_float32 FAILED [ 61%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_bicubic_cpu_float32 FAILED [ 61%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_bilinear_cpu_float32 PASSED [ 62%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_linear_cpu_float32 FAILED [ 62%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_nearest_cpu_float32 FAILED [ 62%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_trilinear_cpu_float32 FAILED [ 62%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_kl_div_cpu_float32 PASSED [ 62%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_l1_loss_cpu_float32 PASSED [ 62%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_layer_norm_cpu_float32 PASSED [ 63%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_leaky_relu_cpu_float32 PASSED [ 63%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_linear_cpu_float32 PASSED [ 63%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_local_response_norm_cpu_float32 PASSED [ 63%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_logsigmoid_cpu_float32 PASSED [ 63%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_margin_ranking_loss_cpu_float32 SKIPPED [ 63%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_pool1d_cpu_float32 FAILED [ 64%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_pool2d_cpu_float32 PASSED [ 64%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_pool3d_cpu_float32 FAILED [ 64%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool1d_cpu_float32 FAILED [ 64%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool1d_grad_cpu_float32 FAILED [ 64%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool2d_cpu_float32 FAILED [ 64%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool2d_grad_cpu_float32 FAILED [ 65%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool3d_cpu_float32 FAILED [ 65%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool3d_grad_cpu_float32 FAILED [ 65%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_mish_cpu_float32 PASSED [ 65%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_mse_loss_cpu_float32 PASSED [ 65%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_multi_margin_loss_cpu_float32 FAILED [ 65%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_multilabel_margin_loss_cpu_float32 FAILED [ 66%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_multilabel_soft_margin_loss_cpu_float32 PASSED [ 66%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_nll_loss_cpu_float32 FAILED [ 66%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_normalize_cpu_float32 FAILED [ 66%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pad_circular_cpu_float32 PASSED [ 66%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pad_constant_cpu_float32 PASSED [ 66%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pad_reflect_cpu_float32 FAILED [ 66%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pad_replicate_cpu_float32 FAILED [ 67%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pairwise_distance_cpu_float32 FAILED [ 67%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pdist_cpu_float32 FAILED [ 67%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pixel_shuffle_cpu_float32 FAILED [ 67%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pixel_unshuffle_cpu_float32 FAILED [ 67%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_poisson_nll_loss_cpu_float32 PASSED [ 67%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_prelu_cpu_float32 FAILED [ 68%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_relu6_cpu_float32 PASSED [ 68%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_relu_cpu_float32 PASSED [ 68%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_rrelu_cpu_float32 FAILED [ 68%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_selu_cpu_float32 PASSED [ 68%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_silu_cpu_float32 PASSED [ 68%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_smooth_l1_loss_cpu_float32 FAILED [ 69%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_soft_margin_loss_cpu_float32 PASSED [ 69%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_softmin_cpu_float32 PASSED [ 69%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_softmin_with_dtype_cpu_float32 PASSED [ 69%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_softplus_cpu_float32 PASSED [ 69%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_softshrink_cpu_float32 PASSED [ 69%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_softsign_cpu_float32 PASSED [ 70%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_tanhshrink_cpu_float32 PASSED [ 70%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_threshold_cpu_float32 PASSED [ 70%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_triplet_margin_loss_cpu_float32 PASSED [ 70%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_triplet_margin_with_distance_loss_cpu_float32 PASSED [ 70%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_unfold_cpu_float32 PASSED [ 70%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_upsample_bilinear_cpu_float32 PASSED [ 71%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_upsample_nearest_cpu_float32 FAILED [ 71%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nonzero_cpu_float32 SKIPPED [ 71%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_norm_cpu_float32 FAILED [ 71%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_norm_fro_cpu_float32 PASSED [ 71%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_norm_inf_cpu_float32 PASSED [ 71%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_norm_nuc_cpu_float32 FAILED [ 72%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_normal_cpu_float32 FAILED [ 72%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_normal_number_mean_cpu_float32 FAILED [ 72%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ones_cpu_float32 SKIPPED [ 72%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ones_like_cpu_float32 SKIPPED [ 72%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ormqr_cpu_float32 FAILED [ 72%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_outer_cpu_float32 FAILED [ 73%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_pca_lowrank_cpu_float32 FAILED [ 73%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_permute_cpu_float32 PASSED [ 73%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_pinverse_cpu_float32 FAILED [ 73%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polar_cpu_float32 FAILED [ 73%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_0_cpu_float32 FAILED [ 73%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_1_cpu_float32 FAILED [ 74%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_2_cpu_float32 FAILED [ 74%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_3_cpu_float32 FAILED [ 74%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_4_cpu_float32 FAILED [ 74%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_positive_cpu_float32 PASSED [ 74%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_pow_cpu_float32 PASSED [ 74%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_prod_cpu_float32 FAILED [ 75%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_put_cpu_float32 FAILED [ 75%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_qr_cpu_float32 FAILED [ 75%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_quantile_cpu_float32 XFAIL [ 75%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_rad2deg_cpu_float32 FAILED [ 75%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_rand_like_cpu_float32 SKIPPED [ 75%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_randint_cpu_float32 SKIPPED [ 75%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_randint_like_cpu_float32 SKIPPED [ 76%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_randn_cpu_float32 SKIPPED [ 76%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_randn_like_cpu_float32 SKIPPED [ 76%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ravel_cpu_float32 PASSED [ 76%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_real_cpu_float32 PASSED [ 76%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_reciprocal_cpu_float32 PASSED [ 76%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_remainder_cpu_float32 PASSED [ 77%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_renorm_cpu_float32 FAILED [ 77%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_repeat_cpu_float32 PASSED [ 77%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_repeat_interleave_cpu_float32 SKIPPED [ 77%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_reshape_as_cpu_float32 PASSED [ 77%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_reshape_cpu_float32 PASSED [ 77%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_resize__cpu_float32 SKIPPED [ 78%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_resize_as__cpu_float32 SKIPPED [ 78%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_resolve_conj_cpu_float32 PASSED [ 78%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_resolve_neg_cpu_float32 PASSED [ 78%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_roll_cpu_float32 FAILED [ 78%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_rot90_cpu_float32 PASSED [ 78%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_round_cpu_float32 FAILED [ 79%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_round_decimals_0_cpu_float32 FAILED [ 79%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_round_decimals_3_cpu_float32 FAILED [ 79%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_round_decimals_neg_3_cpu_float32 FAILED [ 79%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_rsqrt_cpu_float32 PASSED [ 79%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_rsub_cpu_float32 PASSED [ 79%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_scalar_tensor_cpu_float32 SKIPPED [ 80%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_scatter_add_cpu_float32 PASSED [ 80%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_scatter_cpu_float32 PASSED [ 80%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_scatter_reduce_amax_cpu_float32 PASSED [ 80%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_scatter_reduce_amin_cpu_float32 PASSED [ 80%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_scatter_reduce_mean_cpu_float32 PASSED [ 80%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_scatter_reduce_prod_cpu_float32 XFAIL [ 81%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_scatter_reduce_sum_cpu_float32 PASSED [ 81%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_searchsorted_cpu_float32 SKIPPED [ 81%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_segment_reduce_lengths_cpu_float32 FAILED [ 81%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_segment_reduce_offsets_cpu_float32 FAILED [ 81%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_select_cpu_float32 PASSED [ 81%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_select_scatter_cpu_float32 PASSED [ 82%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sgn_cpu_float32 FAILED [ 82%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_short_cpu_float32 SKIPPED [ 82%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sigmoid_cpu_float32 PASSED [ 82%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sign_cpu_float32 PASSED [ 82%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_signal_windows_cosine_cpu_float32 SKIPPED [ 82%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_signal_windows_exponential_cpu_float32 SKIPPED [ 83%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_signal_windows_gaussian_cpu_float32 SKIPPED [ 83%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_signal_windows_kaiser_cpu_float32 SKIPPED [ 83%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_signbit_cpu_float32 SKIPPED [ 83%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sin_cpu_float32 PASSED [ 83%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sinc_cpu_float32 PASSED [ 83%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sinh_cpu_float32 PASSED [ 83%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_slice_cpu_float32 PASSED [ 84%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_slice_scatter_cpu_float32 PASSED [ 84%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_softmax_cpu_float32 PASSED [ 84%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_softmax_with_dtype_cpu_float32 PASSED [ 84%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sort_cpu_float32 PASSED [ 84%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sparse_sampled_addmm_cpu_float32 XFAIL [ 84%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_airy_ai_cpu_float32 SKIPPED [ 85%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_bessel_j0_cpu_float32 SKIPPED [ 85%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_bessel_j1_cpu_float32 SKIPPED [ 85%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_bessel_y0_cpu_float32 SKIPPED [ 85%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_bessel_y1_cpu_float32 SKIPPED [ 85%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_chebyshev_polynomial_t_cpu_float32 SKIPPED [ 85%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_chebyshev_polynomial_u_cpu_float32 SKIPPED [ 86%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_chebyshev_polynomial_v_cpu_float32 SKIPPED [ 86%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_chebyshev_polynomial_w_cpu_float32 SKIPPED [ 86%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_entr_cpu_float32 PASSED [ 86%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_erfcx_cpu_float32 PASSED [ 86%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_hermite_polynomial_h_cpu_float32 SKIPPED [ 86%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_hermite_polynomial_he_cpu_float32 SKIPPED [ 87%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_i0e_cpu_float32 PASSED [ 87%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_i1_cpu_float32 FAILED [ 87%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_i1e_cpu_float32 PASSED [ 87%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_laguerre_polynomial_l_cpu_float32 SKIPPED [ 87%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_legendre_polynomial_p_cpu_float32 SKIPPED [ 87%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_log_ndtr_cpu_float32 PASSED [ 88%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_modified_bessel_i0_cpu_float32 SKIPPED [ 88%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_modified_bessel_i1_cpu_float32 SKIPPED [ 88%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_modified_bessel_k0_cpu_float32 SKIPPED [ 88%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_modified_bessel_k1_cpu_float32 SKIPPED [ 88%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_ndtr_cpu_float32 PASSED [ 88%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_ndtri_cpu_float32 PASSED [ 89%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_polygamma_special_polygamma_n_0_cpu_float32 FAILED [ 89%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_scaled_modified_bessel_k0_cpu_float32 SKIPPED [ 89%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_scaled_modified_bessel_k1_cpu_float32 SKIPPED [ 89%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_shifted_chebyshev_polynomial_t_cpu_float32 SKIPPED [ 89%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_shifted_chebyshev_polynomial_u_cpu_float32 SKIPPED [ 89%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_shifted_chebyshev_polynomial_v_cpu_float32 SKIPPED [ 90%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_shifted_chebyshev_polynomial_w_cpu_float32 SKIPPED [ 90%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_spherical_bessel_j0_cpu_float32 SKIPPED [ 90%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_xlog1py_cpu_float32 PASSED [ 90%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_zeta_cpu_float32 SKIPPED [ 90%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_split_cpu_float32 PASSED [ 90%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_split_list_args_cpu_float32 PASSED [ 91%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_split_with_sizes_cpu_float32 PASSED [ 91%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sqrt_cpu_float32 PASSED [ 91%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_square_cpu_float32 PASSED [ 91%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_squeeze_cpu_float32 PASSED [ 91%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_stack_cpu_float32 PASSED [ 91%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_std_cpu_float32 FAILED [ 91%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_std_mean_cpu_float32 FAILED [ 92%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_stft_cpu_float32 FAILED [ 92%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sub_cpu_float32 PASSED [ 92%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sum_cpu_float32 PASSED [ 92%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sum_to_size_cpu_float32 FAILED [ 92%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_svd_cpu_float32 FAILED [ 92%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_svd_lowrank_cpu_float32 FAILED [ 93%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_symeig_cpu_float32 FAILED [ 93%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_t_cpu_float32 PASSED [ 93%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_take_along_dim_cpu_float32 FAILED [ 93%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_take_cpu_float32 FAILED [ 93%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_tan_cpu_float32 PASSED [ 93%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_tanh_cpu_float32 PASSED [ 94%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_tensor_split_cpu_float32 XFAIL [ 94%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_tensordot_cpu_float32 FAILED [ 94%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_tile_cpu_float32 PASSED [ 94%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_to_cpu_float32 FAILED [ 94%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_to_sparse_cpu_float32 XFAIL [ 94%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_topk_cpu_float32 PASSED [ 95%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_trace_cpu_float32 FAILED [ 95%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_transpose_cpu_float32 PASSED [ 95%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_trapezoid_cpu_float32 FAILED [ 95%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_trapz_cpu_float32 FAILED [ 95%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_triangular_solve_cpu_float32 FAILED [ 95%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_tril_cpu_float32 PASSED [ 96%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_triu_cpu_float32 PASSED [ 96%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_true_divide_cpu_float32 PASSED [ 96%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_trunc_cpu_float32 PASSED [ 96%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unbind_cpu_float32 PASSED [ 96%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unflatten_cpu_float32 FAILED [ 96%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unfold_copy_cpu_float32 PASSED [ 97%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unfold_cpu_float32 PASSED [ 97%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_uniform_cpu_float32 SKIPPED [ 97%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unique_consecutive_cpu_float32 SKIPPED [ 97%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unique_cpu_float32 SKIPPED [ 97%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unsqueeze_cpu_float32 PASSED [ 97%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_var_cpu_float32 FAILED [ 98%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_var_mean_cpu_float32 FAILED [ 98%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_vdot_cpu_float32 PASSED [ 98%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_view_as_complex_cpu_float32 FAILED [ 98%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_view_as_cpu_float32 FAILED [ 98%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_view_cpu_float32 PASSED [ 98%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_vsplit_cpu_float32 FAILED [ 99%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_vstack_cpu_float32 PASSED [ 99%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_where_cpu_float32 PASSED [ 99%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_xlogy_cpu_float32 PASSED [ 99%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_zero__cpu_float32 PASSED [ 99%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_zeros_cpu_float32 SKIPPED [ 99%] | |
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_zeros_like_cpu_float32 SKIPPED [100%] | |
=================================== FAILURES =================================== | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive___rmatmul___cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd | |
fx_g = make_fx( | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped | |
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs)) | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace | |
graph = tracer.trace(root, concrete_args) | |
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace | |
(self.create_arg(fn(*args)),), | |
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn | |
tree_out = root_fn(*tree_args) | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped | |
out = f(*tensors) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner | |
outs = f(*f_args, **f_kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward | |
backward_out = torch.autograd.grad( | |
File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad | |
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
----------------------------- Captured stdout call ----------------------------- | |
incomplete graph: | |
class joint_forward_backward(torch.nn.Module): | |
def forward(self, primals, tangents): | |
primals_1: f32[s0], [1], primals_2: f32[s1, s0], [s0, 1], tangents_1: f32[s1], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec) | |
# File: /scratch/ezyang/work/pytorch/torch/_decomp/decompositions.py:2266, code: return torch.mv(tensor1, tensor2) | |
mv: f32[s1], [1] = torch.ops.aten.mv.default(primals_2, primals_1); primals_2 = primals_1 = None | |
# Gradient addition node due to multiple use of tensor around: | |
is_same_size = torch.ops.aten.is_same_size.default(mv, tangents_1); mv = tangents_1 = None | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_addmv_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.addmv.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_addr_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 853, in __torch_dispatch__
    return decomposition_table[func](*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_prims_common/wrappers.py", line 212, in _fn
    result = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_prims_common/wrappers.py", line 119, in _fn
    result = fn(**bound.arguments)
  File "/scratch/ezyang/work/pytorch/torch/_refs/__init__.py", line 2410, in addr
    return beta * self + alpha * torch.outer(vec1, vec2)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 832, in __torch_dispatch__
    r = func.decompose(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_ops.py", line 306, in decompose
    return self._op_dk(dk, *args, **kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_amax_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0, s1], [s1, 1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
        amax: f32[s0], [1] = torch.ops.aten.amax.default(primals_1, [-1]); primals_1 = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(amax, tangents_1); amax = tangents_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_amin_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0, s1], [s1, 1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
        amin: f32[s0], [1] = torch.ops.aten.amin.default(primals_1, [-1]); primals_1 = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(amin, tangents_1); amin = tangents_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_baddbmm_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.baddbmm.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_block_diag_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/functional.py", line 1173, in block_diag
    return torch._C._VariableFunctions.block_diag(tensors)  # type: ignore[attr-defined]
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cartesian_prod_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/functional.py", line 1137, in cartesian_prod
    return _VF.cartesian_prod(tensors)  # type: ignore[attr-defined]
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cdist_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/functional.py", line 1224, in cdist
    return _VF.cdist(x1, x2, p, 1)  # type: ignore[attr-defined]
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cdouble_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 465, in __torch_dispatch__
    return self.inner_torch_dispatch(func, types, args, kwargs)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 490, in inner_torch_dispatch
    out = proxy_call(self, func, args, kwargs)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 323, in proxy_call
    out = func(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_ops.py", line 284, in __call__
    return self._op(*args, **kwargs or {})
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.view_as_real.default - couldn't find symbolic meta function/decomposition
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0, s1], [s1, 1], tangents_1: c128[s0, s1], [s1, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
        _to_copy: c128[s0, s1], [s1, 1] = torch.ops.aten._to_copy.default(primals_1, dtype = torch.complex128); primals_1 = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(_to_copy, tangents_1); _to_copy = None
        view_as_real = torch.ops.aten.view_as_real.default(tangents_1); tangents_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cfloat_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 465, in __torch_dispatch__
    return self.inner_torch_dispatch(func, types, args, kwargs)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 490, in inner_torch_dispatch
    out = proxy_call(self, func, args, kwargs)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 323, in proxy_call
    out = func(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_ops.py", line 284, in __call__
    return self._op(*args, **kwargs or {})
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.view_as_real.default - couldn't find symbolic meta function/decomposition
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0, s1], [s1, 1], tangents_1: c64[s0, s1], [s1, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
        _to_copy: c64[s0, s1], [s1, 1] = torch.ops.aten._to_copy.default(primals_1, dtype = torch.complex64); primals_1 = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(_to_copy, tangents_1); _to_copy = None
        view_as_real = torch.ops.aten.view_as_real.default(tangents_1); tangents_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cholesky_inverse_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.cholesky_inverse.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cholesky_solve_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.cholesky_solve.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_column_stack_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_combinations_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_complex_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 465, in __torch_dispatch__
    return self.inner_torch_dispatch(func, types, args, kwargs)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 490, in inner_torch_dispatch
    out = proxy_call(self, func, args, kwargs)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 323, in proxy_call
    out = func(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_ops.py", line 284, in __call__
    return self._op(*args, **kwargs or {})
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.view_as_real.default - couldn't find symbolic meta function/decomposition
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0], [1], primals_2: f32[], [], tangents_1: c64[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
        complex_1: c64[s0], [1] = torch.ops.aten.complex.default(primals_1, primals_2); primals_1 = primals_2 = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(complex_1, tangents_1); complex_1 = None
        view_as_real = torch.ops.aten.view_as_real.default(tangents_1); tangents_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cross_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.linalg_cross.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cummax_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.cummax.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cummin_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
raise RuntimeError(
RuntimeError: aten.cummin.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cumprod_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
raise RuntimeError(
RuntimeError: aten.cumprod.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cumsum_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
fx_g = make_fx(
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
out = f(*tensors)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
outs = f(*f_args, **f_kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s0, s0], [s0**2, s0, 1], tangents_1: f32[s0, s0, s0], [s0**2, s0, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
cumsum: f32[s0, s0, s0], [s0**2, s0, 1] = torch.ops.aten.cumsum.default(primals_1, 0); primals_1 = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(cumsum, tangents_1); cumsum = tangents_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cumulative_trapezoid_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
fx_g = make_fx(
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
out = f(*tensors)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
outs = f(*f_args, **f_kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s1], [s1, 1], primals_2: f32[s0, s1], [s1, 1], tangents_1: f32[s0, s1 - 1], [s1 - 1, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
slice_1: f32[s0, s1 - 1], [s1, 1] = torch.ops.aten.slice.Tensor(primals_2, 1, 0, -1)
slice_2: f32[s0, s1 - 1], [s1, 1] = torch.ops.aten.slice.Tensor(primals_2, 1, 1); primals_2 = None
sub: f32[s0, s1 - 1], [s1 - 1, 1] = torch.ops.aten.sub.Tensor(slice_2, slice_1); slice_2 = slice_1 = None
slice_3: f32[s0, s1 - 1], [s1, 1] = torch.ops.aten.slice.Tensor(primals_1, 1, 0, -1)
slice_4: f32[s0, s1 - 1], [s1, 1] = torch.ops.aten.slice.Tensor(primals_1, 1, 1); primals_1 = None
add: f32[s0, s1 - 1], [s1 - 1, 1] = torch.ops.aten.add.Tensor(slice_3, slice_4); slice_3 = slice_4 = None
mul: f32[s0, s1 - 1], [s1 - 1, 1] = torch.ops.aten.mul.Tensor(add, sub); add = sub = None
cumsum: f32[s0, s1 - 1], [s1 - 1, 1] = torch.ops.aten.cumsum.default(mul, 1); mul = None
div: f32[s0, s1 - 1], [s1 - 1, 1] = torch.ops.aten.div.Scalar(cumsum, 2.0); cumsum = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(div, tangents_1); div = None
# No stacktrace found for following nodes
_tensor_constant0 = self._tensor_constant0
# Gradient addition node due to multiple use of tensor around:
lift_fresh_copy: f64[], [] = torch.ops.aten.lift_fresh_copy.default(_tensor_constant0); _tensor_constant0 = None
div_1: f32[s0, s1 - 1], [s1 - 1, 1] = torch.ops.aten.div.Tensor(tangents_1, lift_fresh_copy); tangents_1 = lift_fresh_copy = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_deg2rad_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
raise RuntimeError(
RuntimeError: aten.deg2rad.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_diff_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_digamma_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
fx_g = make_fx(
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
out = f(*tensors)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
outs = f(*f_args, **f_kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 465, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 490, in inner_torch_dispatch
out = proxy_call(self, func, args, kwargs)
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 323, in proxy_call
out = func(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/torch/_ops.py", line 284, in __call__
return self._op(*args, **kwargs or {})
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
raise RuntimeError(
RuntimeError: aten.polygamma.default - couldn't find symbolic meta function/decomposition
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0], [1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
digamma: f32[s0], [1] = torch.ops.aten.digamma.default(primals_1)
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(digamma, tangents_1); digamma = tangents_1 = None
polygamma = torch.ops.aten.polygamma.default(1, primals_1); primals_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_dist_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
raise RuntimeError(
RuntimeError: aten.dist.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_dsplit_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_fft2_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_fft_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_fftn_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_fftshift_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_hfft2_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_hfft_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_hfftn_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_ifft2_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_ifft_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_ifftn_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_ifftshift_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_ihfft2_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_ihfft_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_ihfftn_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_irfft2_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_irfft_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_irfftn_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_rfft2_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_rfft_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_rfftn_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_frexp_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.frexp.Tensor - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_gradient_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_hsplit_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_i0_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.i0.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_inner_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_kron_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_kthvalue_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.kthvalue.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_cholesky_ex_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_cholesky_ex.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_cond_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_cross_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_cross.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_det_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten._linalg_det.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_det_singular_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten._linalg_det.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_eigh_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten._linalg_eigh.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_eigvals_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_eig.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_eigvalsh_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten._linalg_eigh.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_inv_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_inv_ex.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_inv_ex_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_inv_ex.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_lstsq_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_lstsq.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_lstsq_grad_oriented_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/opinfo/definitions/linalg.py", line 1499, in <lambda>
    op=lambda a, b, driver: torch.linalg.lstsq(a, b, driver=driver)[0],
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_lstsq.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_lu_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_lu.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_lu_factor_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_lu_factor_ex.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_lu_factor_ex_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_lu_factor_ex.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_matrix_norm_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten._linalg_svd.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_matrix_power_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_multi_dot_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_norm_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0], [1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
        linalg_vector_norm: f32[], [] = torch.ops.aten.linalg_vector_norm.default(primals_1, 0.5)
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(linalg_vector_norm, tangents_1); linalg_vector_norm = tangents_1 = None
        abs_1: f32[s0], [1] = torch.ops.aten.abs.default(primals_1); primals_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_norm_subgradients_at_zero_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0], [1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
        linalg_vector_norm: f32[], [] = torch.ops.aten.linalg_vector_norm.default(primals_1, 0.5)
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(linalg_vector_norm, tangents_1); linalg_vector_norm = tangents_1 = None
        abs_1: f32[s0], [1] = torch.ops.aten.abs.default(primals_1); primals_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_pinv_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_pinv.atol_rtol_tensor - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_pinv_hermitian_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_pinv.atol_rtol_tensor - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_qr_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_qr.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_slogdet_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten._linalg_slogdet.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_solve_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten._linalg_solve_ex.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_solve_ex_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten._linalg_solve_ex.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_solve_triangular_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_solve_triangular.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_svd_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten._linalg_svd.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_svdvals_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten._linalg_svd.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_tensorinv_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_tensorsolve_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_vander_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_vector_norm_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0], [1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
        linalg_vector_norm: f32[], [] = torch.ops.aten.linalg_vector_norm.default(primals_1, 6)
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(linalg_vector_norm, tangents_1); linalg_vector_norm = tangents_1 = None
        abs_1: f32[s0], [1] = torch.ops.aten.abs.default(primals_1); primals_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_logaddexp2_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.logaddexp2.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_logaddexp_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.logaddexp.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_logcumsumexp_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.logcumsumexp.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_logdet_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_logsumexp_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0, s0], [s0, 1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
        logsumexp: f32[s0], [1] = torch.ops.aten.logsumexp.default(primals_1, [1]); primals_1 = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(logsumexp, tangents_1); logsumexp = tangents_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_lu_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_jit_internal.py", line 483, in fn
    return if_true(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/functional.py", line 1711, in _lu_with_infos
    result = _lu_impl(A, pivot, get_infos, out)
  File "/scratch/ezyang/work/pytorch/torch/functional.py", line 1690, in _lu_impl
    return torch._lu_with_info(A, pivot=pivot, check_errors=(not get_infos))
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_lu_factor_ex.default - couldn't find symbolic meta function/decomposition
----------------------------- Captured stderr call -----------------------------
/scratch/ezyang/work/pytorch/torch/functional.py:1690: UserWarning: torch.lu is deprecated in favor of torch.linalg.lu_factor / torch.linalg.lu_factor_ex and will be removed in a future PyTorch release.
LU, pivots = torch.lu(A, compute_pivots)
should be replaced with
LU, pivots = torch.linalg.lu_factor(A, compute_pivots)
and
LU, pivots, info = torch.lu(A, compute_pivots, get_infos=True)
should be replaced with
LU, pivots, info = torch.linalg.lu_factor_ex(A, compute_pivots) (Triggered internally at /scratch/ezyang/work/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp:2028.)
  return torch._lu_with_info(A, pivot=pivot, check_errors=(not get_infos))
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_lu_solve_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_lu_solve.default - couldn't find symbolic meta function/decomposition
----------------------------- Captured stderr call -----------------------------
/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239: UserWarning: torch.lu_solve is deprecated in favor of torch.linalg.lu_solve and will be removed in a future PyTorch release.
Note that torch.linalg.lu_solve has its arguments reversed.
X = torch.lu_solve(B, LU, pivots)
should be replaced with
X = torch.linalg.lu_solve(LU, pivots, B) (Triggered internally at /scratch/ezyang/work/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp:2182.)
  return op.op(*c_args, **c_kwargs)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_lu_unpack_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.lu_unpack.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_amax_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0, s1], [s1, 1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/torch/masked/_ops.py:1228, code: return torch.amax(mask_input, dim_, bool(keepdim)).to(dtype=dtype)
        amax: f32[s0], [1] = torch.ops.aten.amax.default(primals_1, [1]); primals_1 = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(amax, tangents_1); amax = tangents_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_amin_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0, s1], [s1, 1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/torch/masked/_ops.py:1278, code: return torch.amin(mask_input, dim_, bool(keepdim)).to(dtype=dtype)
        amin: f32[s0], [1] = torch.ops.aten.amin.default(primals_1, [1]); primals_1 = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(amin, tangents_1); amin = tangents_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_cumprod_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/masked/_ops.py", line 1196, in cumprod
    return torch.cumprod(mask_input, dim_, dtype=dtype).to(dtype=dtype)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.cumprod.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_cumsum_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0], [1], primals_2: b8[s0], [1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # No stacktrace found for following nodes
        _tensor_constant0 = self._tensor_constant0
        # File: /scratch/ezyang/work/pytorch/torch/masked/_ops.py:417, code: return torch.tensor(0, dtype=dtype, device=device)
        lift_fresh_copy: f32[], [] = torch.ops.aten.lift_fresh_copy.default(_tensor_constant0); _tensor_constant0 = None
        # File: /scratch/ezyang/work/pytorch/torch/masked/_ops.py:849, code: return torch.where(mask, input, fill_value)
        where: f32[s0], [1] = torch.ops.aten.where.self(primals_2, primals_1, lift_fresh_copy); primals_2 = primals_1 = lift_fresh_copy = None
        # File: /scratch/ezyang/work/pytorch/torch/masked/_ops.py:1176, code: return torch.cumsum(mask_input, dim_, dtype=dtype).to(dtype=dtype)
        cumsum: f32[s0], [1] = torch.ops.aten.cumsum.default(where, 0, dtype = torch.float32); where = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(cumsum, tangents_1); cumsum = tangents_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_fill_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 853, in __torch_dispatch__
    return decomposition_table[func](*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_refs/__init__.py", line 4649, in masked_fill
    value = value.item()
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_logaddexp_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/masked/_ops.py", line 1496, in logaddexp
    return torch.logaddexp(mask_input, mask_other).to(dtype=dtype)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.logaddexp.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_logsumexp_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0], [1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/torch/masked/_ops.py:1474, code: return torch.logsumexp(mask_input, dim_, keepdim=keepdim).to(dtype=dtype)
        logsumexp: f32[], [] = torch.ops.aten.logsumexp.default(primals_1, [0]); primals_1 = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(logsumexp, tangents_1); logsumexp = tangents_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_prod_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0], [1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/torch/masked/_ops.py:1128, code: result = result.prod(dim=d, keepdim=bool(keepdim))
        prod: f32[], [] = torch.ops.aten.prod.dim_int(primals_1, 0); primals_1 = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(prod, tangents_1); prod = tangents_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_scatter_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.masked_scatter.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_select_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.masked_select.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_matmul_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0, s1], [s1, 1], primals_2: f32[s1], [1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/torch/_decomp/decompositions.py:2266, code: return torch.mv(tensor1, tensor2)
        mv: f32[s0], [1] = torch.ops.aten.mv.default(primals_1, primals_2); primals_1 = primals_2 = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(mv, tangents_1); mv = tangents_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_matrix_exp_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.linalg_matrix_exp.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_median_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.median.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_meshgrid_list_of_tensors_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/functional.py", line 489, in meshgrid
    return _meshgrid(*tensors, indexing=indexing)
  File "/scratch/ezyang/work/pytorch/torch/functional.py", line 504, in _meshgrid
    return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_meshgrid_variadic_tensors_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/functional.py", line 489, in meshgrid
    return _meshgrid(*tensors, indexing=indexing)
  File "/scratch/ezyang/work/pytorch/torch/functional.py", line 504, in _meshgrid
    return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_min_reduction_with_dim_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.min.dim - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_mode_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.mode.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_mv_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0, s1], [s1, 1], primals_2: f32[s1], [1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
        mv: f32[s0], [1] = torch.ops.aten.mv.default(primals_1, primals_2); primals_1 = primals_2 = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(mv, tangents_1); mv = tangents_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_mvlgamma_mvlgamma_p_1_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
fx_g = make_fx(
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
out = f(*tensors)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
outs = f(*f_args, **f_kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s0], [s0, 1], tangents_1: f32[s0, s0], [s0, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
mvlgamma: f32[s0, s0], [s0, 1] = torch.ops.aten.mvlgamma.default(primals_1, 1)
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(mvlgamma, tangents_1); mvlgamma = tangents_1 = None
arange: f32[1], [1] = torch.ops.aten.arange.start_step(0.0, 0.5, 0.5, dtype = torch.float32, layout = torch.strided, device = device(type='cpu'))
unsqueeze: f32[s0, s0, 1], [s0, 1, 0] = torch.ops.aten.unsqueeze.default(primals_1, -1); primals_1 = None
add: f32[s0, s0, 1], [s0, 1, 1] = torch.ops.aten.add.Tensor(arange, unsqueeze); arange = unsqueeze = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_mvlgamma_mvlgamma_p_3_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
fx_g = make_fx(
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
out = f(*tensors)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
outs = f(*f_args, **f_kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s0], [s0, 1], tangents_1: f32[s0, s0], [s0, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
mvlgamma: f32[s0, s0], [s0, 1] = torch.ops.aten.mvlgamma.default(primals_1, 1)
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(mvlgamma, tangents_1); mvlgamma = tangents_1 = None
arange: f32[1], [1] = torch.ops.aten.arange.start_step(0.0, 0.5, 0.5, dtype = torch.float32, layout = torch.strided, device = device(type='cpu'))
unsqueeze: f32[s0, s0, 1], [s0, 1, 0] = torch.ops.aten.unsqueeze.default(primals_1, -1); primals_1 = None
add: f32[s0, s0, 1], [s0, 1, 1] = torch.ops.aten.add.Tensor(arange, unsqueeze); arange = unsqueeze = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_mvlgamma_mvlgamma_p_5_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
fx_g = make_fx(
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
out = f(*tensors)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
outs = f(*f_args, **f_kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s0], [s0, 1], tangents_1: f32[s0, s0], [s0, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
mvlgamma: f32[s0, s0], [s0, 1] = torch.ops.aten.mvlgamma.default(primals_1, 1)
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(mvlgamma, tangents_1); mvlgamma = tangents_1 = None
arange: f32[1], [1] = torch.ops.aten.arange.start_step(0.0, 0.5, 0.5, dtype = torch.float32, layout = torch.strided, device = device(type='cpu'))
unsqueeze: f32[s0, s0, 1], [s0, 1, 0] = torch.ops.aten.unsqueeze.default(primals_1, -1); primals_1 = None
add: f32[s0, s0, 1], [s0, 1, 1] = torch.ops.aten.add.Tensor(arange, unsqueeze); arange = unsqueeze = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nan_to_num_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 853, in __torch_dispatch__
return decomposition_table[func](*args, **kwargs)
File "/scratch/ezyang/work/pytorch/torch/_prims_common/wrappers.py", line 212, in _fn
result = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/torch/_refs/__init__.py", line 727, in nan_to_num
posinf = prims.maximum_value(a.dtype)
File "/scratch/ezyang/work/pytorch/torch/_ops.py", line 284, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot cast FakeTensor(FakeTensor(..., device='meta', size=()), cpu) to number
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional__scaled_dot_product_attention_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 11792, in <lambda>
wrapper_set_seed(torch.nn.functional._scaled_dot_product_attention, inp, *args, **kwargs),
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_avg_pool3d_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
fx_g = make_fx(
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
out = f(*tensors)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
outs = f(*f_args, **f_kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 465, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 490, in inner_torch_dispatch
out = proxy_call(self, func, args, kwargs)
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 323, in proxy_call
out = func(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/torch/_ops.py", line 284, in __call__
return self._op(*args, **kwargs or {})
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
raise RuntimeError(
RuntimeError: aten._adaptive_avg_pool3d_backward.default - couldn't find symbolic meta function/decomposition
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[0, s0, s0, s0, s0], [s0**4, s0**3, s0**2, s0, 1], tangents_1: f32[0, s0, 5, 7, 4], [140*s0, 140, 28, 4, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /scratch/ezyang/work/pytorch/torch/nn/functional.py:1231, code: return torch._C._nn.adaptive_avg_pool3d(input, _output_size)
_adaptive_avg_pool3d: f32[0, s0, 5, 7, 4], [140*s0, 140, 28, 4, 1] = torch.ops.aten._adaptive_avg_pool3d.default(primals_1, [5, 7, 4])
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(_adaptive_avg_pool3d, tangents_1); _adaptive_avg_pool3d = None
_adaptive_avg_pool3d_backward = torch.ops.aten._adaptive_avg_pool3d_backward.default(tangents_1, primals_1); tangents_1 = primals_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool1d_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
File "/scratch/ezyang/work/pytorch/torch/_jit_internal.py", line 483, in fn
return if_true(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 1080, in adaptive_max_pool1d_with_indices
return torch.adaptive_max_pool1d(input, output_size)
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
raise RuntimeError(
RuntimeError: aten.adaptive_max_pool2d.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool2d_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
File "/scratch/ezyang/work/pytorch/torch/_jit_internal.py", line 483, in fn
return if_true(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 1121, in adaptive_max_pool2d_with_indices
return torch._C._nn.adaptive_max_pool2d(input, output_size)
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
raise RuntimeError(
RuntimeError: aten.adaptive_max_pool2d.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool3d_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
File "/scratch/ezyang/work/pytorch/torch/_jit_internal.py", line 483, in fn
return if_true(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 1162, in adaptive_max_pool3d_with_indices
return torch._C._nn.adaptive_max_pool3d(input, output_size)
TypeError: adaptive_max_pool3d(): argument 'output_size' (position 2) must be tuple of ints, but found element of type SymInt at pos 1
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_avg_pool3d_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
raise RuntimeError(
RuntimeError: aten.avg_pool3d.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_bilinear_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
out = flat_fn(*flat_args)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
tree_out = fn(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_binary_cross_entropy_cpu_float32 _
Traceback (most recent call last):
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
compiled_fn = create_aot_dispatcher_function(
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
fx_g = make_fx(
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
out = f(*tensors)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
outs = f(*f_args, **f_kwargs)
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[], [], primals_2: f32[], [], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /scratch/ezyang/work/pytorch/torch/nn/functional.py:3097, code: return torch._C._nn.binary_cross_entropy(input, target, weight, reduction_enum)
binary_cross_entropy: f32[], [] = torch.ops.aten.binary_cross_entropy.default(primals_1, primals_2)
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(binary_cross_entropy, tangents_1); binary_cross_entropy = None
binary_cross_entropy_backward: f32[], [] = torch.ops.aten.binary_cross_entropy_backward.default(tangents_1, primals_1, primals_2, None); primals_2 = None
empty_like: f32[], [] = torch.ops.aten.empty_like.default(primals_1, memory_format = torch.preserve_format)
fill: f32[], [] = torch.ops.aten.fill.Scalar(empty_like, 1.0); empty_like = None
sub: f32[], [] = torch.ops.aten.sub.Tensor(fill, primals_1); fill = None
log: f32[], [] = torch.ops.aten.log.default(sub); sub = None
log_1: f32[], [] = torch.ops.aten.log.default(primals_1); primals_1 = None
sub_1: f32[], [] = torch.ops.aten.sub.Tensor(log, log_1); log = log_1 = None
mul: f32[], [] = torch.ops.aten.mul.Tensor(sub_1, tangents_1); sub_1 = tangents_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_cosine_embedding_loss_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 3492, in cosine_embedding_loss
    return torch.cosine_embedding_loss(input1, input2, target, margin, reduction_enum)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.sqrt_.default - couldn't find symbolic meta function/decomposition
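Several failures in this log share this shape: the fake-tensor mode looks an op up in a decomposition table and raises when nothing is registered. A minimal sketch of that lookup pattern, using illustrative names (`decomposition_table`, `dispatch`, `register_decomp` are stand-ins, not PyTorch's real internals):

```python
# Illustrative sketch of the table lookup behind the
# "couldn't find symbolic meta function/decomposition" errors above.
# All names here are stand-ins, not PyTorch internals.

decomposition_table = {}

def register_decomp(op_name):
    """Decorator that records a decomposition for an op name."""
    def wrap(fn):
        decomposition_table[op_name] = fn
        return fn
    return wrap

def dispatch(op_name, *args, **kwargs):
    """Run an op symbolically if a decomposition is registered,
    else fail the same way the fake-tensor mode does."""
    fn = decomposition_table.get(op_name)
    if fn is None:
        raise RuntimeError(
            f"{op_name} - couldn't find symbolic meta function/decomposition"
        )
    return fn(*args, **kwargs)

@register_decomp("aten.sqrt.default")
def sqrt_decomp(x):
    return x ** 0.5

print(dispatch("aten.sqrt.default", 9.0))   # 3.0
# In-place variants like aten.sqrt_.default stay unregistered in this
# sketch, so dispatching them raises RuntimeError as in the log.
```

Under this model, fixing a failure like the one above means registering a decomposition (or meta function) for the missing op.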
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_cosine_similarity_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_cross_entropy_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 3028, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_ctc_loss_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 2630, in ctc_loss
    return torch.ctc_loss(
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten._ctc_loss.Tensor - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_embedding_bag_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 465, in __torch_dispatch__
    return self.inner_torch_dispatch(func, types, args, kwargs)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 490, in inner_torch_dispatch
    out = proxy_call(self, func, args, kwargs)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 323, in proxy_call
    out = func(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_ops.py", line 284, in __call__
    return self._op(*args, **kwargs or {})
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten._embedding_bag_per_sample_weights_backward.default - couldn't find symbolic meta function/decomposition
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0, s1], [s1, 1], primals_2: i64[s1], [1], primals_3: i64[s2], [1], primals_4: f32[s1], [1], tangents_1: f32[s2, s1], [s1, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/torch/nn/functional.py:2392, code: ret, _, _, _ = torch.embedding_bag(
        _embedding_bag = torch.ops.aten._embedding_bag.default(primals_1, primals_2, primals_3, False, 0, False, primals_4); primals_4 = None
        getitem: f32[s2, s1], [s1, 1] = _embedding_bag[0]
        getitem_1: i64[0], [1] = _embedding_bag[1]
        getitem_2: i64[s2], [1] = _embedding_bag[2]
        getitem_3: i64[s2], [1] = _embedding_bag[3]; _embedding_bag = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(getitem, tangents_1); getitem = None
        _embedding_bag_per_sample_weights_backward = torch.ops.aten._embedding_bag_per_sample_weights_backward.default(tangents_1, primals_1, primals_2, primals_3, getitem_1, 0); tangents_1 = primals_1 = primals_2 = primals_3 = getitem_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_fractional_max_pool2d_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 11394, in <lambda>
    wrapper_set_seed(torch.nn.functional.fractional_max_pool2d, input, *args, **kwargs),
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_jit_internal.py", line 485, in fn
    return if_false(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 503, in _fractional_max_pool2d
    return fractional_max_pool2d_with_indices(
  File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 481, in fractional_max_pool2d_with_indices
    _random_samples = torch.rand(n_batch, input.size(-3), 2, dtype=input.dtype, device=input.device)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.rand.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_fractional_max_pool3d_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 11415, in <lambda>
    wrapper_set_seed(torch.nn.functional.fractional_max_pool3d, input, *args, **kwargs),
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_jit_internal.py", line 485, in fn
    return if_false(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 608, in _fractional_max_pool3d
    return fractional_max_pool3d_with_indices(
  File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 586, in fractional_max_pool3d_with_indices
    _random_samples = torch.rand(n_batch, input.size(-4), 3, dtype=input.dtype, device=input.device)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.rand.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_grid_sample_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 4241, in grid_sample
    return torch.grid_sampler(input, grid, mode_enum, padding_mode_enum, align_corners)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 853, in __torch_dispatch__
    return decomposition_table[func](*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_decomp/decompositions.py", line 68, in inner
    r = f(*tree_map(increase_prec, args), **tree_map(increase_prec, kwargs))
  File "/scratch/ezyang/work/pytorch/torch/_decomp/decompositions.py", line 2127, in grid_sampler_2d
    ix_nearest = ix.round()
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.round.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_group_norm_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 465, in __torch_dispatch__
    return self.inner_torch_dispatch(func, types, args, kwargs)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 490, in inner_torch_dispatch
    out = proxy_call(self, func, args, kwargs)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 323, in proxy_call
    out = func(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_ops.py", line 284, in __call__
    return self._op(*args, **kwargs or {})
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 853, in __torch_dispatch__
    return decomposition_table[func](*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_decomp/decompositions.py", line 68, in inner
    r = f(*tree_map(increase_prec, args), **tree_map(increase_prec, kwargs))
  File "/scratch/ezyang/work/pytorch/torch/_decomp/decompositions.py", line 1185, in native_group_norm_backward
    cpg, _rem = divmod(C, group)
TypeError: unsupported operand type(s) for divmod(): 'SymInt' and 'int'
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[1, s0, s1], [s0*s1, s1, 1], primals_2: f32[s0], [1], primals_3: f32[s0], [1], tangents_1: f32[1, 2*s0//2, s1], [2*s1*s0//2, s1, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/torch/utils/_pytree.py:244, code: return f(x)
        sym_size: Sym(1) = torch.ops.aten.sym_size(primals_1, 0)
        sym_size_1: Sym(s0) = torch.ops.aten.sym_size(primals_1, 1)
        sym_size_2: Sym(s1) = torch.ops.aten.sym_size(primals_1, 2)
        # File: /scratch/ezyang/work/pytorch/torch/nn/functional.py:2530, code: return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
        native_group_norm = torch.ops.aten.native_group_norm.default(primals_1, primals_2, primals_3, sym_size, sym_size_1, sym_size_2, 2, 0.5); primals_3 = None
        getitem: f32[1, 2*s0//2, s1], [2*s1*s0//2, s1, 1] = native_group_norm[0]
        getitem_1: f32[1, 2], [2, 1] = native_group_norm[1]
        getitem_2: f32[1, 2], [2, 1] = native_group_norm[2]; native_group_norm = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(getitem, tangents_1); getitem = None
        native_group_norm_backward = torch.ops.aten.native_group_norm_backward.default(tangents_1, primals_1, getitem_1, getitem_2, primals_2, sym_size, sym_size_1, sym_size_2, 2, [True, True, True]); tangents_1 = primals_1 = getitem_1 = getitem_2 = primals_2 = sym_size = sym_size_1 = sym_size_2 = None
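The group_norm failure above is different in kind: `divmod()` only consults `__divmod__`, so a symbolic-integer type that implements `//` and `%` but not `__divmod__` raises exactly this TypeError. A minimal reproduction with a hypothetical `SymIntLike` class (a stand-in, not PyTorch's real `SymInt`), plus the obvious workaround of computing quotient and remainder separately:

```python
# Hypothetical SymIntLike: supports // and % but not divmod(),
# mirroring the TypeError raised by native_group_norm_backward above.

class SymIntLike:
    """Stand-in symbolic int with floor division and modulo only."""
    def __init__(self, hint):
        self.hint = hint
    def __floordiv__(self, other):
        return SymIntLike(self.hint // other)
    def __mod__(self, other):
        return SymIntLike(self.hint % other)

C = SymIntLike(6)
group = 2

try:
    cpg, _rem = divmod(C, group)  # mirrors the decomposition's call
except TypeError as e:
    # "unsupported operand type(s) for divmod(): 'SymIntLike' and 'int'"
    print("divmod failed:", e)

# Workaround: avoid divmod() and use // and % explicitly.
cpg, _rem = C // group, C % group
print(cpg.hint, _rem.hint)  # 3 0
```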
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_hinge_embedding_loss_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 3363, in hinge_embedding_loss
    return torch.hinge_embedding_loss(input, target, margin, reduction_enum)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.clamp_min_.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_instance_norm_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 499, in aot_dispatch_autograd
    assert_functional_graph(fx_g.graph)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 401, in assert_functional_graph
    fx_g.print_readable()
AttributeError: 'Graph' object has no attribute 'print_readable'
----------------------------- Captured stdout call -----------------------------
====== Buggy post-functionalization graph ======
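Note that for instance_norm the error reporting path itself crashed: `assert_functional_graph` is handed `fx_g.graph` (a `Graph`), then calls `print_readable()` on it, so the AttributeError masks the real post-functionalization failure. A defensive sketch of the debug print, using a hypothetical `FakeGraph` stand-in rather than `torch.fx.Graph`:

```python
# Sketch of a defensive debug print: fall back to str() when the
# object lacks print_readable(), so the diagnostic survives instead
# of raising AttributeError. FakeGraph is a stand-in, not torch.fx.Graph.

class FakeGraph:
    def __str__(self):
        return "graph(%x : f32[]):\n    return %x"

def debug_print(graph):
    printer = getattr(graph, "print_readable", None)
    if callable(printer):
        printer()          # GraphModule-style objects print themselves
    else:
        print(str(graph))  # plain Graph: fall back to its str() form

debug_print(FakeGraph())
```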
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_area_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 3941, in interpolate
    return adaptive_avg_pool1d(input, output_size)
TypeError: adaptive_avg_pool1d(): argument 'output_size' (position 2) must be tuple of ints, but found element of type SymInt at pos 0
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_bicubic_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 3964, in interpolate
    return torch._C._nn.upsample_bicubic2d(input, output_size, align_corners, scale_factors)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_linear_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 3951, in interpolate
    return torch._C._nn.upsample_linear1d(input, output_size, align_corners, scale_factors)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_nearest_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 3926, in interpolate | |
return torch._C._nn.upsample_nearest1d(input, output_size, scale_factors) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_trilinear_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 3959, in interpolate | |
return torch._C._nn.upsample_trilinear3d(input, output_size, align_corners, scale_factors) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_max_pool1d_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_jit_internal.py", line 485, in fn | |
return if_false(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 696, in _max_pool1d | |
return torch.max_pool1d(input, kernel_size, stride, padding, dilation, ceil_mode) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_max_pool3d_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_jit_internal.py", line 483, in fn | |
return if_true(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 843, in max_pool3d_with_indices | |
return torch._C._nn.max_pool3d_with_indices(input, kernel_size, stride, padding, dilation, ceil_mode) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.max_pool3d_with_indices.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool1d_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 948, in max_unpool1d | |
return torch._C._nn.max_unpool2d(input.unsqueeze(-1), indices.unsqueeze(-1), output_size).squeeze(-1) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.max_unpool2d.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool1d_grad_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 948, in max_unpool1d | |
return torch._C._nn.max_unpool2d(input.unsqueeze(-1), indices.unsqueeze(-1), output_size).squeeze(-1) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.max_unpool2d.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool2d_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 980, in max_unpool2d | |
return torch._C._nn.max_unpool2d(input, indices, output_size) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.max_unpool2d.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool2d_grad_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 980, in max_unpool2d | |
return torch._C._nn.max_unpool2d(input, indices, output_size) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.max_unpool2d.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool3d_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 1012, in max_unpool3d | |
return torch._C._nn.max_unpool3d(input, indices, output_size, _stride, padding) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.max_unpool3d.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool3d_grad_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 1012, in max_unpool3d | |
return torch._C._nn.max_unpool3d(input, indices, output_size, _stride, padding) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.max_unpool3d.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_multi_margin_loss_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 3532, in multi_margin_loss | |
return torch._C._nn.multi_margin_loss(input, target, p, margin, weight, reduction_enum) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.multi_margin_loss.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_multilabel_margin_loss_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 3391, in multilabel_margin_loss | |
return torch._C._nn.multilabel_margin_loss(input, target, reduction_enum) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.multilabel_margin_loss_forward.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_nll_loss_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 2703, in nll_loss | |
return torch._C._nn.nll_loss_nd(input, target, weight, _Reduction.get_enum(reduction), ignore_index) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.nll_loss2d_forward.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_normalize_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd | |
fx_g = make_fx( | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped | |
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs)) | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace | |
graph = tracer.trace(root, concrete_args) | |
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace | |
(self.create_arg(fn(*args)),), | |
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn | |
tree_out = root_fn(*tree_args) | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped | |
out = f(*tensors) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner | |
outs = f(*f_args, **f_kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward | |
backward_out = torch.autograd.grad( | |
File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad | |
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
----------------------------- Captured stdout call ----------------------------- | |
incomplete graph: | |
class joint_forward_backward(torch.nn.Module): | |
def forward(self, primals, tangents): | |
primals_1: f32[1, s0, s1, s2], [s0*s1*s2, s1*s2, s2, 1], tangents_1: f32[1, s0, s1, s2], [s0*s1*s2, s1*s2, s2, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec) | |
# File: /scratch/ezyang/work/pytorch/torch/functional.py:1537, code: return _VF.norm(input, p, _dim, keepdim=keepdim) # type: ignore[attr-defined] | |
norm: f32[1, s0, s1, s2], [s0*s1*s2, s1*s2, s2, 1] = torch.ops.aten.norm.ScalarOpt_dim(primals_1, 0.5, [0], True) | |
# File: /scratch/ezyang/work/pytorch/torch/nn/functional.py:4657, code: denom = input.norm(p, dim, keepdim=True).clamp_min(eps).expand_as(input) | |
clamp_min: f32[1, s0, s1, s2], [s0*s1*s2, s1*s2, s2, 1] = torch.ops.aten.clamp_min.default(norm, 1e-12) | |
# File: /scratch/ezyang/work/pytorch/torch/utils/_pytree.py:244, code: return f(x) | |
sym_size: Sym(1) = torch.ops.aten.sym_size(primals_1, 0) | |
sym_size_1: Sym(s0) = torch.ops.aten.sym_size(primals_1, 1) | |
sym_size_2: Sym(s1) = torch.ops.aten.sym_size(primals_1, 2) | |
sym_size_3: Sym(s2) = torch.ops.aten.sym_size(primals_1, 3) | |
# File: /scratch/ezyang/work/pytorch/torch/nn/functional.py:4657, code: denom = input.norm(p, dim, keepdim=True).clamp_min(eps).expand_as(input) | |
expand: f32[1, s0, s1, s2], [s0*s1*s2, s1*s2, s2, 1] = torch.ops.aten.expand.default(clamp_min, [sym_size, sym_size_1, sym_size_2, sym_size_3]); clamp_min = sym_size = sym_size_1 = sym_size_2 = sym_size_3 = None | |
# File: /scratch/ezyang/work/pytorch/torch/nn/functional.py:4658, code: return input / denom | |
div: f32[1, s0, s1, s2], [s0*s1*s2, s1*s2, s2, 1] = torch.ops.aten.div.Tensor(primals_1, expand) | |
# Gradient addition node due to multiple use of tensor around: | |
is_same_size = torch.ops.aten.is_same_size.default(div, tangents_1); div = None | |
div_1: f32[1, s0, s1, s2], [s0*s1*s2, s1*s2, s2, 1] = torch.ops.aten.div.Tensor(primals_1, expand) | |
div_2: f32[1, s0, s1, s2], [s0*s1*s2, s1*s2, s2, 1] = torch.ops.aten.div.Tensor(div_1, expand); div_1 = None | |
neg: f32[1, s0, s1, s2], [s0*s1*s2, s1*s2, s2, 1] = torch.ops.aten.neg.default(tangents_1) | |
mul: f32[1, s0, s1, s2], [s0*s1*s2, s1*s2, s2, 1] = torch.ops.aten.mul.Tensor(neg, div_2); neg = div_2 = None | |
div_3: f32[1, s0, s1, s2], [s0*s1*s2, s1*s2, s2, 1] = torch.ops.aten.div.Tensor(tangents_1, expand); tangents_1 = expand = None | |
scalar_tensor: f32[], [] = torch.ops.aten.scalar_tensor.default(0.0, dtype = torch.float32, layout = torch.strided, device = device(type='cpu')) | |
ge: b8[1, s0, s1, s2], [s0*s1*s2, s1*s2, s2, 1] = torch.ops.aten.ge.Scalar(norm, 1e-12); norm = None | |
where: f32[1, s0, s1, s2], [s0*s1*s2, s1*s2, s2, 1] = torch.ops.aten.where.self(ge, mul, scalar_tensor); ge = mul = scalar_tensor = None | |
abs_1: f32[1, s0, s1, s2], [s0*s1*s2, s1*s2, s2, 1] = torch.ops.aten.abs.default(primals_1); primals_1 = None | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_pad_reflect_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.reflection_pad1d.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_pad_replicate_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.replication_pad1d.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_pairwise_distance_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0], [1], primals_2: f32[s0], [1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
        sub: f32[s0], [1] = torch.ops.aten.sub.Tensor(primals_1, primals_2); primals_1 = primals_2 = None
        add: f32[s0], [1] = torch.ops.aten.add.Scalar(sub, 1e-06); sub = None
        norm: f32[], [] = torch.ops.aten.norm.ScalarOpt_dim(add, 5.0, [0])
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(norm, tangents_1)
        unsqueeze: f32[1], [0] = torch.ops.aten.unsqueeze.default(tangents_1, 0); tangents_1 = None
        unsqueeze_1: f32[1], [0] = torch.ops.aten.unsqueeze.default(norm, 0); norm = None
        abs_1: f32[s0], [1] = torch.ops.aten.abs.default(add); add = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_pdist_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten._pdist_forward.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_pixel_shuffle_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.pixel_shuffle.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_pixel_unshuffle_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.pixel_unshuffle.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_prelu_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 465, in __torch_dispatch__
    return self.inner_torch_dispatch(func, types, args, kwargs)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 490, in inner_torch_dispatch
    out = proxy_call(self, func, args, kwargs)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 323, in proxy_call
    out = func(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_ops.py", line 284, in __call__
    return self._op(*args, **kwargs or {})
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 853, in __torch_dispatch__
    return decomposition_table[func](*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_decomp/decompositions.py", line 68, in inner
    r = f(*tree_map(increase_prec, args), **tree_map(increase_prec, kwargs))
  File "/scratch/ezyang/work/pytorch/torch/_decomp/decompositions.py", line 291, in prelu_backward
    out = weight_grad_collector.sum_to_size(cur_weight.shape)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 832, in __torch_dispatch__
    r = func.decompose(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_ops.py", line 306, in decompose
    return self._op_dk(dk, *args, **kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0], [1], primals_2: f32[], [], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
        prelu: f32[s0], [1] = torch.ops.aten.prelu.default(primals_1, primals_2)
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(prelu, tangents_1); prelu = None
        prelu_backward = torch.ops.aten.prelu_backward.default(tangents_1, primals_1, primals_2); tangents_1 = primals_1 = primals_2 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_rrelu_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 11729, in <lambda>
    wrapper_set_seed(torch.nn.functional.rrelu, input, *args, **kwargs),
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 1682, in rrelu
    result = torch.rrelu(input, lower, upper, training)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.rrelu_with_noise.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_smooth_l1_loss_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 3202, in smooth_l1_loss
    return torch._C._nn.smooth_l1_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction), beta)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.smooth_l1_loss.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_upsample_nearest_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 4021, in upsample_nearest
    return interpolate(input, size, scale_factor, mode="nearest")
  File "/scratch/ezyang/work/pytorch/torch/nn/functional.py", line 3926, in interpolate
    return torch._C._nn.upsample_nearest1d(input, output_size, scale_factors)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stderr call -----------------------------
/scratch/ezyang/work/pytorch/torch/nn/functional.py:4020: UserWarning: nn.functional.upsample_nearest is deprecated. Use nn.functional.interpolate instead.
  warnings.warn("nn.functional.upsample_nearest is deprecated. Use nn.functional.interpolate instead.")
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_norm_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0, s0], [s0, 1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/torch/functional.py:1493, code: return _VF.norm(input, p, dim=_dim, keepdim=keepdim)  # type: ignore[attr-defined]
        norm: f32[], [] = torch.ops.aten.norm.ScalarOpt_dim(primals_1, 0.5, [0, 1])
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(norm, tangents_1)
        unsqueeze: f32[1], [0] = torch.ops.aten.unsqueeze.default(tangents_1, 0); tangents_1 = None
        unsqueeze_1: f32[1, 1], [0, 0] = torch.ops.aten.unsqueeze.default(unsqueeze, 1); unsqueeze = None
        unsqueeze_2: f32[1], [0] = torch.ops.aten.unsqueeze.default(norm, 0); norm = None
        unsqueeze_3: f32[1, 1], [0, 0] = torch.ops.aten.unsqueeze.default(unsqueeze_2, 1); unsqueeze_2 = None
        abs_1: f32[s0, s0], [s0, 1] = torch.ops.aten.abs.default(primals_1); primals_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_norm_nuc_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/functional.py", line 1522, in norm
    return _VF.nuclear_norm(input, keepdim=keepdim)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten._linalg_svd.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_normal_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 14612, in <lambda>
    wrapper_set_seed(torch.normal, inp, *args, **kwargs),
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.normal.Tensor_Tensor - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_normal_number_mean_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 14635, in <lambda>
    wrapper_set_seed(torch.normal, mean, std, *args, **kwargs),
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.normal.float_Tensor - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_ormqr_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.ormqr.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_outer_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_pca_lowrank_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13524, in <lambda> | |
op=lambda *args, **kwargs: wrapper_set_seed( | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13525, in <lambda> | |
lambda a, b, **kwargs: torch.pca_lowrank(a @ b.mT, **kwargs), | |
File "/scratch/ezyang/work/pytorch/torch/_lowrank.py", line 275, in pca_lowrank | |
return _svd_lowrank(A, q, niter=niter, M=None) | |
File "/scratch/ezyang/work/pytorch/torch/_lowrank.py", line 161, in _svd_lowrank | |
Q = get_approximate_basis(A_t, q, niter=niter, M=M_t) | |
File "/scratch/ezyang/work/pytorch/torch/_lowrank.py", line 70, in get_approximate_basis | |
Q = torch.linalg.qr(matmul(A, R)).Q | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.linalg_qr.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_pinverse_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.linalg_pinv.atol_rtol_tensor - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_polar_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.polar.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_0_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13571, in <lambda> | |
op=lambda x, n, **kwargs: torch.polygamma(n, x, **kwargs), | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.polygamma.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_1_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13584, in <lambda> | |
op=lambda x, n, **kwargs: torch.polygamma(n, x, **kwargs), | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.polygamma.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_2_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13607, in <lambda> | |
op=lambda x, n, **kwargs: torch.polygamma(n, x, **kwargs), | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.polygamma.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_3_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13632, in <lambda> | |
op=lambda x, n, **kwargs: torch.polygamma(n, x, **kwargs), | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.polygamma.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_4_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13653, in <lambda> | |
op=lambda x, n, **kwargs: torch.polygamma(n, x, **kwargs), | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.polygamma.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_prod_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd | |
fx_g = make_fx( | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped | |
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs)) | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace | |
graph = tracer.trace(root, concrete_args) | |
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace | |
(self.create_arg(fn(*args)),), | |
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn | |
tree_out = root_fn(*tree_args) | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped | |
out = f(*tensors) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner | |
outs = f(*f_args, **f_kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward | |
backward_out = torch.autograd.grad( | |
File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad | |
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass | |
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides | |
----------------------------- Captured stdout call ----------------------------- | |
incomplete graph: | |
class joint_forward_backward(torch.nn.Module): | |
def forward(self, primals, tangents): | |
primals_1: f32[s0, s0, s0], [s0**2, s0, 1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec) | |
# File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs) | |
prod: f32[], [] = torch.ops.aten.prod.default(primals_1) | |
# Gradient addition node due to multiple use of tensor around: | |
is_same_size = torch.ops.aten.is_same_size.default(prod, tangents_1); prod = tangents_1 = None | |
view: f32[s0**3], [1] = torch.ops.aten.view.default(primals_1, [-1]); primals_1 = None | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_put_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.put.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_qr_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.linalg_qr.default - couldn't find symbolic meta function/decomposition | |
----------------------------- Captured stderr call ----------------------------- | |
/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239: UserWarning: torch.qr is deprecated in favor of torch.linalg.qr and will be removed in a future PyTorch release. | |
The boolean parameter 'some' has been replaced with a string parameter 'mode'. | |
Q, R = torch.qr(A, some) | |
should be replaced with | |
Q, R = torch.linalg.qr(A, 'reduced' if some else 'complete') (Triggered internally at /scratch/ezyang/work/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp:2458.) | |
return op.op(*c_args, **c_kwargs) | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_rad2deg_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.rad2deg.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_renorm_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.renorm.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_roll_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 853, in __torch_dispatch__
    return decomposition_table[func](*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_refs/__init__.py", line 3185, in roll
    t0 = torch.narrow(a, dim, start, size - start)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 832, in __torch_dispatch__
    r = func.decompose(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_ops.py", line 306, in decompose
    return self._op_dk(dk, *args, **kwargs)
IndexError: Dimension out of range (expected to be in range of [-s0, s0 - 1], but got Mod(-2, s0))
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_round_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.round.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_round_decimals_0_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.round.decimals - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_round_decimals_3_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.round.decimals - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_round_decimals_neg_3_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.round.decimals - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_segment_reduce_lengths_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.segment_reduce.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_segment_reduce_offsets_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.segment_reduce.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_sgn_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0, s0], [s0, 1], tangents_1: f32[s0, s0], [s0, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
        sgn: f32[s0, s0], [s0, 1] = torch.ops.aten.sgn.default(primals_1);  primals_1 = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(sgn, tangents_1);  sgn = tangents_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_special_i1_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 465, in __torch_dispatch__
    return self.inner_torch_dispatch(func, types, args, kwargs)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 490, in inner_torch_dispatch
    out = proxy_call(self, func, args, kwargs)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 323, in proxy_call
    out = func(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_ops.py", line 284, in __call__
    return self._op(*args, **kwargs or {})
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.i0.default - couldn't find symbolic meta function/decomposition
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0], [1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
        special_i1: f32[s0], [1] = torch.ops.aten.special_i1.default(primals_1)
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(special_i1, tangents_1);  tangents_1 = None
        abs_1: f32[s0], [1] = torch.ops.aten.abs.default(primals_1)
        gt: b8[s0], [1] = torch.ops.aten.gt.Scalar(abs_1, 1.1920928955078125e-07);  abs_1 = None
        full: f32[], [] = torch.ops.aten.full.default([], 1.1920928955078125e-07, dtype = torch.float32, layout = torch.strided, device = device(type='cpu'))
        where: f32[s0], [1] = torch.ops.aten.where.self(gt, primals_1, full);  gt = primals_1 = full = None
        reciprocal: f32[s0], [1] = torch.ops.aten.reciprocal.default(where)
        mul: f32[s0], [1] = torch.ops.aten.mul.Tensor(special_i1, reciprocal);  special_i1 = reciprocal = None
        i0 = torch.ops.aten.i0.default(where);  where = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_special_polygamma_special_polygamma_n_0_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/opinfo/definitions/special.py", line 177, in <lambda>
    op=lambda x, n, **kwargs: torch.special.polygamma(n, x, **kwargs),
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.polygamma.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_std_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0, s0, s0], [s0**2, s0, 1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
        std: f32[], [] = torch.ops.aten.std.correction(primals_1, None, correction = 1);  primals_1 = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(std, tangents_1)
        mul: f32[], [] = torch.ops.aten.mul.Scalar(std, 2)
        div: f32[], [] = torch.ops.aten.div.Tensor(tangents_1, mul);  tangents_1 = mul = None
        eq: b8[], [] = torch.ops.aten.eq.Scalar(std, 0);  std = None
        masked_fill: f32[], [] = torch.ops.aten.masked_fill.Scalar(div, eq, 0);  div = eq = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_std_mean_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd
    fx_g = make_fx(
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped
    t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace
    graph = tracer.trace(root, concrete_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
    (self.create_arg(fn(*args)),),
  File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
    tree_out = root_fn(*tree_args)
  File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped
    out = f(*tensors)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner
    outs = f(*f_args, **f_kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward
    backward_out = torch.autograd.grad(
  File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad
    return Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class joint_forward_backward(torch.nn.Module):
    def forward(self, primals, tangents):
        primals_1: f32[s0, s0, s0], [s0**2, s0, 1], tangents_1: f32[], [], tangents_2: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
        # File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs)
        std_mean = torch.ops.aten.std_mean.correction(primals_1, None, correction = 1);  primals_1 = None
        getitem: f32[], [] = std_mean[0]
        getitem_1: f32[], [] = std_mean[1];  std_mean = None
        # Gradient addition node due to multiple use of tensor around:
        is_same_size = torch.ops.aten.is_same_size.default(getitem, tangents_1)
        is_same_size_1 = torch.ops.aten.is_same_size.default(getitem_1, tangents_2);  getitem_1 = tangents_2 = None
        mul: f32[], [] = torch.ops.aten.mul.Scalar(getitem, 2)
        div: f32[], [] = torch.ops.aten.div.Tensor(tangents_1, mul);  tangents_1 = mul = None
        eq: b8[], [] = torch.ops.aten.eq.Scalar(getitem, 0);  getitem = None
        masked_fill: f32[], [] = torch.ops.aten.masked_fill.Scalar(div, eq, 0);  div = eq = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_stft_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/functional.py", line 639, in stft
    input = F.pad(input.view(extended_shape), [pad, pad], pad_mode)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.reflection_pad1d.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_sum_to_size_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 8860, in <lambda>
    op=lambda x, *args, **kwargs: x.sum_to_size(*args, **kwargs),
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_svd_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten._linalg_svd.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_svd_lowrank_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13497, in <lambda> | |
op=lambda *args, **kwargs: wrapper_set_seed( | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13498, in <lambda> | |
lambda a, b, **kwargs: torch.svd_lowrank(a @ b.mT, **kwargs), | |
File "/scratch/ezyang/work/pytorch/torch/_lowrank.py", line 137, in svd_lowrank | |
return _svd_lowrank(A, q=q, niter=niter, M=M) | |
File "/scratch/ezyang/work/pytorch/torch/_lowrank.py", line 161, in _svd_lowrank | |
Q = get_approximate_basis(A_t, q, niter=niter, M=M_t) | |
File "/scratch/ezyang/work/pytorch/torch/_lowrank.py", line 70, in get_approximate_basis | |
Q = torch.linalg.qr(matmul(A, R)).Q | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.linalg_qr.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_symeig_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.symeig.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_take_along_dim_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_take_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.take.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_tensordot_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/functional.py", line 1100, in tensordot | |
return _VF.tensordot(a, b, dims_a, dims_b) # type: ignore[attr-defined] | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_to_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 12071, in <lambda> | |
op=lambda x, *args, **kwargs: x.to(*args, **kwargs), | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 850, in __torch_dispatch__ | |
r = meta_table[func](*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_meta_registrations.py", line 1573, in _to_copy | |
return torch.empty( | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 853, in __torch_dispatch__ | |
return decomposition_table[func](*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_prims_common/wrappers.py", line 212, in _fn | |
result = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_refs/__init__.py", line 3851, in empty | |
check( | |
File "/scratch/ezyang/work/pytorch/torch/_prims_common/__init__.py", line 1502, in check | |
raise exc_type(s()) | |
RuntimeError: torch.empty: the Preserve memory format is not supported | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_trace_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd | |
fx_g = make_fx( | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped | |
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs)) | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace | |
graph = tracer.trace(root, concrete_args) | |
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace | |
(self.create_arg(fn(*args)),), | |
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn | |
tree_out = root_fn(*tree_args) | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped | |
out = f(*tensors) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner | |
outs = f(*f_args, **f_kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward | |
backward_out = torch.autograd.grad( | |
File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad | |
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass | |
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides | |
----------------------------- Captured stdout call ----------------------------- | |
incomplete graph: | |
class joint_forward_backward(torch.nn.Module): | |
def forward(self, primals, tangents): | |
primals_1: f32[s0, s0], [s0, 1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec) | |
# File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs) | |
trace: f32[], [] = torch.ops.aten.trace.default(primals_1) | |
# Gradient addition node due to multiple use of tensor around: | |
is_same_size = torch.ops.aten.is_same_size.default(trace, tangents_1); trace = tangents_1 = None | |
sym_size: Sym(s0) = torch.ops.aten.sym_size(primals_1, 0) | |
sym_size_1: Sym(s0) = torch.ops.aten.sym_size(primals_1, 1); primals_1 = None | |
# No stacktrace found for following nodes | |
mul: Sym(s0**2) = sym_size * sym_size_1; sym_size = sym_size_1 = None | |
# Gradient addition node due to multiple use of tensor around: | |
zeros: f32[s0**2], [1] = torch.ops.aten.zeros.default([mul], dtype = torch.float32, layout = torch.strided, device = device(type='cpu')); mul = None | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_trapezoid_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_trapz_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_triangular_solve_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__ | |
raise RuntimeError( | |
RuntimeError: aten.triangular_solve.default - couldn't find symbolic meta function/decomposition | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_unflatten_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd | |
out = flat_fn(*flat_args) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn | |
tree_out = fn(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f | |
return op.op(*c_args, **c_kwargs) | |
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_var_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd | |
fx_g = make_fx( | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped | |
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs)) | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace | |
graph = tracer.trace(root, concrete_args) | |
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace | |
(self.create_arg(fn(*args)),), | |
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn | |
tree_out = root_fn(*tree_args) | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped | |
out = f(*tensors) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner | |
outs = f(*f_args, **f_kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward | |
backward_out = torch.autograd.grad( | |
File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad | |
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass | |
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides | |
----------------------------- Captured stdout call ----------------------------- | |
incomplete graph: | |
class joint_forward_backward(torch.nn.Module): | |
def forward(self, primals, tangents): | |
primals_1: f32[s0, s0, s0], [s0**2, s0, 1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec) | |
# File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs) | |
var: f32[], [] = torch.ops.aten.var.correction(primals_1, None, correction = 1); primals_1 = None | |
# Gradient addition node due to multiple use of tensor around: | |
is_same_size = torch.ops.aten.is_same_size.default(var, tangents_1); var = tangents_1 = None | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_var_mean_cpu_float32 _ | |
Traceback (most recent call last): | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive | |
_test_aot_autograd_helper(self, device, dtype, op) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper | |
call_forwards_backwards(compiled_f) | |
File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards | |
out = wrapper_set_seed(f, args) | |
File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed | |
return op(*args, **kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function | |
compiled_fn = create_aot_dispatcher_function( | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function | |
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 495, in aot_dispatch_autograd | |
fx_g = make_fx( | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 665, in wrapped | |
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs)) | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 422, in dispatch_trace | |
graph = tracer.trace(root, concrete_args) | |
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace | |
(self.create_arg(fn(*args)),), | |
File "/scratch/ezyang/work/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn | |
tree_out = root_fn(*tree_args) | |
File "/scratch/ezyang/work/pytorch/torch/fx/experimental/proxy_tensor.py", line 439, in wrapped | |
out = f(*tensors) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 189, in inner | |
outs = f(*f_args, **f_kwargs) | |
File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 257, in joint_forward_backward | |
backward_out = torch.autograd.grad( | |
File "/scratch/ezyang/work/pytorch/torch/autograd/__init__.py", line 300, in grad | |
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass | |
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides | |
----------------------------- Captured stdout call ----------------------------- | |
incomplete graph: | |
class joint_forward_backward(torch.nn.Module): | |
def forward(self, primals, tangents): | |
primals_1: f32[s0, s0, s0], [s0**2, s0, 1], tangents_1: f32[], [], tangents_2: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec) | |
# File: /scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py:1239, code: return op.op(*c_args, **c_kwargs) | |
var_mean = torch.ops.aten.var_mean.correction(primals_1, None, correction = 1); primals_1 = None | |
getitem: f32[], [] = var_mean[0] | |
getitem_1: f32[], [] = var_mean[1]; var_mean = None | |
# Gradient addition node due to multiple use of tensor around: | |
is_same_size = torch.ops.aten.is_same_size.default(getitem, tangents_1); getitem = tangents_1 = None | |
is_same_size_1 = torch.ops.aten.is_same_size.default(getitem_1, tangents_2); getitem_1 = tangents_2 = None | |
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_view_as_complex_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/_subclasses/fake_tensor.py", line 874, in __torch_dispatch__
    raise RuntimeError(
RuntimeError: aten.view_as_complex.default - couldn't find symbolic meta function/decomposition
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_view_as_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13723, in <lambda>
    op=lambda x, other: x.view_as(other),
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_vsplit_cpu_float32 _
Traceback (most recent call last):
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1307, in test_aot_autograd_symbolic_exhaustive
    _test_aot_autograd_helper(self, device, dtype, op)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1263, in _test_aot_autograd_helper
    call_forwards_backwards(compiled_f)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1242, in call_forwards_backwards
    out = wrapper_set_seed(f, args)
  File "/scratch/ezyang/work/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7795, in wrapper_set_seed
    return op(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 827, in returned_function
    compiled_fn = create_aot_dispatcher_function(
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 678, in create_aot_dispatcher_function
    aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 471, in aot_dispatch_autograd
    out = flat_fn(*flat_args)
  File "/scratch/ezyang/work/pytorch/functorch/_src/aot_autograd.py", line 807, in flat_fn
    tree_out = fn(*args, **kwargs)
  File "/scratch/ezyang/work/pytorch/test/functorch/test_aotdispatch.py", line 1239, in f
    return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------- CSV report: test_aotdispatch.csv -----------------------
=========================== short test summary info ============================
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___rmatmul___cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addmv_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addr_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_amax_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_amin_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_baddbmm_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_block_diag_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cartesian_prod_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cdist_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cdouble_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cfloat_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cholesky_inverse_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cholesky_solve_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_column_stack_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_combinations_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_complex_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cross_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cummax_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cummin_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cumprod_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cumsum_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cumulative_trapezoid_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_deg2rad_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diff_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_digamma_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_dist_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_dsplit_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_fft2_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_fft_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_fftn_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_fftshift_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_hfft2_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_hfft_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_hfftn_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ifft2_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ifft_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ifftn_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ifftshift_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ihfft2_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ihfft_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ihfftn_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_irfft2_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_irfft_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_irfftn_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_rfft2_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_rfft_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_rfftn_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_frexp_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_gradient_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_hsplit_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_i0_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_inner_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_kron_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_kthvalue_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_cholesky_ex_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_cond_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_cross_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_det_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_det_singular_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_eigh_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_eigvals_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_eigvalsh_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_inv_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_inv_ex_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lstsq_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lstsq_grad_oriented_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lu_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lu_factor_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lu_factor_ex_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_matrix_norm_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_matrix_power_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_multi_dot_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_norm_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_norm_subgradients_at_zero_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_pinv_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_pinv_hermitian_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_qr_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_slogdet_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_solve_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_solve_ex_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_solve_triangular_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_svd_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_svdvals_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_tensorinv_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_tensorsolve_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_vander_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_vector_norm_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logaddexp2_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logaddexp_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logcumsumexp_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logdet_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logsumexp_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lu_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lu_solve_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lu_unpack_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_amax_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_amin_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_cumprod_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_cumsum_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_fill_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_logaddexp_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_logsumexp_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_prod_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_scatter_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_select_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_matmul_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_matrix_exp_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_median_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_meshgrid_list_of_tensors_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_meshgrid_variadic_tensors_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_min_reduction_with_dim_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mode_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mv_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mvlgamma_mvlgamma_p_1_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mvlgamma_mvlgamma_p_3_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mvlgamma_mvlgamma_p_5_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nan_to_num_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional__scaled_dot_product_attention_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_avg_pool3d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool1d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool2d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool3d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_avg_pool3d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_bilinear_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_binary_cross_entropy_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_cosine_embedding_loss_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_cosine_similarity_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_cross_entropy_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_ctc_loss_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_embedding_bag_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_fractional_max_pool2d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_fractional_max_pool3d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_grid_sample_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_group_norm_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_hinge_embedding_loss_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_instance_norm_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_area_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_bicubic_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_linear_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_nearest_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_trilinear_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_pool1d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_pool3d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool1d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool1d_grad_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool2d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool2d_grad_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool3d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool3d_grad_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_multi_margin_loss_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_multilabel_margin_loss_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_nll_loss_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_normalize_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pad_reflect_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pad_replicate_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pairwise_distance_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pdist_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pixel_shuffle_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pixel_unshuffle_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_prelu_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_rrelu_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_smooth_l1_loss_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_upsample_nearest_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_norm_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_norm_nuc_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_normal_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_normal_number_mean_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ormqr_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_outer_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_pca_lowrank_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_pinverse_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polar_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_0_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_1_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_2_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_3_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_4_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_prod_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_put_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_qr_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_rad2deg_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_renorm_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_roll_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_round_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_round_decimals_0_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_round_decimals_3_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_round_decimals_neg_3_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_segment_reduce_lengths_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_segment_reduce_offsets_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sgn_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_i1_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_polygamma_special_polygamma_n_0_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_std_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_std_mean_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_stft_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sum_to_size_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_svd_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_svd_lowrank_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_symeig_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_take_along_dim_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_take_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_tensordot_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_to_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_trace_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_trapezoid_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_trapz_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_triangular_solve_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unflatten_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_var_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_var_mean_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_view_as_complex_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_view_as_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_vsplit_cpu_float32
= 213 failed, 255 passed, 129 skipped, 658 deselected, 15 xfailed in 1333.02s (0:22:13) =