@ezyang
Created November 20, 2022 01:01
/home/ezyang/local/b/pytorch-env/lib/python3.9/site-packages/pytest_csv/_reporter.py:38: PytestDeprecationWarning: The hookimpl CSVReporter.pytest_runtest_makereport uses old-style configuration options (marks or attributes).
Please use the pytest.hookimpl(hookwrapper=True) decorator instead
to configure the hooks.
See https://docs.pytest.org/en/latest/deprecations.html#configuring-hook-specs-impls-using-markers
@pytest.mark.hookwrapper
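The warning above is about how a plugin declares its hook options: pytest-csv marks its `pytest_runtest_makereport` hook with the old-style `@pytest.mark.hookwrapper`, and pytest now wants the `pytest.hookimpl` decorator instead. A minimal sketch of the migration (a hypothetical reporter stub, not the actual pytest-csv code) might look like:

```python
import pytest

# Old style (deprecated, triggers the PytestDeprecationWarning above):
#
#     @pytest.mark.hookwrapper
#     def pytest_runtest_makereport(item, call): ...
#
# New style: declare the hookwrapper option via pytest.hookimpl.
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    # A hookwrapper yields once: code before the yield runs before the
    # other hook implementations, code after it runs afterwards.
    outcome = yield
    report = outcome.get_result()
    # ... inspect or augment `report` here (e.g. write a CSV row) ...
```

The decorator only attaches configuration to the function; pytest reads it when the plugin is registered, so the fix is a one-line change at the definition site.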
============================= test session starts ==============================
platform linux -- Python 3.9.15, pytest-7.2.0, pluggy-1.0.0 -- /home/ezyang/local/b/pytorch-env/bin/python
cachedir: .pytest_cache
benchmark: 4.0.0 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/data/users/ezyang/b/pytorch/.hypothesis/examples')
rootdir: /data/users/ezyang/b/pytorch, configfile: pytest.ini
plugins: benchmark-4.0.0, hydra-core-1.1.2, hypothesis-6.57.1, csv-3.0.0
collecting ... collected 1297 items / 679 deselected / 618 selected
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_H_cpu_float32 FAILED [ 0%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_T_cpu_float32 FAILED [ 0%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___getitem___cpu_float32 FAILED [ 0%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___radd___cpu_float32 PASSED [ 0%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___rdiv___cpu_float32 PASSED [ 0%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___rmatmul___cpu_float32 FAILED [ 0%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___rmod___cpu_float32 PASSED [ 1%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___rmul___cpu_float32 PASSED [ 1%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___rpow___cpu_float32 PASSED [ 1%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___rsub___cpu_float32 PASSED [ 1%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive__softmax_backward_data_cpu_float32 PASSED [ 1%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_abs_cpu_float32 PASSED [ 1%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_acos_cpu_float32 PASSED [ 2%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_acosh_cpu_float32 PASSED [ 2%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_add_cpu_float32 PASSED [ 2%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addbmm_cpu_float32 PASSED [ 2%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addcdiv_cpu_float32 PASSED [ 2%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addcmul_cpu_float32 PASSED [ 2%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addmm_cpu_float32 PASSED [ 3%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addmm_decomposed_cpu_float32 PASSED [ 3%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addmv_cpu_float32 FAILED [ 3%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addr_cpu_float32 FAILED [ 3%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_all_cpu_float32 SKIPPED [ 3%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_allclose_cpu_float32 SKIPPED [ 3%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_amax_cpu_float32 FAILED [ 4%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_amin_cpu_float32 FAILED [ 4%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_aminmax_cpu_float32 SKIPPED [ 4%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_angle_cpu_float32 PASSED [ 4%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_any_cpu_float32 SKIPPED [ 4%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_arange_cpu_float32 SKIPPED [ 4%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_argmax_cpu_float32 SKIPPED [ 5%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_argmin_cpu_float32 SKIPPED [ 5%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_argsort_cpu_float32 SKIPPED [ 5%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_argwhere_cpu_float32 SKIPPED [ 5%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_as_strided_cpu_float32 PASSED [ 5%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_as_strided_scatter_cpu_float32 SKIPPED [ 5%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_asin_cpu_float32 PASSED [ 5%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_asinh_cpu_float32 PASSED [ 6%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_atan2_cpu_float32 PASSED [ 6%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_atan_cpu_float32 PASSED [ 6%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_atanh_cpu_float32 PASSED [ 6%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_atleast_1d_cpu_float32 FAILED [ 6%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_atleast_2d_cpu_float32 FAILED [ 6%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_atleast_3d_cpu_float32 FAILED [ 7%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_baddbmm_cpu_float32 FAILED [ 7%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_bernoulli_cpu_float32 PASSED [ 7%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_bfloat16_cpu_float32 PASSED [ 7%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_block_diag_cpu_float32 FAILED [ 7%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_bmm_cpu_float32 PASSED [ 7%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_bool_cpu_float32 SKIPPED [ 8%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_broadcast_shapes_cpu_float32 SKIPPED [ 8%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_broadcast_tensors_cpu_float32 FAILED [ 8%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_broadcast_to_cpu_float32 FAILED [ 8%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_bucketize_cpu_float32 SKIPPED [ 8%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_byte_cpu_float32 SKIPPED [ 8%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cartesian_prod_cpu_float32 FAILED [ 9%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cat_cpu_float32 PASSED [ 9%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cdist_cpu_float32 FAILED [ 9%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cdouble_cpu_float32 PASSED [ 9%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ceil_cpu_float32 PASSED [ 9%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cfloat_cpu_float32 PASSED [ 9%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_chalf_cpu_float32 XFAIL [ 10%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_char_cpu_float32 SKIPPED [ 10%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cholesky_cpu_float32 XFAIL [ 10%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cholesky_inverse_cpu_float32 FAILED [ 10%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cholesky_solve_cpu_float32 FAILED [ 10%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_chunk_cpu_float32 FAILED [ 10%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_clamp_cpu_float32 PASSED [ 11%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_clamp_max_cpu_float32 PASSED [ 11%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_clamp_min_cpu_float32 PASSED [ 11%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_clone_cpu_float32 PASSED [ 11%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_column_stack_cpu_float32 FAILED [ 11%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_combinations_cpu_float32 FAILED [ 11%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_complex_cpu_float32 PASSED [ 11%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_conj_cpu_float32 PASSED [ 12%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_conj_physical_cpu_float32 PASSED [ 12%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_constant_pad_nd_cpu_float32 PASSED [ 12%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_contiguous_cpu_float32 PASSED [ 12%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_copysign_cpu_float32 PASSED [ 12%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_corrcoef_cpu_float32 XFAIL [ 12%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cos_cpu_float32 PASSED [ 13%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cosh_cpu_float32 PASSED [ 13%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_count_nonzero_cpu_float32 SKIPPED [ 13%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cov_cpu_float32 XFAIL [ 13%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cross_cpu_float32 FAILED [ 13%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cummax_cpu_float32 FAILED [ 13%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cummin_cpu_float32 FAILED [ 14%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cumprod_cpu_float32 FAILED [ 14%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cumsum_cpu_float32 FAILED [ 14%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cumulative_trapezoid_cpu_float32 FAILED [ 14%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_deg2rad_cpu_float32 PASSED [ 14%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diag_cpu_float32 PASSED [ 14%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diag_embed_cpu_float32 PASSED [ 15%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diagflat_cpu_float32 PASSED [ 15%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diagonal_copy_cpu_float32 PASSED [ 15%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diagonal_cpu_float32 FAILED [ 15%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diagonal_scatter_cpu_float32 PASSED [ 15%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diff_cpu_float32 FAILED [ 15%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_digamma_cpu_float32 FAILED [ 16%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_dist_cpu_float32 PASSED [ 16%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_div_floor_rounding_cpu_float32 PASSED [ 16%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_div_no_rounding_mode_cpu_float32 PASSED [ 16%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_div_trunc_rounding_cpu_float32 PASSED [ 16%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_dot_cpu_float32 PASSED [ 16%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_double_cpu_float32 PASSED [ 16%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_dsplit_cpu_float32 FAILED [ 17%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_dstack_cpu_float32 PASSED [ 17%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_einsum_cpu_float32 PASSED [ 17%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_empty_cpu_float32 SKIPPED [ 17%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_empty_like_cpu_float32 SKIPPED [ 17%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_eq_cpu_float32 SKIPPED [ 17%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_equal_cpu_float32 SKIPPED [ 18%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_erf_cpu_float32 PASSED [ 18%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_erfc_cpu_float32 PASSED [ 18%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_erfinv_cpu_float32 PASSED [ 18%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_exp2_cpu_float32 PASSED [ 18%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_exp_cpu_float32 PASSED [ 18%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_expand_as_cpu_float32 FAILED [ 19%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_expand_cpu_float32 FAILED [ 19%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_expm1_cpu_float32 PASSED [ 19%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_eye_cpu_float32 SKIPPED [ 19%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_fft2_cpu_float32 FAILED [ 19%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_fft_cpu_float32 FAILED [ 19%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_fftn_cpu_float32 FAILED [ 20%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_fftshift_cpu_float32 FAILED [ 20%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_hfft2_cpu_float32 FAILED [ 20%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_hfft_cpu_float32 FAILED [ 20%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_hfftn_cpu_float32 FAILED [ 20%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ifft2_cpu_float32 FAILED [ 20%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ifft_cpu_float32 FAILED [ 21%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ifftn_cpu_float32 FAILED [ 21%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ifftshift_cpu_float32 FAILED [ 21%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ihfft2_cpu_float32 FAILED [ 21%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ihfft_cpu_float32 FAILED [ 21%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ihfftn_cpu_float32 FAILED [ 21%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_irfft2_cpu_float32 FAILED [ 22%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_irfft_cpu_float32 FAILED [ 22%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_irfftn_cpu_float32 FAILED [ 22%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_rfft2_cpu_float32 FAILED [ 22%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_rfft_cpu_float32 FAILED [ 22%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_rfftn_cpu_float32 FAILED [ 22%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fill_cpu_float32 PASSED [ 22%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_flatten_cpu_float32 FAILED [ 23%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_flip_cpu_float32 PASSED [ 23%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fliplr_cpu_float32 PASSED [ 23%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_flipud_cpu_float32 PASSED [ 23%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_float_cpu_float32 PASSED [ 23%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_float_power_cpu_float32 PASSED [ 23%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_floor_cpu_float32 PASSED [ 24%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_floor_divide_cpu_float32 SKIPPED [ 24%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fmax_cpu_float32 PASSED [ 24%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fmin_cpu_float32 PASSED [ 24%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fmod_cpu_float32 PASSED [ 24%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_frac_cpu_float32 PASSED [ 24%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_frexp_cpu_float32 FAILED [ 25%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_full_cpu_float32 SKIPPED [ 25%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_full_like_cpu_float32 SKIPPED [ 25%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_gather_cpu_float32 PASSED [ 25%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ge_cpu_float32 SKIPPED [ 25%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_geqrf_cpu_float32 SKIPPED [ 25%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_gradient_cpu_float32 FAILED [ 26%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_grid_sampler_2d_cpu_float32 PASSED [ 26%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_gt_cpu_float32 SKIPPED [ 26%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_half_cpu_float32 PASSED [ 26%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_heaviside_cpu_float32 SKIPPED [ 26%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_histc_cpu_float32 SKIPPED [ 26%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_histogram_cpu_float32 SKIPPED [ 27%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_histogramdd_cpu_float32 SKIPPED [ 27%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_hsplit_cpu_float32 FAILED [ 27%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_hstack_cpu_float32 PASSED [ 27%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_hypot_cpu_float32 PASSED [ 27%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_i0_cpu_float32 FAILED [ 27%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_igamma_cpu_float32 SKIPPED [ 27%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_igammac_cpu_float32 SKIPPED [ 28%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_index_add_cpu_float32 PASSED [ 28%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_index_copy_cpu_float32 PASSED [ 28%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_index_fill_cpu_float32 PASSED [ 28%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_index_put_cpu_float32 SKIPPED [ 28%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_index_reduce_cpu_float32 XFAIL [ 28%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_index_select_cpu_float32 PASSED [ 29%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_inner_cpu_float32 FAILED [ 29%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_int_cpu_float32 SKIPPED [ 29%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_isclose_cpu_float32 SKIPPED [ 29%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_isfinite_cpu_float32 SKIPPED [ 29%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_isin_cpu_float32 SKIPPED [ 29%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_isinf_cpu_float32 SKIPPED [ 30%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_isnan_cpu_float32 SKIPPED [ 30%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_isneginf_cpu_float32 SKIPPED [ 30%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_isposinf_cpu_float32 SKIPPED [ 30%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_isreal_cpu_float32 SKIPPED [ 30%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_jiterator_2inputs_2outputs_cpu_float32 SKIPPED [ 30%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_jiterator_4inputs_with_extra_args_cpu_float32 SKIPPED [ 31%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_jiterator_binary_cpu_float32 SKIPPED [ 31%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_jiterator_binary_return_by_ref_cpu_float32 SKIPPED [ 31%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_jiterator_unary_cpu_float32 SKIPPED [ 31%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_kron_cpu_float32 FAILED [ 31%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_kthvalue_cpu_float32 FAILED [ 31%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ldexp_cpu_float32 PASSED [ 32%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_le_cpu_float32 SKIPPED [ 32%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lerp_cpu_float32 PASSED [ 32%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lgamma_cpu_float32 PASSED [ 32%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_cholesky_cpu_float32 XFAIL [ 32%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_cholesky_ex_cpu_float32 FAILED [ 32%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_cond_cpu_float32 FAILED [ 33%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_cross_cpu_float32 FAILED [ 33%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_det_cpu_float32 FAILED [ 33%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_det_singular_cpu_float32 FAILED [ 33%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_eig_cpu_float32 XFAIL [ 33%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_eigh_cpu_float32 FAILED [ 33%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_eigvals_cpu_float32 FAILED [ 33%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_eigvalsh_cpu_float32 FAILED [ 34%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_householder_product_cpu_float32 SKIPPED [ 34%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_inv_cpu_float32 FAILED [ 34%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_inv_ex_cpu_float32 FAILED [ 34%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_ldl_factor_cpu_float32 SKIPPED [ 34%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_ldl_factor_ex_cpu_float32 SKIPPED [ 34%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_ldl_solve_cpu_float32 SKIPPED [ 35%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lstsq_cpu_float32 SKIPPED [ 35%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lstsq_grad_oriented_cpu_float32 SKIPPED [ 35%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lu_cpu_float32 FAILED [ 35%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lu_factor_cpu_float32 FAILED [ 35%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lu_factor_ex_cpu_float32 FAILED [ 35%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lu_solve_cpu_float32 SKIPPED [ 36%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_matrix_norm_cpu_float32 FAILED [ 36%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_matrix_power_cpu_float32 FAILED [ 36%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_matrix_rank_cpu_float32 SKIPPED [ 36%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_matrix_rank_hermitian_cpu_float32 SKIPPED [ 36%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_multi_dot_cpu_float32 FAILED [ 36%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_norm_cpu_float32 FAILED [ 37%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_norm_subgradients_at_zero_cpu_float32 FAILED [ 37%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_pinv_cpu_float32 FAILED [ 37%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_pinv_hermitian_cpu_float32 FAILED [ 37%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_pinv_singular_cpu_float32 SKIPPED [ 37%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_qr_cpu_float32 FAILED [ 37%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_slogdet_cpu_float32 FAILED [ 38%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_solve_cpu_float32 FAILED [ 38%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_solve_ex_cpu_float32 FAILED [ 38%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_solve_triangular_cpu_float32 FAILED [ 38%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_svd_cpu_float32 FAILED [ 38%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_svdvals_cpu_float32 FAILED [ 38%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_tensorinv_cpu_float32 FAILED [ 38%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_tensorsolve_cpu_float32 FAILED [ 39%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_vander_cpu_float32 FAILED [ 39%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_vecdot_cpu_float32 PASSED [ 39%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_vector_norm_cpu_float32 PASSED [ 39%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linspace_cpu_float32 SKIPPED [ 39%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_log10_cpu_float32 PASSED [ 39%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_log1p_cpu_float32 PASSED [ 40%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_log2_cpu_float32 PASSED [ 40%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_log_cpu_float32 PASSED [ 40%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_log_softmax_cpu_float32 PASSED [ 40%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_log_softmax_with_dtype_cpu_float32 PASSED [ 40%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logaddexp2_cpu_float32 FAILED [ 40%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logaddexp_cpu_float32 FAILED [ 41%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logcumsumexp_cpu_float32 FAILED [ 41%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logdet_cpu_float32 FAILED [ 41%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logical_and_cpu_float32 SKIPPED [ 41%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logical_not_cpu_float32 SKIPPED [ 41%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logical_or_cpu_float32 SKIPPED [ 41%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logical_xor_cpu_float32 SKIPPED [ 42%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logit_cpu_float32 PASSED [ 42%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logspace_cpu_float32 SKIPPED [ 42%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logsumexp_cpu_float32 FAILED [ 42%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_long_cpu_float32 SKIPPED [ 42%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lt_cpu_float32 SKIPPED [ 42%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lu_cpu_float32 FAILED [ 43%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lu_solve_cpu_float32 FAILED [ 43%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lu_unpack_cpu_float32 FAILED [ 43%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mH_cpu_float32 FAILED [ 43%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mT_cpu_float32 FAILED [ 43%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_amax_cpu_float32 FAILED [ 43%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_amin_cpu_float32 FAILED [ 44%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_argmax_cpu_float32 SKIPPED [ 44%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_argmin_cpu_float32 SKIPPED [ 44%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_cumprod_cpu_float32 FAILED [ 44%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_cumsum_cpu_float32 FAILED [ 44%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_fill_cpu_float32 FAILED [ 44%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_log_softmax_cpu_float32 PASSED [ 44%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_logaddexp_cpu_float32 FAILED [ 45%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_logsumexp_cpu_float32 FAILED [ 45%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_mean_cpu_float32 PASSED [ 45%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_median_cpu_float32 PASSED [ 45%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_norm_cpu_float32 PASSED [ 45%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_normalize_cpu_float32 PASSED [ 45%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_prod_cpu_float32 FAILED [ 46%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_scatter_cpu_float32 FAILED [ 46%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_select_cpu_float32 SKIPPED [ 46%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_softmax_cpu_float32 PASSED [ 46%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_softmin_cpu_float32 PASSED [ 46%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_std_cpu_float32 PASSED [ 46%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_sum_cpu_float32 PASSED [ 47%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_var_cpu_float32 PASSED [ 47%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_matmul_cpu_float32 FAILED [ 47%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_matrix_exp_cpu_float32 FAILED [ 47%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_max_binary_cpu_float32 PASSED [ 47%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_max_pool2d_with_indices_backward_cpu_float32 SKIPPED [ 47%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_max_reduction_no_dim_cpu_float32 PASSED [ 48%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_max_reduction_with_dim_cpu_float32 PASSED [ 48%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_maximum_cpu_float32 PASSED [ 48%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mean_cpu_float32 PASSED [ 48%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_median_cpu_float32 FAILED [ 48%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_meshgrid_list_of_tensors_cpu_float32 FAILED [ 48%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_meshgrid_variadic_tensors_cpu_float32 FAILED [ 49%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_min_binary_cpu_float32 PASSED [ 49%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_min_reduction_no_dim_cpu_float32 PASSED [ 49%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_min_reduction_with_dim_cpu_float32 FAILED [ 49%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_minimum_cpu_float32 PASSED [ 49%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mm_cpu_float32 PASSED [ 49%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mode_cpu_float32 FAILED [ 50%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_movedim_cpu_float32 FAILED [ 50%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_msort_cpu_float32 PASSED [ 50%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mul_cpu_float32 PASSED [ 50%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_multinomial_cpu_float32 SKIPPED [ 50%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mv_cpu_float32 FAILED [ 50%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mvlgamma_mvlgamma_p_1_cpu_float32 PASSED [ 50%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mvlgamma_mvlgamma_p_3_cpu_float32 PASSED [ 51%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mvlgamma_mvlgamma_p_5_cpu_float32 PASSED [ 51%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nan_to_num_cpu_float32 PASSED [ 51%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nanmean_cpu_float32 PASSED [ 51%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nanmedian_cpu_float32 PASSED [ 51%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nanquantile_cpu_float32 XFAIL [ 51%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nansum_cpu_float32 PASSED [ 52%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_narrow_copy_cpu_float32 SKIPPED [ 52%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_narrow_cpu_float32 XFAIL [ 52%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_native_batch_norm_cpu_float32 PASSED [ 52%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_native_dropout_backward_cpu_float32 PASSED [ 52%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_native_layer_norm_cpu_float32 PASSED [ 52%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ne_cpu_float32 SKIPPED [ 53%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_neg_cpu_float32 PASSED [ 53%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_new_empty_cpu_float32 SKIPPED [ 53%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_new_empty_strided_cpu_float32 SKIPPED [ 53%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_new_full_cpu_float32 SKIPPED [ 53%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_new_ones_cpu_float32 SKIPPED [ 53%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_new_zeros_cpu_float32 SKIPPED [ 54%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nextafter_cpu_float32 SKIPPED [ 54%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional__scaled_dot_product_attention_cpu_float32 FAILED [ 54%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_avg_pool1d_cpu_float32 PASSED [ 54%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_avg_pool2d_cpu_float32 PASSED [ 54%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_avg_pool3d_cpu_float32 FAILED [ 54%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool1d_cpu_float32 FAILED [ 55%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool2d_cpu_float32 FAILED [ 55%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool3d_cpu_float32 FAILED [ 55%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_alpha_dropout_cpu_float32 PASSED [ 55%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_avg_pool1d_cpu_float32 PASSED [ 55%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_avg_pool2d_cpu_float32 PASSED [ 55%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_avg_pool3d_cpu_float32 FAILED [ 55%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_batch_norm_cpu_float32 SKIPPED [ 56%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_bilinear_cpu_float32 FAILED [ 56%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_binary_cross_entropy_cpu_float32 FAILED [ 56%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_binary_cross_entropy_with_logits_cpu_float32 SKIPPED [ 56%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_celu_cpu_float32 PASSED [ 56%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_conv1d_cpu_float32 PASSED [ 56%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_conv2d_cpu_float32 PASSED [ 57%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_conv_transpose1d_cpu_float32 PASSED [ 57%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_conv_transpose2d_cpu_float32 PASSED [ 57%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_conv_transpose3d_cpu_float32 PASSED [ 57%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_cosine_embedding_loss_cpu_float32 PASSED [ 57%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_cosine_similarity_cpu_float32 FAILED [ 57%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_cross_entropy_cpu_float32 FAILED [ 58%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_ctc_loss_cpu_float32 SKIPPED [ 58%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_dropout2d_cpu_float32 PASSED [ 58%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_dropout3d_cpu_float32 FAILED [ 58%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_dropout_cpu_float32 PASSED [ 58%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_elu_cpu_float32 PASSED [ 58%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_embedding_bag_cpu_float32 FAILED [ 59%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_embedding_cpu_float32 PASSED [ 59%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_feature_alpha_dropout_with_train_cpu_float32 PASSED [ 59%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_feature_alpha_dropout_without_train_cpu_float32 PASSED [ 59%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_fractional_max_pool2d_cpu_float32 FAILED [ 59%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_fractional_max_pool3d_cpu_float32 FAILED [ 59%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_gaussian_nll_loss_cpu_float32 XFAIL [ 60%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_gelu_cpu_float32 PASSED [ 60%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_glu_cpu_float32 PASSED [ 60%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_grid_sample_cpu_float32 FAILED [ 60%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_group_norm_cpu_float32 FAILED [ 60%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_hardshrink_cpu_float32 PASSED [ 60%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_hardsigmoid_cpu_float32 PASSED [ 61%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_hardswish_cpu_float32 PASSED [ 61%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_hardtanh_cpu_float32 PASSED [ 61%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_hinge_embedding_loss_cpu_float32 PASSED [ 61%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_huber_loss_cpu_float32 PASSED [ 61%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_instance_norm_cpu_float32 PASSED [ 61%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_area_cpu_float32 FAILED [ 61%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_bicubic_cpu_float32 FAILED [ 62%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_bilinear_cpu_float32 PASSED [ 62%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_linear_cpu_float32 FAILED [ 62%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_nearest_cpu_float32 FAILED [ 62%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_trilinear_cpu_float32 FAILED [ 62%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_kl_div_cpu_float32 PASSED [ 62%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_l1_loss_cpu_float32 PASSED [ 63%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_layer_norm_cpu_float32 PASSED [ 63%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_leaky_relu_cpu_float32 PASSED [ 63%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_linear_cpu_float32 PASSED [ 63%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_local_response_norm_cpu_float32 PASSED [ 63%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_logsigmoid_cpu_float32 PASSED [ 63%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_margin_ranking_loss_cpu_float32 SKIPPED [ 64%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_pool1d_cpu_float32 FAILED [ 64%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_pool2d_cpu_float32 PASSED [ 64%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_pool3d_cpu_float32 FAILED [ 64%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool1d_cpu_float32 FAILED [ 64%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool1d_grad_cpu_float32 FAILED [ 64%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool2d_cpu_float32 FAILED [ 65%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool2d_grad_cpu_float32 FAILED [ 65%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool3d_cpu_float32 FAILED [ 65%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool3d_grad_cpu_float32 FAILED [ 65%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_mish_cpu_float32 PASSED [ 65%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_mse_loss_cpu_float32 PASSED [ 65%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_multi_margin_loss_cpu_float32 FAILED [ 66%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_multilabel_margin_loss_cpu_float32 FAILED [ 66%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_multilabel_soft_margin_loss_cpu_float32 PASSED [ 66%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_nll_loss_cpu_float32 FAILED [ 66%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_normalize_cpu_float32 PASSED [ 66%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pad_circular_cpu_float32 PASSED [ 66%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pad_constant_cpu_float32 PASSED [ 66%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pad_reflect_cpu_float32 FAILED [ 67%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pad_replicate_cpu_float32 FAILED [ 67%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pairwise_distance_cpu_float32 PASSED [ 67%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pdist_cpu_float32 FAILED [ 67%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pixel_shuffle_cpu_float32 FAILED [ 67%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pixel_unshuffle_cpu_float32 FAILED [ 67%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_poisson_nll_loss_cpu_float32 PASSED [ 68%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_prelu_cpu_float32 FAILED [ 68%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_relu6_cpu_float32 PASSED [ 68%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_relu_cpu_float32 PASSED [ 68%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_rrelu_cpu_float32 FAILED [ 68%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_selu_cpu_float32 PASSED [ 68%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_silu_cpu_float32 PASSED [ 69%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_smooth_l1_loss_cpu_float32 FAILED [ 69%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_soft_margin_loss_cpu_float32 PASSED [ 69%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_softmin_cpu_float32 PASSED [ 69%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_softmin_with_dtype_cpu_float32 PASSED [ 69%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_softplus_cpu_float32 PASSED [ 69%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_softshrink_cpu_float32 PASSED [ 70%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_softsign_cpu_float32 PASSED [ 70%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_tanhshrink_cpu_float32 PASSED [ 70%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_threshold_cpu_float32 PASSED [ 70%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_triplet_margin_loss_cpu_float32 PASSED [ 70%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_triplet_margin_with_distance_loss_cpu_float32 PASSED [ 70%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_unfold_cpu_float32 PASSED [ 71%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_upsample_bilinear_cpu_float32 PASSED [ 71%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_upsample_nearest_cpu_float32 FAILED [ 71%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nonzero_cpu_float32 SKIPPED [ 71%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_norm_cpu_float32 PASSED [ 71%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_norm_fro_cpu_float32 PASSED [ 71%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_norm_inf_cpu_float32 PASSED [ 72%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_norm_nuc_cpu_float32 FAILED [ 72%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_normal_cpu_float32 FAILED [ 72%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_normal_number_mean_cpu_float32 FAILED [ 72%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ones_cpu_float32 SKIPPED [ 72%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ones_like_cpu_float32 SKIPPED [ 72%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ormqr_cpu_float32 FAILED [ 72%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_outer_cpu_float32 FAILED [ 73%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_pca_lowrank_cpu_float32 FAILED [ 73%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_permute_cpu_float32 FAILED [ 73%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_pinverse_cpu_float32 FAILED [ 73%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polar_cpu_float32 FAILED [ 73%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_0_cpu_float32 FAILED [ 73%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_1_cpu_float32 FAILED [ 74%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_2_cpu_float32 FAILED [ 74%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_3_cpu_float32 FAILED [ 74%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_4_cpu_float32 FAILED [ 74%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_positive_cpu_float32 PASSED [ 74%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_pow_cpu_float32 PASSED [ 74%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_prod_cpu_float32 FAILED [ 75%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_put_cpu_float32 FAILED [ 75%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_qr_cpu_float32 FAILED [ 75%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_quantile_cpu_float32 XFAIL [ 75%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_rad2deg_cpu_float32 PASSED [ 75%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_rand_like_cpu_float32 SKIPPED [ 75%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_randint_cpu_float32 SKIPPED [ 76%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_randint_like_cpu_float32 SKIPPED [ 76%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_randn_cpu_float32 SKIPPED [ 76%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_randn_like_cpu_float32 SKIPPED [ 76%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ravel_cpu_float32 FAILED [ 76%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_real_cpu_float32 PASSED [ 76%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_reciprocal_cpu_float32 PASSED [ 77%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_remainder_cpu_float32 PASSED [ 77%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_renorm_cpu_float32 FAILED [ 77%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_repeat_cpu_float32 PASSED [ 77%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_repeat_interleave_cpu_float32 SKIPPED [ 77%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_reshape_as_cpu_float32 FAILED [ 77%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_reshape_cpu_float32 FAILED [ 77%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_resize__cpu_float32 SKIPPED [ 78%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_resize_as__cpu_float32 SKIPPED [ 78%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_resolve_conj_cpu_float32 PASSED [ 78%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_resolve_neg_cpu_float32 PASSED [ 78%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_roll_cpu_float32 FAILED [ 78%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_rot90_cpu_float32 PASSED [ 78%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_round_cpu_float32 PASSED [ 79%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_round_decimals_0_cpu_float32 PASSED [ 79%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_round_decimals_3_cpu_float32 PASSED [ 79%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_round_decimals_neg_3_cpu_float32 PASSED [ 79%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_rsqrt_cpu_float32 PASSED [ 79%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_rsub_cpu_float32 PASSED [ 79%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_scalar_tensor_cpu_float32 SKIPPED [ 80%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_scatter_add_cpu_float32 PASSED [ 80%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_scatter_cpu_float32 PASSED [ 80%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_scatter_reduce_amax_cpu_float32 PASSED [ 80%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_scatter_reduce_amin_cpu_float32 PASSED [ 80%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_scatter_reduce_mean_cpu_float32 PASSED [ 80%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_scatter_reduce_prod_cpu_float32 XFAIL [ 81%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_scatter_reduce_sum_cpu_float32 PASSED [ 81%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_searchsorted_cpu_float32 SKIPPED [ 81%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_segment_reduce_lengths_cpu_float32 FAILED [ 81%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_segment_reduce_offsets_cpu_float32 FAILED [ 81%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_select_cpu_float32 FAILED [ 81%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_select_scatter_cpu_float32 PASSED [ 82%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sgn_cpu_float32 FAILED [ 82%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_short_cpu_float32 SKIPPED [ 82%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sigmoid_cpu_float32 PASSED [ 82%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sign_cpu_float32 PASSED [ 82%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_signal_windows_cosine_cpu_float32 SKIPPED [ 82%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_signal_windows_exponential_cpu_float32 SKIPPED [ 83%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_signal_windows_gaussian_cpu_float32 SKIPPED [ 83%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_signal_windows_kaiser_cpu_float32 SKIPPED [ 83%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_signbit_cpu_float32 SKIPPED [ 83%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sin_cpu_float32 PASSED [ 83%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sinc_cpu_float32 PASSED [ 83%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sinh_cpu_float32 PASSED [ 83%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_slice_cpu_float32 FAILED [ 84%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_slice_scatter_cpu_float32 PASSED [ 84%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_softmax_cpu_float32 PASSED [ 84%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_softmax_with_dtype_cpu_float32 PASSED [ 84%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sort_cpu_float32 PASSED [ 84%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sparse_sampled_addmm_cpu_float32 XFAIL [ 84%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_airy_ai_cpu_float32 SKIPPED [ 85%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_bessel_j0_cpu_float32 SKIPPED [ 85%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_bessel_j1_cpu_float32 SKIPPED [ 85%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_bessel_y0_cpu_float32 SKIPPED [ 85%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_bessel_y1_cpu_float32 SKIPPED [ 85%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_chebyshev_polynomial_t_cpu_float32 SKIPPED [ 85%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_chebyshev_polynomial_u_cpu_float32 SKIPPED [ 86%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_chebyshev_polynomial_v_cpu_float32 SKIPPED [ 86%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_chebyshev_polynomial_w_cpu_float32 SKIPPED [ 86%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_entr_cpu_float32 PASSED [ 86%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_erfcx_cpu_float32 PASSED [ 86%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_hermite_polynomial_h_cpu_float32 SKIPPED [ 86%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_hermite_polynomial_he_cpu_float32 SKIPPED [ 87%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_i0e_cpu_float32 PASSED [ 87%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_i1_cpu_float32 FAILED [ 87%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_i1e_cpu_float32 PASSED [ 87%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_laguerre_polynomial_l_cpu_float32 SKIPPED [ 87%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_legendre_polynomial_p_cpu_float32 SKIPPED [ 87%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_log_ndtr_cpu_float32 PASSED [ 88%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_modified_bessel_i0_cpu_float32 SKIPPED [ 88%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_modified_bessel_i1_cpu_float32 SKIPPED [ 88%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_modified_bessel_k0_cpu_float32 SKIPPED [ 88%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_modified_bessel_k1_cpu_float32 SKIPPED [ 88%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_ndtr_cpu_float32 PASSED [ 88%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_ndtri_cpu_float32 PASSED [ 88%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_polygamma_special_polygamma_n_0_cpu_float32 FAILED [ 89%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_scaled_modified_bessel_k0_cpu_float32 SKIPPED [ 89%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_scaled_modified_bessel_k1_cpu_float32 SKIPPED [ 89%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_shifted_chebyshev_polynomial_t_cpu_float32 SKIPPED [ 89%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_shifted_chebyshev_polynomial_u_cpu_float32 SKIPPED [ 89%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_shifted_chebyshev_polynomial_v_cpu_float32 SKIPPED [ 89%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_shifted_chebyshev_polynomial_w_cpu_float32 SKIPPED [ 90%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_spherical_bessel_j0_cpu_float32 SKIPPED [ 90%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_xlog1py_cpu_float32 PASSED [ 90%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_zeta_cpu_float32 SKIPPED [ 90%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_split_cpu_float32 FAILED [ 90%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_split_list_args_cpu_float32 FAILED [ 90%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_split_with_sizes_cpu_float32 FAILED [ 91%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sqrt_cpu_float32 PASSED [ 91%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_square_cpu_float32 PASSED [ 91%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_squeeze_cpu_float32 FAILED [ 91%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_stack_cpu_float32 PASSED [ 91%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_std_cpu_float32 FAILED [ 91%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_std_mean_cpu_float32 FAILED [ 92%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_stft_cpu_float32 FAILED [ 92%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sub_cpu_float32 PASSED [ 92%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sum_cpu_float32 PASSED [ 92%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sum_to_size_cpu_float32 FAILED [ 92%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_svd_cpu_float32 FAILED [ 92%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_svd_lowrank_cpu_float32 FAILED [ 93%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_symeig_cpu_float32 FAILED [ 93%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_t_cpu_float32 FAILED [ 93%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_take_along_dim_cpu_float32 FAILED [ 93%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_take_cpu_float32 FAILED [ 93%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_tan_cpu_float32 PASSED [ 93%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_tanh_cpu_float32 PASSED [ 94%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_tensor_split_cpu_float32 XFAIL [ 94%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_tensordot_cpu_float32 FAILED [ 94%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_tile_cpu_float32 PASSED [ 94%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_to_cpu_float32 FAILED [ 94%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_to_sparse_cpu_float32 XFAIL [ 94%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_topk_cpu_float32 PASSED [ 94%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_trace_cpu_float32 FAILED [ 95%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_transpose_cpu_float32 FAILED [ 95%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_trapezoid_cpu_float32 FAILED [ 95%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_trapz_cpu_float32 FAILED [ 95%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_triangular_solve_cpu_float32 FAILED [ 95%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_tril_cpu_float32 PASSED [ 95%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_triu_cpu_float32 PASSED [ 96%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_true_divide_cpu_float32 PASSED [ 96%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_trunc_cpu_float32 PASSED [ 96%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unbind_cpu_float32 FAILED [ 96%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unflatten_cpu_float32 FAILED [ 96%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unfold_copy_cpu_float32 PASSED [ 96%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unfold_cpu_float32 FAILED [ 97%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_uniform_cpu_float32 SKIPPED [ 97%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unique_consecutive_cpu_float32 SKIPPED [ 97%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unique_cpu_float32 SKIPPED [ 97%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unsqueeze_cpu_float32 FAILED [ 97%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_var_cpu_float32 FAILED [ 97%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_var_mean_cpu_float32 FAILED [ 98%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_vdot_cpu_float32 PASSED [ 98%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_view_as_complex_cpu_float32 FAILED [ 98%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_view_as_cpu_float32 FAILED [ 98%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_view_copy_cpu_float32 PASSED [ 98%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_view_cpu_float32 FAILED [ 98%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_vsplit_cpu_float32 FAILED [ 99%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_vstack_cpu_float32 PASSED [ 99%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_where_cpu_float32 PASSED [ 99%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_xlogy_cpu_float32 PASSED [ 99%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_zero__cpu_float32 PASSED [ 99%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_zeros_cpu_float32 SKIPPED [ 99%]
test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_zeros_like_cpu_float32 SKIPPED [100%]
=================================== FAILURES ===================================
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_H_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_T_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive___getitem___cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive___rmatmul___cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0], [1], primals_2: f32[s1, s0], [s0, 1], tangents_1: f32[s1], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/torch/_decomp/decompositions.py:2276, code: return torch.mv(tensor1, tensor2)
mv: f32[s1], [1] = torch.ops.aten.mv.default(primals_2, primals_1); primals_2 = primals_1 = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(mv, tangents_1); mv = tangents_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_addmv_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_addr_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 845, in __torch_dispatch__
return decomposition_table[func](*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_prims_common/wrappers.py", line 209, in _fn
result = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_prims_common/wrappers.py", line 119, in _fn
result = fn(**bound.arguments)
File "/data/users/ezyang/b/pytorch/torch/_refs/__init__.py", line 2444, in addr
return beta * self + alpha * torch.outer(vec1, vec2)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 849, in __torch_dispatch__
r = func.decompose(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 319, in decompose
return self._op_dk(dk, *args, **kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_amax_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s1], [s1, 1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
amax: f32[s0], [1] = torch.ops.aten.amax.default(primals_1, [-1]); primals_1 = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(amax, tangents_1); amax = tangents_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_amin_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s1], [s1, 1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
amin: f32[s0], [1] = torch.ops.aten.amin.default(primals_1, [-1]); primals_1 = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(amin, tangents_1); amin = tangents_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_atleast_1d_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_atleast_2d_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_atleast_3d_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_baddbmm_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_block_diag_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/functional.py", line 1173, in block_diag
return torch._C._VariableFunctions.block_diag(tensors) # type: ignore[attr-defined]
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_broadcast_tensors_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_broadcast_to_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cartesian_prod_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/functional.py", line 1137, in cartesian_prod
return _VF.cartesian_prod(tensors) # type: ignore[attr-defined]
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cdist_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/functional.py", line 1224, in cdist
return _VF.cdist(x1, x2, p, 1) # type: ignore[attr-defined]
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cholesky_inverse_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.cholesky_inverse.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cholesky_solve_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_chunk_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_column_stack_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_combinations_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cross_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cummax_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cummin_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cumprod_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cumsum_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s0, s0], [s0**2, s0, 1], tangents_1: f32[s0, s0, s0], [s0**2, s0, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
cumsum: f32[s0, s0, s0], [s0**2, s0, 1] = torch.ops.aten.cumsum.default(primals_1, 0); primals_1 = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(cumsum, tangents_1); cumsum = tangents_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_cumulative_trapezoid_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s1], [s1, 1], primals_2: f32[s0, s1], [s1, 1], tangents_1: f32[s0, s1 - 1], [s1 - 1, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
slice_1: f32[s0, s1 - 1], [s1, 1] = torch.ops.aten.slice.Tensor(primals_2, 1, 0, -1)
slice_2: f32[s0, s1 - 1], [s1, 1] = torch.ops.aten.slice.Tensor(primals_2, 1, 1); primals_2 = None
sub: f32[s0, s1 - 1], [s1 - 1, 1] = torch.ops.aten.sub.Tensor(slice_2, slice_1); slice_2 = slice_1 = None
slice_3: f32[s0, s1 - 1], [s1, 1] = torch.ops.aten.slice.Tensor(primals_1, 1, 0, -1)
slice_4: f32[s0, s1 - 1], [s1, 1] = torch.ops.aten.slice.Tensor(primals_1, 1, 1); primals_1 = None
add: f32[s0, s1 - 1], [s1 - 1, 1] = torch.ops.aten.add.Tensor(slice_3, slice_4); slice_3 = slice_4 = None
mul: f32[s0, s1 - 1], [s1 - 1, 1] = torch.ops.aten.mul.Tensor(add, sub); add = sub = None
cumsum: f32[s0, s1 - 1], [s1 - 1, 1] = torch.ops.aten.cumsum.default(mul, 1); mul = None
div: f32[s0, s1 - 1], [s1 - 1, 1] = torch.ops.aten.div.Scalar(cumsum, 2.0); cumsum = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(div, tangents_1); div = None
# No stacktrace found for following nodes
_tensor_constant0 = self._tensor_constant0
# Gradient addition node due to multiple use of tensor around:
lift_fresh_copy: f64[], [] = torch.ops.aten.lift_fresh_copy.default(_tensor_constant0); _tensor_constant0 = None
div_1: f32[s0, s1 - 1], [s1 - 1, 1] = torch.ops.aten.div.Tensor(tangents_1, lift_fresh_copy); tangents_1 = lift_fresh_copy = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_diagonal_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_diff_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_digamma_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 485, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 510, in inner_torch_dispatch
out = proxy_call(self, func, args, kwargs)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 341, in proxy_call
out = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0], [1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
digamma: f32[s0], [1] = torch.ops.aten.digamma.default(primals_1)
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(digamma, tangents_1); digamma = tangents_1 = None
polygamma = torch.ops.aten.polygamma.default(1, primals_1); primals_1 = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_dsplit_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_expand_as_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_expand_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_fft2_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_fft_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_fftn_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_fftshift_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_hfft2_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_hfft_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_hfftn_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_ifft2_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_ifft_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_ifftn_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_ifftshift_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_ihfft2_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_ihfft_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_ihfftn_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_irfft2_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_irfft_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_irfftn_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_rfft2_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_rfft_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_fft_rfftn_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_flatten_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_frexp_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.frexp.Tensor_out at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_gradient_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_hsplit_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_i0_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_inner_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_kron_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_kthvalue_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.kthvalue.values at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_cholesky_ex_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_cond_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_cross_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_det_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_det_singular_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_eigh_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_eigvals_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.linalg_eig.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_eigvalsh_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_inv_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_inv_ex_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_lu_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_lu_factor_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_lu_factor_ex_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_matrix_norm_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_matrix_power_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_multi_dot_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_norm_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_norm_subgradients_at_zero_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_pinv_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_pinv_hermitian_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_qr_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_slogdet_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_solve_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_solve_ex_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_solve_triangular_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.linalg_solve_triangular.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_svd_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_svdvals_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_tensorinv_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_tensorsolve_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_linalg_vander_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_logaddexp2_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_logaddexp_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_logcumsumexp_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten._logcumsumexp.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_logdet_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_logsumexp_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s0], [s0, 1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
logsumexp: f32[s0], [1] = torch.ops.aten.logsumexp.default(primals_1, [1]); primals_1 = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(logsumexp, tangents_1); logsumexp = tangents_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_lu_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_jit_internal.py", line 483, in fn
return if_true(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/functional.py", line 1711, in _lu_with_infos
result = _lu_impl(A, pivot, get_infos, out)
File "/data/users/ezyang/b/pytorch/torch/functional.py", line 1690, in _lu_impl
return torch._lu_with_info(A, pivot=pivot, check_errors=(not get_infos))
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/functional.py:1690: UserWarning: torch.lu is deprecated in favor of torch.linalg.lu_factor / torch.linalg.lu_factor_ex and will be removed in a future PyTorch release.
LU, pivots = torch.lu(A, compute_pivots)
should be replaced with
LU, pivots = torch.linalg.lu_factor(A, compute_pivots)
and
LU, pivots, info = torch.lu(A, compute_pivots, get_infos=True)
should be replaced with
LU, pivots, info = torch.linalg.lu_factor_ex(A, compute_pivots) (Triggered internally at /data/users/ezyang/b/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp:2028.)
return torch._lu_with_info(A, pivot=pivot, check_errors=(not get_infos))
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_lu_solve_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566: UserWarning: torch.lu_solve is deprecated in favor of torch.linalg.lu_solve and will be removed in a future PyTorch release.
Note that torch.linalg.lu_solve has its arguments reversed.
X = torch.lu_solve(B, LU, pivots)
should be replaced with
X = torch.linalg.lu_solve(LU, pivots, B) (Triggered internally at /data/users/ezyang/b/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp:2182.)
return op.op(*c_args, **c_kwargs)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_lu_unpack_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_mH_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_mT_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_amax_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s1], [s1, 1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/torch/masked/_ops.py:1228, code: return torch.amax(mask_input, dim_, bool(keepdim)).to(dtype=dtype)
amax: f32[s0], [1] = torch.ops.aten.amax.default(primals_1, [1]); primals_1 = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(amax, tangents_1); amax = tangents_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_amin_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s1], [s1, 1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/torch/masked/_ops.py:1278, code: return torch.amin(mask_input, dim_, bool(keepdim)).to(dtype=dtype)
amin: f32[s0], [1] = torch.ops.aten.amin.default(primals_1, [1]); primals_1 = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(amin, tangents_1); amin = tangents_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_cumprod_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/masked/_ops.py", line 1196, in cumprod
return torch.cumprod(mask_input, dim_, dtype=dtype).to(dtype=dtype)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_cumsum_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0], [1], primals_2: b8[s0], [1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# No stacktrace found for following nodes
_tensor_constant0 = self._tensor_constant0
# File: /data/users/ezyang/b/pytorch/torch/masked/_ops.py:417, code: return torch.tensor(0, dtype=dtype, device=device)
lift_fresh_copy: f32[], [] = torch.ops.aten.lift_fresh_copy.default(_tensor_constant0); _tensor_constant0 = None
# File: /data/users/ezyang/b/pytorch/torch/masked/_ops.py:849, code: return torch.where(mask, input, fill_value)
where: f32[s0], [1] = torch.ops.aten.where.self(primals_2, primals_1, lift_fresh_copy); primals_2 = primals_1 = lift_fresh_copy = None
# File: /data/users/ezyang/b/pytorch/torch/masked/_ops.py:1176, code: return torch.cumsum(mask_input, dim_, dtype=dtype).to(dtype=dtype)
cumsum: f32[s0], [1] = torch.ops.aten.cumsum.default(where, 0, dtype = torch.float32); where = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(cumsum, tangents_1); cumsum = tangents_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_fill_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 845, in __torch_dispatch__
return decomposition_table[func](*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_refs/__init__.py", line 4710, in masked_fill
value = value.item()
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_logaddexp_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/masked/_ops.py", line 1496, in logaddexp
return torch.logaddexp(mask_input, mask_other).to(dtype=dtype)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_logsumexp_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0], [1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/torch/masked/_ops.py:1474, code: return torch.logsumexp(mask_input, dim_, keepdim=keepdim).to(dtype=dtype)
logsumexp: f32[], [] = torch.ops.aten.logsumexp.default(primals_1, [0]); primals_1 = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(logsumexp, tangents_1); logsumexp = tangents_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_prod_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0], [1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/torch/masked/_ops.py:1128, code: result = result.prod(dim=d, keepdim=bool(keepdim))
prod: f32[], [] = torch.ops.aten.prod.dim_int(primals_1, 0); primals_1 = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(prod, tangents_1); prod = tangents_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_masked_scatter_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_matmul_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s1], [s1, 1], primals_2: f32[s1], [1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/torch/_decomp/decompositions.py:2276, code: return torch.mv(tensor1, tensor2)
mv: f32[s0], [1] = torch.ops.aten.mv.default(primals_1, primals_2); primals_1 = primals_2 = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(mv, tangents_1); mv = tangents_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_matrix_exp_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.linalg_matrix_exp.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_median_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.median.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_meshgrid_list_of_tensors_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/functional.py", line 489, in meshgrid
return _meshgrid(*tensors, indexing=indexing)
File "/data/users/ezyang/b/pytorch/torch/functional.py", line 504, in _meshgrid
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_meshgrid_variadic_tensors_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/functional.py", line 489, in meshgrid
return _meshgrid(*tensors, indexing=indexing)
File "/data/users/ezyang/b/pytorch/torch/functional.py", line 504, in _meshgrid
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_min_reduction_with_dim_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_mode_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.mode.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_movedim_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_mv_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s1], [s1, 1], primals_2: f32[s1], [1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
mv: f32[s0], [1] = torch.ops.aten.mv.default(primals_1, primals_2); primals_1 = primals_2 = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(mv, tangents_1); mv = tangents_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional__scaled_dot_product_attention_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 12000, in <lambda>
wrapper_set_seed(torch.nn.functional._scaled_dot_product_attention, *args, **kwargs),
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_avg_pool3d_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten._adaptive_avg_pool3d_backward.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 485, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 510, in inner_torch_dispatch
out = proxy_call(self, func, args, kwargs)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 341, in proxy_call
out = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[0, s0, s0, s0, s0], [s0**4, s0**3, s0**2, s0, 1], tangents_1: f32[0, s0, 5, 7, 4], [140*s0, 140, 28, 4, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/torch/nn/functional.py:1231, code: return torch._C._nn.adaptive_avg_pool3d(input, _output_size)
_adaptive_avg_pool3d: f32[0, s0, 5, 7, 4], [140*s0, 140, 28, 4, 1] = torch.ops.aten._adaptive_avg_pool3d.default(primals_1, [5, 7, 4])
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(_adaptive_avg_pool3d, tangents_1); _adaptive_avg_pool3d = None
_adaptive_avg_pool3d_backward = torch.ops.aten._adaptive_avg_pool3d_backward.default(tangents_1, primals_1); tangents_1 = primals_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool1d_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_jit_internal.py", line 483, in fn
return if_true(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 1080, in adaptive_max_pool1d_with_indices
return torch.adaptive_max_pool1d(input, output_size)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool2d_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_jit_internal.py", line 483, in fn
return if_true(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 1121, in adaptive_max_pool2d_with_indices
return torch._C._nn.adaptive_max_pool2d(input, output_size)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool3d_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_jit_internal.py", line 483, in fn
return if_true(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 1162, in adaptive_max_pool3d_with_indices
return torch._C._nn.adaptive_max_pool3d(input, output_size)
TypeError: adaptive_max_pool3d(): argument 'output_size' (position 2) must be tuple of ints, but found element of type SymInt at pos 1
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_avg_pool3d_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_bilinear_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_binary_cross_entropy_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[], [], primals_2: f32[], [], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/torch/nn/functional.py:3099, code: return torch._C._nn.binary_cross_entropy(input, target, weight, reduction_enum)
binary_cross_entropy: f32[], [] = torch.ops.aten.binary_cross_entropy.default(primals_1, primals_2)
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(binary_cross_entropy, tangents_1); binary_cross_entropy = None
binary_cross_entropy_backward: f32[], [] = torch.ops.aten.binary_cross_entropy_backward.default(tangents_1, primals_1, primals_2, None); primals_2 = None
empty_like: f32[], [] = torch.ops.aten.empty_like.default(primals_1, memory_format = torch.preserve_format)
fill: f32[], [] = torch.ops.aten.fill.Scalar(empty_like, 1.0); empty_like = None
sub: f32[], [] = torch.ops.aten.sub.Tensor(fill, primals_1); fill = None
log: f32[], [] = torch.ops.aten.log.default(sub); sub = None
log_1: f32[], [] = torch.ops.aten.log.default(primals_1); primals_1 = None
sub_1: f32[], [] = torch.ops.aten.sub.Tensor(log, log_1); log = log_1 = None
mul: f32[], [] = torch.ops.aten.mul.Tensor(sub_1, tangents_1); sub_1 = tangents_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_cosine_similarity_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_cross_entropy_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 3030, in cross_entropy
return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_dropout3d_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_embedding_bag_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten._embedding_bag_per_sample_weights_backward.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 485, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 510, in inner_torch_dispatch
out = proxy_call(self, func, args, kwargs)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 341, in proxy_call
out = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s1], [s1, 1], primals_2: i64[s1], [1], primals_3: i64[s2], [1], primals_4: f32[s1], [1], tangents_1: f32[s2, s1], [s1, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/torch/nn/functional.py:2394, code: ret, _, _, _ = torch.embedding_bag(
_embedding_bag = torch.ops.aten._embedding_bag.default(primals_1, primals_2, primals_3, False, 0, False, primals_4); primals_4 = None
getitem: f32[s2, s1], [s1, 1] = _embedding_bag[0]
getitem_1: i64[0], [1] = _embedding_bag[1]
getitem_2: i64[s2], [1] = _embedding_bag[2]
getitem_3: i64[s2], [1] = _embedding_bag[3]; _embedding_bag = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(getitem, tangents_1); getitem = None
_embedding_bag_per_sample_weights_backward = torch.ops.aten._embedding_bag_per_sample_weights_backward.default(tangents_1, primals_1, primals_2, primals_3, getitem_1, 0); tangents_1 = primals_1 = primals_2 = primals_3 = getitem_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_fractional_max_pool2d_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 11577, in <lambda>
wrapper_set_seed(torch.nn.functional.fractional_max_pool2d, input, *args, **kwargs),
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_jit_internal.py", line 485, in fn
return if_false(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 503, in _fractional_max_pool2d
return fractional_max_pool2d_with_indices(
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 481, in fractional_max_pool2d_with_indices
_random_samples = torch.rand(n_batch, input.size(-3), 2, dtype=input.dtype, device=input.device)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 867, in __torch_dispatch__
op_impl_out = op_impl(self, func, *args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 306, in constructors
r = func(*args, **new_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: SymIntArrayRef expected to contain only concrete integers
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_fractional_max_pool3d_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 11598, in <lambda>
wrapper_set_seed(torch.nn.functional.fractional_max_pool3d, input, *args, **kwargs),
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_jit_internal.py", line 485, in fn
return if_false(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 608, in _fractional_max_pool3d
return fractional_max_pool3d_with_indices(
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 586, in fractional_max_pool3d_with_indices
_random_samples = torch.rand(n_batch, input.size(-4), 3, dtype=input.dtype, device=input.device)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 867, in __torch_dispatch__
op_impl_out = op_impl(self, func, *args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 306, in constructors
r = func(*args, **new_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: SymIntArrayRef expected to contain only concrete integers
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_grid_sample_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.grid_sampler_3d.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 4243, in grid_sample
return torch.grid_sampler(input, grid, mode_enum, padding_mode_enum, align_corners)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
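The `run_fallback_kernel` path in the grid_sample traceback above maps every fake tensor argument to a real zero tensor via `tree_map`, and that leaf-level conversion is where the symbolic strides blow up. As a rough, hypothetical sketch of the pytree-map pattern only (this is not PyTorch's actual `torch.utils._pytree` implementation):

```python
# Simplified sketch of the pytree map pattern used by run_fallback_kernel:
# apply a function to every leaf of a nested structure, preserving shape.
# Mirrors torch.utils._pytree.tree_map in spirit only (hypothetical code).

def tree_map(fn, tree):
    if isinstance(tree, (list, tuple)):
        return type(tree)(tree_map(fn, t) for t in tree)
    if isinstance(tree, dict):
        return {k: tree_map(fn, v) for k, v in tree.items()}
    # Leaf: in the fallback kernel this is where to_real_tensor would run,
    # which fails if the leaf has symbolic sizes/strides.
    return fn(tree)

print(tree_map(lambda x: x * 2, {"a": [1, 2], "b": (3,)}))
```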
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_group_norm_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 485, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 510, in inner_torch_dispatch
out = proxy_call(self, func, args, kwargs)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 341, in proxy_call
out = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 845, in __torch_dispatch__
return decomposition_table[func](*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_decomp/decompositions.py", line 69, in inner
r = f(*tree_map(increase_prec, args), **tree_map(increase_prec, kwargs))
File "/data/users/ezyang/b/pytorch/torch/_decomp/decompositions.py", line 1175, in native_group_norm_backward
cpg, _rem = divmod(C, group)
TypeError: unsupported operand type(s) for divmod(): 'SymInt' and 'int'
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[1, s0, s1], [s0*s1, s1, 1], primals_2: f32[s0], [1], primals_3: f32[s0], [1], tangents_1: f32[1, 2*s0//2, s1], [2*s1*s0//2, s1, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/torch/utils/_pytree.py:244, code: return f(x)
sym_size: Sym(1) = torch.ops.aten.sym_size(primals_1, 0)
sym_size_1: Sym(s0) = torch.ops.aten.sym_size(primals_1, 1)
sym_size_2: Sym(s1) = torch.ops.aten.sym_size(primals_1, 2)
# File: /data/users/ezyang/b/pytorch/torch/nn/functional.py:2532, code: return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
native_group_norm = torch.ops.aten.native_group_norm.default(primals_1, primals_2, primals_3, sym_size, sym_size_1, sym_size_2, 2, 0.5); primals_3 = None
getitem: f32[1, 2*s0//2, s1], [2*s1*s0//2, s1, 1] = native_group_norm[0]
getitem_1: f32[1, 2], [2, 1] = native_group_norm[1]
getitem_2: f32[1, 2], [2, 1] = native_group_norm[2]; native_group_norm = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(getitem, tangents_1); getitem = None
native_group_norm_backward = torch.ops.aten.native_group_norm_backward.default(tangents_1, primals_1, getitem_1, getitem_2, primals_2, sym_size, sym_size_1, sym_size_2, 2, [True, True, True]); tangents_1 = primals_1 = getitem_1 = getitem_2 = primals_2 = sym_size = sym_size_1 = sym_size_2 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
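The `divmod` failure above is a plain Python protocol issue: `divmod(C, group)` requires `C.__divmod__` (or the right-hand `__rdivmod__`), and it does not fall back to `__floordiv__`/`__mod__`. A minimal stand-in class (hypothetical, not the real `SymInt`) reproduces the same `TypeError` and shows the obvious workaround of computing the pair explicitly:

```python
class FakeSymInt:
    """Int-like wrapper with floor-div and mod but no __divmod__,
    mimicking the SymInt behavior seen in the traceback above
    (hypothetical stand-in, not torch.SymInt)."""
    def __init__(self, v):
        self.v = v
    def __floordiv__(self, other):
        return FakeSymInt(self.v // other)
    def __mod__(self, other):
        return FakeSymInt(self.v % other)

c = FakeSymInt(6)
try:
    divmod(c, 2)  # no __divmod__/__rdivmod__ -> TypeError, as in the log
except TypeError as e:
    print("divmod failed:", e)

# Workaround a decomposition could use: compute quotient and remainder
# with the operators SymInt does support.
cpg, rem = c // 2, c % 2
print(cpg.v, rem.v)  # → 3 0
```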
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_area_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 3943, in interpolate
return adaptive_avg_pool1d(input, output_size)
TypeError: adaptive_avg_pool1d(): argument 'output_size' (position 2) must be tuple of ints, but found element of type SymInt at pos 0
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
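The `adaptive_avg_pool1d` rejection above comes from a strict type check at the binding boundary: each `output_size` element must be an actual Python `int`, and an int-like wrapper such as `SymInt` is rejected before the op ever runs. A hypothetical pure-Python analogue of that kind of strict check (assumed behavior, not the real torch argument parser):

```python
def strict_int_tuple(values):
    """Hypothetical sketch of a strict argument check: every element must
    be a real int, rejecting int-like wrappers (as the error message in
    the traceback above suggests happens for SymInt)."""
    for i, v in enumerate(values):
        if not isinstance(v, int) or isinstance(v, bool):
            raise TypeError(
                f"argument 'output_size' must be tuple of ints, "
                f"but found element of type {type(v).__name__} at pos {i}")
    return tuple(values)

class IntLike:
    """Implements __index__ but is not an int subclass."""
    def __index__(self):
        return 4

print(strict_int_tuple((2, 3)))      # → (2, 3)
try:
    strict_int_tuple((IntLike(),))   # rejected despite being int-like
except TypeError as e:
    print(e)
```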
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_bicubic_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 3966, in interpolate
return torch._C._nn.upsample_bicubic2d(input, output_size, align_corners, scale_factors)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_linear_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 3953, in interpolate
return torch._C._nn.upsample_linear1d(input, output_size, align_corners, scale_factors)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_nearest_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 3928, in interpolate
return torch._C._nn.upsample_nearest1d(input, output_size, scale_factors)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_trilinear_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 3961, in interpolate
return torch._C._nn.upsample_trilinear3d(input, output_size, align_corners, scale_factors)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_max_pool1d_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_jit_internal.py", line 485, in fn
return if_false(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 696, in _max_pool1d
return torch.max_pool1d(input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_max_pool3d_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.max_pool3d_with_indices.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_jit_internal.py", line 483, in fn
return if_true(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 843, in max_pool3d_with_indices
return torch._C._nn.max_pool3d_with_indices(input, kernel_size, stride, padding, dilation, ceil_mode)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool1d_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.max_unpool2d.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 948, in max_unpool1d
return torch._C._nn.max_unpool2d(input.unsqueeze(-1), indices.unsqueeze(-1), output_size).squeeze(-1)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool1d_grad_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.max_unpool2d.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 948, in max_unpool1d
return torch._C._nn.max_unpool2d(input.unsqueeze(-1), indices.unsqueeze(-1), output_size).squeeze(-1)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool2d_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.max_unpool2d.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 980, in max_unpool2d
return torch._C._nn.max_unpool2d(input, indices, output_size)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool2d_grad_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.max_unpool2d.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 980, in max_unpool2d
return torch._C._nn.max_unpool2d(input, indices, output_size)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool3d_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.max_unpool3d.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 1012, in max_unpool3d
return torch._C._nn.max_unpool3d(input, indices, output_size, _stride, padding)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool3d_grad_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.max_unpool3d.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 1012, in max_unpool3d
return torch._C._nn.max_unpool3d(input, indices, output_size, _stride, padding)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_multi_margin_loss_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.multi_margin_loss.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 3534, in multi_margin_loss
return torch._C._nn.multi_margin_loss(input, target, p, margin, weight, reduction_enum)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_multilabel_margin_loss_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.multilabel_margin_loss_forward.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 3393, in multilabel_margin_loss
return torch._C._nn.multilabel_margin_loss(input, target, reduction_enum)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_nll_loss_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.nll_loss2d_forward.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 2705, in nll_loss
return torch._C._nn.nll_loss_nd(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_pad_reflect_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_pad_replicate_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_pdist_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten._pdist_forward.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_pixel_shuffle_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_pixel_unshuffle_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_prelu_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 485, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 510, in inner_torch_dispatch
out = proxy_call(self, func, args, kwargs)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 341, in proxy_call
out = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 845, in __torch_dispatch__
return decomposition_table[func](*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_decomp/decompositions.py", line 69, in inner
r = f(*tree_map(increase_prec, args), **tree_map(increase_prec, kwargs))
File "/data/users/ezyang/b/pytorch/torch/_decomp/decompositions.py", line 279, in prelu_backward
out = weight_grad_collector.sum_to_size(cur_weight.shape)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 849, in __torch_dispatch__
r = func.decompose(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 319, in decompose
return self._op_dk(dk, *args, **kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0], [1], primals_2: f32[], [], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
prelu: f32[s0], [1] = torch.ops.aten.prelu.default(primals_1, primals_2)
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(prelu, tangents_1); prelu = None
prelu_backward = torch.ops.aten.prelu_backward.default(tangents_1, primals_1, primals_2); tangents_1 = primals_1 = primals_2 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_rrelu_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.rrelu_with_noise.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 11937, in <lambda>
wrapper_set_seed(torch.nn.functional.rrelu, input, *args, **kwargs),
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 1682, in rrelu
result = torch.rrelu(input, lower, upper, training)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_smooth_l1_loss_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 3204, in smooth_l1_loss
return torch._C._nn.smooth_l1_loss(expanded_input, expanded_target, _Reduction.get_enum(reduction), beta)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_nn_functional_upsample_nearest_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 4023, in upsample_nearest
return interpolate(input, size, scale_factor, mode="nearest")
File "/data/users/ezyang/b/pytorch/torch/nn/functional.py", line 3928, in interpolate
return torch._C._nn.upsample_nearest1d(input, output_size, scale_factors)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/nn/functional.py:4022: UserWarning: nn.functional.upsample_nearest is deprecated. Use nn.functional.interpolate instead.
warnings.warn("nn.functional.upsample_nearest is deprecated. Use nn.functional.interpolate instead.")
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_norm_nuc_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/functional.py", line 1522, in norm
return _VF.nuclear_norm(input, keepdim=keepdim)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_normal_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 14831, in <lambda>
wrapper_set_seed(torch.normal, inp, *args, **kwargs),
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_normal_number_mean_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 14854, in <lambda>
wrapper_set_seed(torch.normal, mean, std, *args, **kwargs),
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_ormqr_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.ormqr.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_outer_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_pca_lowrank_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13751, in <lambda>
op=lambda *args, **kwargs: wrapper_set_seed(
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13752, in <lambda>
lambda a, b, **kwargs: torch.pca_lowrank(a @ b.mT, **kwargs),
File "/data/users/ezyang/b/pytorch/torch/_lowrank.py", line 275, in pca_lowrank
return _svd_lowrank(A, q, niter=niter, M=None)
File "/data/users/ezyang/b/pytorch/torch/_lowrank.py", line 161, in _svd_lowrank
Q = get_approximate_basis(A_t, q, niter=niter, M=M_t)
File "/data/users/ezyang/b/pytorch/torch/_lowrank.py", line 70, in get_approximate_basis
Q = torch.linalg.qr(matmul(A, R)).Q
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_permute_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_pinverse_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_polar_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.polar.out at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_0_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13798, in <lambda>
op=lambda x, n, **kwargs: torch.polygamma(n, x, **kwargs),
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_1_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13811, in <lambda>
op=lambda x, n, **kwargs: torch.polygamma(n, x, **kwargs),
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_2_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13834, in <lambda>
op=lambda x, n, **kwargs: torch.polygamma(n, x, **kwargs),
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_3_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13855, in <lambda>
op=lambda x, n, **kwargs: torch.polygamma(n, x, **kwargs),
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_4_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13876, in <lambda>
op=lambda x, n, **kwargs: torch.polygamma(n, x, **kwargs),
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_prod_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s0, s0], [s0**2, s0, 1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
prod: f32[], [] = torch.ops.aten.prod.default(primals_1)
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(prod, tangents_1); prod = tangents_1 = None
view: f32[s0**3], [1] = torch.ops.aten.view.default(primals_1, [-1]); primals_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_put_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.take.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 485, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 510, in inner_torch_dispatch
out = proxy_call(self, func, args, kwargs)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 341, in proxy_call
out = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s0], [s0, 1], primals_2: i64[s0], [1], primals_3: f32[s0], [1], tangents_1: f32[s0, s0], [s0, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
put: f32[s0, s0], [s0, 1] = torch.ops.aten.put.default(primals_1, primals_2, primals_3, True); primals_1 = primals_3 = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(put, tangents_1); put = None
take = torch.ops.aten.take.default(tangents_1, primals_2); tangents_1 = primals_2 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_qr_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566: UserWarning: torch.qr is deprecated in favor of torch.linalg.qr and will be removed in a future PyTorch release.
The boolean parameter 'some' has been replaced with a string parameter 'mode'.
Q, R = torch.qr(A, some)
should be replaced with
Q, R = torch.linalg.qr(A, 'reduced' if some else 'complete') (Triggered internally at /data/users/ezyang/b/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp:2458.)
return op.op(*c_args, **c_kwargs)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_ravel_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_renorm_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_reshape_as_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_reshape_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_roll_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 845, in __torch_dispatch__
return decomposition_table[func](*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_refs/__init__.py", line 3259, in roll
t0 = torch.narrow(a, dim, start, size - start)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 849, in __torch_dispatch__
r = func.decompose(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 319, in decompose
return self._op_dk(dk, *args, **kwargs)
IndexError: Dimension out of range (expected to be in range of [-s0, s0 - 1], but got Mod(-2, s0))
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_segment_reduce_lengths_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.segment_reduce.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_segment_reduce_offsets_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.segment_reduce.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_select_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_sgn_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s0], [s0, 1], tangents_1: f32[s0, s0], [s0, 1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
sgn: f32[s0, s0], [s0, 1] = torch.ops.aten.sgn.default(primals_1); primals_1 = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(sgn, tangents_1); sgn = tangents_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_slice_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_special_i1_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 485, in __torch_dispatch__
return self.inner_torch_dispatch(func, types, args, kwargs)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 510, in inner_torch_dispatch
out = proxy_call(self, func, args, kwargs)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 341, in proxy_call
out = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0], [1], tangents_1: f32[s0], [1], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
special_i1: f32[s0], [1] = torch.ops.aten.special_i1.default(primals_1)
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(special_i1, tangents_1); tangents_1 = None
abs_1: f32[s0], [1] = torch.ops.aten.abs.default(primals_1)
gt: b8[s0], [1] = torch.ops.aten.gt.Scalar(abs_1, 1.1920928955078125e-07); abs_1 = None
full: f32[], [] = torch.ops.aten.full.default([], 1.1920928955078125e-07, dtype = torch.float32, layout = torch.strided, device = device(type='cpu'))
where: f32[s0], [1] = torch.ops.aten.where.self(gt, primals_1, full); gt = primals_1 = full = None
reciprocal: f32[s0], [1] = torch.ops.aten.reciprocal.default(where)
mul: f32[s0], [1] = torch.ops.aten.mul.Tensor(special_i1, reciprocal); special_i1 = reciprocal = None
i0 = torch.ops.aten.i0.default(where); where = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_special_polygamma_special_polygamma_n_0_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/opinfo/definitions/special.py", line 177, in <lambda>
op=lambda x, n, **kwargs: torch.special.polygamma(n, x, **kwargs),
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_split_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_split_list_args_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_split_with_sizes_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_squeeze_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_std_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s0, s0], [s0**2, s0, 1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
std: f32[], [] = torch.ops.aten.std.correction(primals_1, None, correction = 1); primals_1 = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(std, tangents_1)
mul: f32[], [] = torch.ops.aten.mul.Scalar(std, 2)
div: f32[], [] = torch.ops.aten.div.Tensor(tangents_1, mul); tangents_1 = mul = None
eq: b8[], [] = torch.ops.aten.eq.Scalar(std, 0); std = None
masked_fill: f32[], [] = torch.ops.aten.masked_fill.Scalar(div, eq, 0); div = eq = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_std_mean_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s0, s0], [s0**2, s0, 1], tangents_1: f32[], [], tangents_2: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
std_mean = torch.ops.aten.std_mean.correction(primals_1, None, correction = 1); primals_1 = None
getitem: f32[], [] = std_mean[0]
getitem_1: f32[], [] = std_mean[1]; std_mean = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(getitem, tangents_1)
is_same_size_1 = torch.ops.aten.is_same_size.default(getitem_1, tangents_2); getitem_1 = tangents_2 = None
mul: f32[], [] = torch.ops.aten.mul.Scalar(getitem, 2)
div: f32[], [] = torch.ops.aten.div.Tensor(tangents_1, mul); tangents_1 = mul = None
eq: b8[], [] = torch.ops.aten.eq.Scalar(getitem, 0); getitem = None
masked_fill: f32[], [] = torch.ops.aten.masked_fill.Scalar(div, eq, 0); div = eq = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_stft_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/functional.py", line 639, in stft
input = F.pad(input.view(extended_shape), [pad, pad], pad_mode)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_sum_to_size_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 9041, in <lambda>
op=lambda x, *args, **kwargs: x.sum_to_size(*args, **kwargs),
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_svd_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_svd_lowrank_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13724, in <lambda>
op=lambda *args, **kwargs: wrapper_set_seed(
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13725, in <lambda>
lambda a, b, **kwargs: torch.svd_lowrank(a @ b.mT, **kwargs),
File "/data/users/ezyang/b/pytorch/torch/_lowrank.py", line 137, in svd_lowrank
return _svd_lowrank(A, q=q, niter=niter, M=M)
File "/data/users/ezyang/b/pytorch/torch/_lowrank.py", line 161, in _svd_lowrank
Q = get_approximate_basis(A_t, q, niter=niter, M=M_t)
File "/data/users/ezyang/b/pytorch/torch/_lowrank.py", line 70, in get_approximate_basis
Q = torch.linalg.qr(matmul(A, R)).Q
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_symeig_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566: UserWarning: torch.symeig is deprecated in favor of torch.linalg.eigh and will be removed in a future PyTorch release.
The default behavior has changed from using the upper triangular portion of the matrix by default to using the lower triangular portion.
L, _ = torch.symeig(A, upper=upper)
should be replaced with
L = torch.linalg.eigvalsh(A, UPLO='U' if upper else 'L')
and
L, V = torch.symeig(A, eigenvectors=True)
should be replaced with
L, V = torch.linalg.eigh(A, UPLO='U' if upper else 'L') (Triggered internally at /data/users/ezyang/b/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp:2909.)
return op.op(*c_args, **c_kwargs)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_t_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_take_along_dim_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_take_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 379, in _get_dispatch
final_key = resolve_key(self, key)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 107, in resolve_key
raise NotImplementedError(f"could not find kernel for {op} at dispatch key {k}")
NotImplementedError: could not find kernel for aten.take.default at dispatch key DispatchKey.Meta
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 880, in __torch_dispatch__
return run_fallback_kernel(self, func, args, kwargs, not_implemented_error)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1027, in run_fallback_kernel
args = tree_map(to_real_tensor, args)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/utils/_pytree.py", line 195, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 1020, in to_real_tensor
out = torch.zeros_like(e, device=e.fake_device)
RuntimeError: Cannot call strides() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_tensordot_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/functional.py", line 1100, in tensordot
return _VF.tensordot(a, b, dims_a, dims_b) # type: ignore[attr-defined]
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_to_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 12279, in <lambda>
op=lambda x, *args, **kwargs: x.to(*args, **kwargs),
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 867, in __torch_dispatch__
op_impl_out = op_impl(self, func, *args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 353, in to_copy
return FakeTensor(fake_mode, aten._to_copy(input, **new_kwargs), out_device)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 484, in __call__
return self._op(*args, **kwargs or {})
File "/data/users/ezyang/b/pytorch/torch/_meta_registrations.py", line 1661, in _to_copy
return torch.empty(
File "/data/users/ezyang/b/pytorch/torch/_prims_common/wrappers.py", line 209, in _fn
result = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_refs/__init__.py", line 3925, in empty
check(
File "/data/users/ezyang/b/pytorch/torch/_prims_common/__init__.py", line 1505, in check
raise exc_type(s())
RuntimeError: torch.empty: the Preserve memory format is not supported
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_trace_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s0], [s0, 1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
trace: f32[], [] = torch.ops.aten.trace.default(primals_1)
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(trace, tangents_1); trace = tangents_1 = None
sym_size: Sym(s0) = torch.ops.aten.sym_size(primals_1, 0)
sym_size_1: Sym(s0) = torch.ops.aten.sym_size(primals_1, 1); primals_1 = None
# No stacktrace found for following nodes
mul: Sym(s0**2) = sym_size * sym_size_1; sym_size = sym_size_1 = None
# Gradient addition node due to multiple use of tensor around:
zeros: f32[s0**2], [1] = torch.ops.aten.zeros.default([mul], dtype = torch.float32, layout = torch.strided, device = device(type='cpu')); mul = None
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_transpose_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_trapezoid_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_trapz_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_triangular_solve_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/_subclasses/fake_tensor.py", line 875, in __torch_dispatch__
r = func(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/torch/_ops.py", line 297, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_unbind_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_unflatten_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_unfold_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_unsqueeze_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_var_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s0, s0], [s0**2, s0, 1], tangents_1: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
var: f32[], [] = torch.ops.aten.var.correction(primals_1, None, correction = 1); primals_1 = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(var, tangents_1); var = tangents_1 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_var_mean_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 907, in aot_dispatch_autograd
fx_g = make_fx(
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 685, in wrapped
t = dispatch_trace(wrap_key(func, args, fx_tracer), tracer=fx_tracer, concrete_args=tuple(phs))
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 440, in dispatch_trace
graph = tracer.trace(root, concrete_args)
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 739, in trace
(self.create_arg(fn(*args)),),
File "/data/users/ezyang/b/pytorch/torch/fx/_symbolic_trace.py", line 614, in flatten_fn
tree_out = root_fn(*tree_args)
File "/data/users/ezyang/b/pytorch/torch/fx/experimental/proxy_tensor.py", line 459, in wrapped
out = f(*tensors)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 510, in functionalized_joint
outs = joint_forward_backward(f_primals, f_tangents)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 478, in joint_forward_backward
backward_out = torch.autograd.grad(
File "/data/users/ezyang/b/pytorch/torch/autograd/__init__.py", line 300, in grad
return Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
RuntimeError: Cannot call numel() on tensor with symbolic sizes/strides
----------------------------- Captured stdout call -----------------------------
incomplete graph:
class functionalized_joint(torch.nn.Module):
def forward(self, primals, tangents):
primals_1: f32[s0, s0, s0], [s0**2, s0, 1], tangents_1: f32[], [], tangents_2: f32[], [], = fx_pytree.tree_flatten_spec([primals, tangents], self._in_spec)
# File: /data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py:1566, code: return op.op(*c_args, **c_kwargs)
var_mean = torch.ops.aten.var_mean.correction(primals_1, None, correction = 1); primals_1 = None
getitem: f32[], [] = var_mean[0]
getitem_1: f32[], [] = var_mean[1]; var_mean = None
# Gradient addition node due to multiple use of tensor around:
is_same_size = torch.ops.aten.is_same_size.default(getitem, tangents_1); getitem = tangents_1 = None
is_same_size_1 = torch.ops.aten.is_same_size.default(getitem_1, tangents_2); getitem_1 = tangents_2 = None
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_view_as_complex_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_view_as_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 13942, in <lambda>
op=lambda x, other: x.view_as(other),
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_view_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1370, in returned_function
out = cached_fn(flat_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 570, in g
return f(*args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1125, in compiled_function
fw_outs_including_aliases.append(input_alias.as_strided(out_tensor_meta.size(), out_tensor_meta.stride(), out_tensor_meta.storage_offset()))
RuntimeError: Expected !is_symbolic() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
----------------------------- Captured stderr call -----------------------------
/data/users/ezyang/b/pytorch/torch/storage.py:315: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.
warnings.warn(message, UserWarning)
_ TestEagerFusionOpInfoCPU.test_aot_autograd_symbolic_exhaustive_vsplit_cpu_float32 _
Traceback (most recent call last):
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1634, in test_aot_autograd_symbolic_exhaustive
_test_aot_autograd_helper(self, device, dtype, op)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1590, in _test_aot_autograd_helper
call_forwards_backwards(compiled_f)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1569, in call_forwards_backwards
out = wrapper_set_seed(f, args)
File "/data/users/ezyang/b/pytorch/torch/testing/_internal/common_methods_invocations.py", line 7976, in wrapper_set_seed
return op(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1362, in returned_function
compiled_fn = _create_aot_dispatcher_function(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1191, in _create_aot_dispatcher_function
aot_dispatch_autograd(flat_fn, fake_flat_tensor_args, aot_config)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 851, in aot_dispatch_autograd
_fw_metadata, out = run_functionalized_fw_and_collect_metadata(
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 225, in inner
outs = f(*f_args)
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 852, in <lambda>
lambda *args: flat_fn(*(add_dupe_args(args))),
File "/data/users/ezyang/b/pytorch/functorch/_src/aot_autograd.py", line 1318, in flat_fn
tree_out = fn(*args, **kwargs)
File "/data/users/ezyang/b/pytorch/test/functorch/test_aotdispatch.py", line 1566, in f
return op.op(*c_args, **c_kwargs)
RuntimeError: Cannot call sizes() on tensor with symbolic sizes/strides
----------------------- CSV report: test_aotdispatch.csv -----------------------
=========================== short test summary info ============================
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_H_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_T_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___getitem___cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive___rmatmul___cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addmv_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_addr_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_amax_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_amin_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_atleast_1d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_atleast_2d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_atleast_3d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_baddbmm_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_block_diag_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_broadcast_tensors_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_broadcast_to_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cartesian_prod_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cdist_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cholesky_inverse_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cholesky_solve_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_chunk_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_column_stack_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_combinations_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cross_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cummax_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cummin_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cumprod_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cumsum_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_cumulative_trapezoid_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diagonal_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_diff_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_digamma_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_dsplit_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_expand_as_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_expand_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_fft2_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_fft_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_fftn_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_fftshift_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_hfft2_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_hfft_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_hfftn_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ifft2_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ifft_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ifftn_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ifftshift_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ihfft2_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ihfft_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_ihfftn_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_irfft2_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_irfft_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_irfftn_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_rfft2_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_rfft_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_fft_rfftn_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_flatten_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_frexp_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_gradient_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_hsplit_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_i0_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_inner_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_kron_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_kthvalue_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_cholesky_ex_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_cond_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_cross_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_det_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_det_singular_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_eigh_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_eigvals_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_eigvalsh_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_inv_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_inv_ex_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lu_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lu_factor_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_lu_factor_ex_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_matrix_norm_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_matrix_power_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_multi_dot_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_norm_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_norm_subgradients_at_zero_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_pinv_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_pinv_hermitian_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_qr_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_slogdet_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_solve_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_solve_ex_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_solve_triangular_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_svd_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_svdvals_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_tensorinv_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_tensorsolve_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_linalg_vander_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logaddexp2_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logaddexp_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logcumsumexp_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logdet_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_logsumexp_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lu_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lu_solve_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_lu_unpack_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mH_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mT_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_amax_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_amin_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_cumprod_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_cumsum_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_fill_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_logaddexp_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_logsumexp_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_prod_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_masked_scatter_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_matmul_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_matrix_exp_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_median_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_meshgrid_list_of_tensors_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_meshgrid_variadic_tensors_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_min_reduction_with_dim_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mode_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_movedim_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_mv_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional__scaled_dot_product_attention_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_avg_pool3d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool1d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool2d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_adaptive_max_pool3d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_avg_pool3d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_bilinear_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_binary_cross_entropy_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_cosine_similarity_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_cross_entropy_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_dropout3d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_embedding_bag_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_fractional_max_pool2d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_fractional_max_pool3d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_grid_sample_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_group_norm_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_area_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_bicubic_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_linear_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_nearest_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_interpolate_trilinear_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_pool1d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_pool3d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool1d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool1d_grad_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool2d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool2d_grad_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool3d_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool3d_grad_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_multi_margin_loss_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_multilabel_margin_loss_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_nll_loss_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pad_reflect_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pad_replicate_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pdist_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pixel_shuffle_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_pixel_unshuffle_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_prelu_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_rrelu_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_smooth_l1_loss_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_nn_functional_upsample_nearest_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_norm_nuc_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_normal_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_normal_number_mean_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ormqr_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_outer_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_pca_lowrank_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_permute_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_pinverse_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polar_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_0_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_1_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_2_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_3_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_polygamma_polygamma_n_4_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_prod_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_put_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_qr_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_ravel_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_renorm_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_reshape_as_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_reshape_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_roll_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_segment_reduce_lengths_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_segment_reduce_offsets_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_select_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sgn_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_slice_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_i1_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_special_polygamma_special_polygamma_n_0_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_split_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_split_list_args_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_split_with_sizes_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_squeeze_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_std_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_std_mean_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_stft_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_sum_to_size_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_svd_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_svd_lowrank_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_symeig_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_t_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_take_along_dim_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_take_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_tensordot_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_to_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_trace_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_transpose_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_trapezoid_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_trapz_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_triangular_solve_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unbind_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unflatten_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unfold_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_unsqueeze_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_var_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_var_mean_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_view_as_complex_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_view_as_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_view_cpu_float32
FAILED test/functorch/test_aotdispatch.py::TestEagerFusionOpInfoCPU::test_aot_autograd_symbolic_exhaustive_vsplit_cpu_float32
= 221 failed, 249 passed, 133 skipped, 679 deselected, 15 xfailed in 1463.08s (0:24:23) =