(pytorch) [shunting@devgpu005.nha1 ~/ws/pytorch (dash)]$ git log --oneline a448b3ae9537c0ae233fb9199a4a221fdffbb..0e6c204642a571d5a7cd60be0caeb9b50faca030 torch/_inductor/
ffc202a1b91 Added remove_noop_ops to joint_graph_passes (#124451)
8a45cf4c64c [AOTI] align data_size of the constants (#127610)
6e5c2a1a3bc [inductor] Add missing files to torch_key (#128230)
647815049ec Inductor: Allow small sizes of m for mixed mm autotuning (#127663)
ba81c3c2909 [inductor] add cpp builder code. (take 2) (#125849)
0a6df4fca67 delete inductor config.trace.compile_profile (#127143)
0c7f4353e50 [inductor] simplify indexing (#127661)
d9696ea6248 [AOTInductor] [Tooling] Update NaN and INF Checker for AOTInductor (#127574)
852b7b4c995 [inductor] Enable subprocess-based parallel compile as the default (#126817)
ac51f782fe0 Revert "Complete revamp of float/promotion sympy handling (#126905)"
23c156cd2d6 Revert "[inductor] simplify indexing (#127661)"
70724bdbfee (origin/gh/peterbell10/741/base) Bugfix for nondeterminstic torch_key (#128111)
00c6ca44598 [compiled autograd][cudagraphs] Inputs runtime wrapper to move cpu scalars to cuda (#125382)