This release fixes the following issues, covering regressions and silent correctness bugs:
Torch.compile
Fix excessive cudagraph re-recording for HF LLM models (#152287)
Fix torch.compile on some HuggingFace models (#151154)
Fix crash due to an exception raised inside torch.autocast (#152503)
Improve error logging in torch.compile (#149831)
Mark mutable custom operators as cacheable in torch.compile (#151194)
Implement a workaround for a graph break with older versions of einops (#153925)
Fix an issue with tensor.view(dtype).copy_(...) (#151598); see the sketch below
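
A minimal sketch of the view-as-dtype pattern behind #151598, assuming torch.compile's default backend; the function name bitcast_copy and the shapes are illustrative:

```python
import torch

@torch.compile
def bitcast_copy(dst: torch.Tensor, src: torch.Tensor) -> torch.Tensor:
    # Reinterpret dst's float32 storage as int32, then copy raw bits in;
    # this view(dtype) + copy_ combination is the pattern #151598 fixes.
    dst.view(torch.int32).copy_(src)
    return dst

dst = torch.zeros(4, dtype=torch.float32)
src = torch.arange(4, dtype=torch.int32)
bitcast_copy(dst, src)  # dst now holds the bit patterns 0, 1, 2, 3
```
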
Flex Attention
Fix assertion error due to inductor permuting inputs to flex attention (#151959); see the sketch after this list
Fix performance regression on nanogpt speedrun (#152641)
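
A minimal sketch of the call pattern behind #151959, assuming the torch.nn.attention.flex_attention API; the shapes and the transpose that makes the inputs non-contiguous are illustrative, and whether the compiled path is exercised can depend on the backend and device:

```python
import torch
from torch.nn.attention.flex_attention import flex_attention

# (batch, seq_len, heads, head_dim) transposed into the (B, H, S, D) layout
# flex attention expects; the transpose leaves the tensors non-contiguous.
q = torch.randn(2, 128, 4, 64).transpose(1, 2)
k = torch.randn(2, 128, 4, 64).transpose(1, 2)
v = torch.randn(2, 128, 4, 64).transpose(1, 2)

compiled_attn = torch.compile(flex_attention)
out = compiled_attn(q, k, v)  # previously could trip an inductor assertion
print(out.shape)  # torch.Size([2, 4, 128, 64])
```
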
Distributed
Fix extra CUDA context created by barrier (#149144); see the sketch after this list
Fix an issue with Distributed Fused Adam in ROCm/APEX when using the nccl_ub feature (#150010)
Add a workaround for a random hang in non-blocking API mode with NCCL 2.26 (#154055)
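
A minimal sketch of pinning each rank to its own GPU before the first barrier(), the situation behind #149144; assumes a single-node torchrun launch with the NCCL backend, and the script name in the comment is illustrative:

```python
import os
import torch
import torch.distributed as dist

def main() -> None:
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    # Passing device_id tells the NCCL backend which GPU this rank owns,
    # so collectives such as barrier() need not guess the device.
    dist.init_process_group("nccl", device_id=torch.device(f"cuda:{local_rank}"))
    dist.barrier()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # e.g. torchrun --nproc-per-node=8 this_script.py
```
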
macOS
Fix macOS compilation error with Clang 17 (#151316)
Fix binary kernels producing incorrect results when one of the tensor arguments is a wrapped scalar on MPS devices (#152997); see the sketch below
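
A minimal sketch of the wrapped-scalar pattern behind #152997: a 0-dim CPU tensor passed to a binary op with an MPS tensor is wrapped as a scalar, and previously could produce wrong results:

```python
import torch

if torch.backends.mps.is_available():
    x = torch.arange(4.0, device="mps")
    s = torch.tensor(2.0)  # 0-dim CPU tensor, wrapped as a scalar by the op
    print(x * s)           # expected to match x * 2.0
```
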
Other
Reduce the PyTorch wheel size regression introduced by 128-bit vectorization (#148320) (#152396)
Fix fmsub function definition (#152075)
Fix floating point exception in torch.mkldnn_max_pool2d (#151848)
Fix abnormal inference output with the XPU:1 device (#153067)
Fix illegal instruction caused by grid_sample on Windows (#152613); see the sketch after this list
Fix ONNX decomposition not preserving custom CompositeImplicitAutograd ops (#151826)
Fix error with dynamic linking of the libgomp library (#150084)
Fix segfault in profiler with Python 3.13 (#153848)
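
A minimal sketch of a grid_sample call like the one behind #152613; the identity grid below simply resamples the input onto itself, and all sizes are illustrative:

```python
import torch
import torch.nn.functional as F

inp = torch.randn(1, 1, 8, 8)
# Build an identity sampling grid in [-1, 1] normalized coordinates;
# the grid's last dimension is ordered (x, y).
ys, xs = torch.meshgrid(
    torch.linspace(-1, 1, 8), torch.linspace(-1, 1, 8), indexing="ij"
)
grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)  # (N, H_out, W_out, 2)
out = F.grid_sample(inp, grid, align_corners=True)
print(torch.allclose(out, inp, atol=1e-6))  # True: identity resampling
```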