Releases · PASSIONLab/OpenEquivariance
v0.3.0 (2025-06-22)
This release includes bug fixes and new opaque operations that compose with `torch.compile` on PT 2.4-2.7. These will be unnecessary on PT 2.8+.
Added:
- Opaque variants of major operations via PyTorch `custom_op` declarations. These functions cannot be traced through and fail for JITScript / AOTI; they are shims that enable composition with `torch.compile` pre-PT 2.8 (see the sketch after this list).
- `torch.load` / `torch.save` functionality that, without `torch.compile`, is portable across GPU architectures.
- `.to()` support to move `TensorProduct` and `TensorProductConv` between devices or change datatypes.
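To illustrate the mechanism behind these shims, here is a minimal, self-contained sketch of a PyTorch `custom_op` that `torch.compile` treats as opaque. The op name `demo::opaque_tp` and its body are hypothetical stand-ins, not OpenEquivariance's actual kernels.

```python
import torch

# Hypothetical op for illustration; OpenEquivariance registers its own ops.
@torch.library.custom_op("demo::opaque_tp", mutates_args=())
def opaque_tp(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # torch.compile does not trace into this body; it sees one opaque op.
    return x * y

@opaque_tp.register_fake
def _(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Fake (meta) implementation: gives the compiler shapes/dtypes only.
    return torch.empty_like(x)

@torch.compile
def model(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return opaque_tp(x, y).sum()

print(model(torch.randn(4), torch.randn(4)))
```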
Fixed:
- Gracefully records an error if `libpython.so` is not linked against the C++ extension.
- Resolves Kahan summation and various other bugs on HIP at the O3 compiler-optimization level.
- Removes spawning of multiple contexts on GPU 0 when multiple devices are used.
- Zero-initializes gradient buffers to prevent garbage accumulation in the backward pass (illustrated below).
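For context, a short generic PyTorch sketch of why accumulation buffers must start zeroed; this is not OpenEquivariance code.

```python
import torch

buf = torch.empty(3)    # uninitialized: contents are whatever was in memory
buf += 1.0              # accumulating into garbage yields garbage

grad = torch.zeros(3)   # a gradient buffer must start at zero
grad += 1.0             # accumulation is now well-defined
print(grad)             # tensor([1., 1., 1.])
```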
OpenEquivariance v0.2.0 Release Notes
Our first stable release, v0.2.0, introduces several new features. Highlights include:
- Full HIP support for all kernels.
- Support for `torch.compile`, JITScript, and export; preliminary support for AOTI.
- Faster double-backward performance for training.
- Ability to install versioned releases from PyPI.
- Support for CUDA streams and multiple devices (see the sketch after this list).
- An extensive test suite and newly released documentation.
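A minimal sketch of the stream pattern this enables; the matmul stands in for an OpenEquivariance kernel, and the example assumes a CUDA device is present.

```python
import torch

if torch.cuda.is_available():
    side = torch.cuda.Stream()
    x = torch.randn(256, 256, device="cuda")
    with torch.cuda.stream(side):
        y = x @ x                                   # launched on the side stream
    torch.cuda.current_stream().wait_stream(side)   # order before reuse
    y.record_stream(torch.cuda.current_stream())    # keep the allocator aware
    print(y.norm().item())
```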
If you successfully run OpenEquivariance on a GPU model not listed here, let us know! We can add your name to the list.
Known issues:
- Kahan summation is broken on HIP; fix planned.
- FX + Export + Compile has trouble with PyTorch Dynamo; fix planned.
- AOTI is broken on PT < 2.8; you need the nightly build due to incomplete TorchBind support in prior versions.