
[cp] dispatch flex_attention_backward to CP impl in TorchDispatchMode #152311


Open
wants to merge 14 commits into gh/XilunWu/139/base

Conversation


pytorch-bot bot commented Apr 28, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/152311

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (3 Unrelated Failures)

As of commit 1721481 with merge base ba51f48:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot bot added the ciflow/inductor and oncall: distributed labels Apr 28, 2025
@XilunWu XilunWu marked this pull request as draft April 28, 2025 08:34
XilunWu added a commit that referenced this pull request Apr 28, 2025
XilunWu added a commit that referenced this pull request Apr 28, 2025
@XilunWu XilunWu marked this pull request as ready for review April 28, 2025 23:35
@XilunWu XilunWu requested a review from fegin April 28, 2025 23:35
XilunWu added a commit that referenced this pull request Apr 30, 2025
XilunWu added a commit that referenced this pull request May 9, 2025
XilunWu added a commit that referenced this pull request May 21, 2025
@XilunWu XilunWu requested a review from drisspg May 22, 2025 17:17
Contributor @fegin left a comment

The change to requires_grad is concerning; please address that comment first.

self.comm_module_counts[par]["backward"] = defaultdict(int)
self.comm_module_counts[par][key][func_packet] += 1
# TODO (xilunwu): this is a temporary hack to unblock the issue
# in tracking flex_attention_backward. Need to fix it later on.
Contributor

Can we describe the issue in more detail?

Contributor Author

Added a description of the issue.
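
For context, a minimal standalone sketch of the counting pattern in the snippet above: per-module operator counts keyed by module name, pass ("forward"/"backward"), and operator. The helper name record and the free-standing comm_module_counts are illustrative, not the PR's actual API.

from collections import defaultdict

# Illustrative structure: module name -> pass ("forward"/"backward")
# -> operator -> count, mirroring comm_module_counts in the snippet above.
comm_module_counts: dict = {}

def record(par: str, key: str, func_packet) -> None:
    # Hypothetical helper: lazily create both pass buckets, then count.
    if par not in comm_module_counts:
        comm_module_counts[par] = {
            "forward": defaultdict(int),
            "backward": defaultdict(int),
        }
    comm_module_counts[par][key][func_packet] += 1

record("layers.0.attn", "backward", "flex_attention_backward")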

Contributor

Is it always a lambda, or does it depend on what score_mod you pass in? TBH I don't really know much about this code path.
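
For reference, a score_mod for flex_attention is just a callable taking (score, b, h, q_idx, kv_idx); a lambda and a named function with the same signature should behave the same. A minimal sketch, with illustrative shapes and bias:

import torch
from torch.nn.attention.flex_attention import flex_attention

def rel_bias(score, b, h, q_idx, kv_idx):
    # Add a simple relative-position bias to each attention score.
    return score + (q_idx - kv_idx)

# (batch, heads, seq_len, head_dim); a lambda works equally well here.
q = k = v = torch.randn(1, 1, 128, 64)
out = flex_attention(q, k, v, score_mod=rel_bias)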

device_mesh = mode._sharder._mesh
cp_block_mask = mode._sharder.get_cp_block_mask(mode._sharder._block_mask)

# TODO: save global KV in forward
Contributor

Do we want to save global KVs? Would it cause OOM?
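
For orientation, a minimal sketch of the TorchDispatchMode pattern this PR builds on: intercept a target op in __torch_dispatch__ and reroute it to a custom implementation. This is not the PR's code; aten.mm stands in for flex_attention_backward, and all sharder/block-mask plumbing is omitted.

import torch
from torch.utils._python_dispatch import TorchDispatchMode

class CPDispatchMode(TorchDispatchMode):
    # Hypothetical mode that reroutes one op to a stand-in "CP" implementation.
    def __torch_dispatch__(self, func, types, args=(), kwargs=None):
        kwargs = kwargs or {}
        if func is torch.ops.aten.mm.default:  # stand-in for the real target op
            a, b = args
            # Same math via a different code path, standing in for the CP impl.
            return (a.unsqueeze(0) @ b.unsqueeze(0)).squeeze(0)
        return func(*args, **kwargs)

with CPDispatchMode():
    x = torch.randn(4, 4)
    y = x @ x  # a 2D matmul lowers to aten.mm and is intercepted above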

XilunWu added a commit that referenced this pull request May 27, 2025
@pytorch pytorch deleted a comment from a-r-r-o-w May 28, 2025
XilunWu added a commit that referenced this pull request May 29, 2025
XilunWu added 3 commits May 29, 2025 22:35
@XilunWu XilunWu requested a review from fegin May 30, 2025 06:15
Contributor @drisspg left a comment

Looks good

Labels
ciflow/inductor, module: context parallel, oncall: distributed, release notes: context parallel