
Commit 6a84ea5

Update on "[dtensor][view_op] add as_strided op support to DTensor in FakeTensorMode"
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom):

* #151497
* #151507
* __->__ #151495

## Introduction

`flex_attention`'s FakeTensor propagation, `flex_attention_fake_impl`, [permutes](https://github.com/pytorch/pytorch/blob/fb6ac2f16132f7953711ce6924bc2ee4a033228c/torch/_higher_order_ops/flex_attention.py#L459) the stride of `out` (the attention score) based on `query`'s stride. Enabling `flex_attention` calls on DTensor therefore requires adding `as_strided` support to DTensor in `FakeTensorMode`.

## Limited Support

Because fully supporting `as_strided` on DTensor is complex, this PR enables only a limited subset:

1. `as_strided` works correctly only in `FakeTensorMode`, i.e. for shape and stride propagation.
2. `as_strided` is allowed only when `size == input.shape`, because this PR specifically unblocks the `flex_attention_fake_impl` use case.
3. `as_strided` requires `storage_offset=None`, because any other value is undefined for DTensor.

## Test

`pytest test/distributed/tensor/test_view_ops.py -s -k test_as_strided`

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k tianyu-l

[ghstack-poisoned]
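The stride propagation that this PR unblocks can be illustrated without DTensor or torch. Below is a minimal pure-Python sketch (the helper name is illustrative, not the actual `flex_attention_fake_impl` code) of the idea: take `query`'s dimension ordering by descending stride, lay `out` out contiguously in that order, and the resulting strides are what `as_strided` with `size == out.shape` would apply.

```python
def permute_strides_like(shape, ref_strides):
    """Compute contiguous strides for `shape` that follow the same
    dimension ordering (descending stride) as `ref_strides`.
    Illustrative helper, not the actual flex_attention implementation."""
    # Order dimensions from largest to smallest reference stride.
    order = sorted(range(len(shape)), key=lambda d: ref_strides[d], reverse=True)
    strides = [0] * len(shape)
    running = 1
    # Fill strides innermost-first (smallest reference stride gets 1).
    for d in reversed(order):
        strides[d] = running
        running *= shape[d]
    return strides

# A (B, H, S, D) query whose H and S dims are swapped in memory:
shape = (2, 4, 8, 16)
query_strides = (512, 16, 64, 1)  # memory order: B, S, H, D
print(permute_strides_like(shape, query_strides))  # [512, 16, 64, 1]
```

Since `size == input.shape` always holds in this use case, only the stride metadata changes, which is why the limited `FakeTensorMode`-only support above is sufficient.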
2 parents dc6df79 + e508128 commit 6a84ea5

File tree: 112 files changed, +1875 −1274 lines


.ci/docker/libtorch/build.sh

Lines changed: 28 additions & 48 deletions
@@ -1,83 +1,63 @@
 #!/usr/bin/env bash
 # Script used only in CD pipeline
 
-set -eou pipefail
+set -eoux pipefail
 
 image="$1"
 shift
 
 if [ -z "${image}" ]; then
-    echo "Usage: $0 IMAGE"
+    echo "Usage: $0 IMAGENAME:ARCHTAG"
     exit 1
 fi
 
-DOCKER_IMAGE="pytorch/${image}"
-
 TOPDIR=$(git rev-parse --show-toplevel)
 
-GPU_ARCH_TYPE=${GPU_ARCH_TYPE:-cpu}
-GPU_ARCH_VERSION=${GPU_ARCH_VERSION:-}
+DOCKER=${DOCKER:-docker}
 
-WITH_PUSH=${WITH_PUSH:-}
+# Go from imagename:tag to tag
+DOCKER_TAG_PREFIX=$(echo "${image}" | awk -F':' '{print $2}')
 
-DOCKER=${DOCKER:-docker}
+GPU_ARCH_VERSION=""
+if [[ "${DOCKER_TAG_PREFIX}" == cuda* ]]; then
+    # extract cuda version from image name. e.g. manylinux2_28-builder:cuda12.8 returns 12.8
+    GPU_ARCH_VERSION=$(echo "${DOCKER_TAG_PREFIX}" | awk -F'cuda' '{print $2}')
+elif [[ "${DOCKER_TAG_PREFIX}" == rocm* ]]; then
+    # extract rocm version from image name. e.g. manylinux2_28-builder:rocm6.2.4 returns 6.2.4
+    GPU_ARCH_VERSION=$(echo "${DOCKER_TAG_PREFIX}" | awk -F'rocm' '{print $2}')
+fi
 
-case ${GPU_ARCH_TYPE} in
+case ${DOCKER_TAG_PREFIX} in
     cpu)
         BASE_TARGET=cpu
-        DOCKER_TAG=cpu
         GPU_IMAGE=ubuntu:20.04
         DOCKER_GPU_BUILD_ARG=""
         ;;
-    cuda)
+    cuda*)
         BASE_TARGET=cuda${GPU_ARCH_VERSION}
-        DOCKER_TAG=cuda${GPU_ARCH_VERSION}
         GPU_IMAGE=ubuntu:20.04
         DOCKER_GPU_BUILD_ARG=""
         ;;
-    rocm)
+    rocm*)
         BASE_TARGET=rocm
-        DOCKER_TAG=rocm${GPU_ARCH_VERSION}
         GPU_IMAGE=rocm/dev-ubuntu-22.04:${GPU_ARCH_VERSION}-complete
         PYTORCH_ROCM_ARCH="gfx900;gfx906;gfx908;gfx90a;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
         DOCKER_GPU_BUILD_ARG="--build-arg PYTORCH_ROCM_ARCH=${PYTORCH_ROCM_ARCH} --build-arg ROCM_VERSION=${GPU_ARCH_VERSION}"
         ;;
     *)
-        echo "ERROR: Unrecognized GPU_ARCH_TYPE: ${GPU_ARCH_TYPE}"
+        echo "ERROR: Unrecognized DOCKER_TAG_PREFIX: ${DOCKER_TAG_PREFIX}"
         exit 1
         ;;
 esac
 
-
-(
-    set -x
-    DOCKER_BUILDKIT=1 ${DOCKER} build \
-        --target final \
-        ${DOCKER_GPU_BUILD_ARG} \
-        --build-arg "GPU_IMAGE=${GPU_IMAGE}" \
-        --build-arg "BASE_TARGET=${BASE_TARGET}" \
-        -t "${DOCKER_IMAGE}" \
-        $@ \
-        -f "${TOPDIR}/.ci/docker/libtorch/Dockerfile" \
-        "${TOPDIR}/.ci/docker/"
-
-)
-
-GITHUB_REF=${GITHUB_REF:-$(git symbolic-ref -q HEAD || git describe --tags --exact-match)}
-GIT_BRANCH_NAME=${GITHUB_REF##*/}
-GIT_COMMIT_SHA=${GITHUB_SHA:-$(git rev-parse HEAD)}
-DOCKER_IMAGE_BRANCH_TAG=${DOCKER_IMAGE}-${GIT_BRANCH_NAME}
-DOCKER_IMAGE_SHA_TAG=${DOCKER_IMAGE}-${GIT_COMMIT_SHA}
-
-if [[ "${WITH_PUSH}" == true ]]; then
-    (
-        set -x
-        ${DOCKER} push "${DOCKER_IMAGE}"
-        if [[ -n ${GITHUB_REF} ]]; then
-            ${DOCKER} tag ${DOCKER_IMAGE} ${DOCKER_IMAGE_BRANCH_TAG}
-            ${DOCKER} tag ${DOCKER_IMAGE} ${DOCKER_IMAGE_SHA_TAG}
-            ${DOCKER} push "${DOCKER_IMAGE_BRANCH_TAG}"
-            ${DOCKER} push "${DOCKER_IMAGE_SHA_TAG}"
-        fi
-    )
-fi
+tmp_tag=$(basename "$(mktemp -u)" | tr '[:upper:]' '[:lower:]')
+
+DOCKER_BUILDKIT=1 ${DOCKER} build \
+    --target final \
+    ${DOCKER_GPU_BUILD_ARG} \
+    --build-arg "GPU_IMAGE=${GPU_IMAGE}" \
+    --build-arg "BASE_TARGET=${BASE_TARGET}" \
+    -t "${tmp_tag}" \
+    $@ \
+    -f "${TOPDIR}/.ci/docker/libtorch/Dockerfile" \
+    "${TOPDIR}/.ci/docker/"

.ci/docker/manywheel/build.sh

Lines changed: 43 additions & 85 deletions
@@ -1,160 +1,118 @@
 #!/usr/bin/env bash
 # Script used only in CD pipeline
 
-set -eou pipefail
+set -exou pipefail
 
 TOPDIR=$(git rev-parse --show-toplevel)
 
 image="$1"
 shift
 
 if [ -z "${image}" ]; then
-    echo "Usage: $0 IMAGE"
+    echo "Usage: $0 IMAGE:ARCHTAG"
     exit 1
 fi
 
-DOCKER_IMAGE="pytorch/${image}"
+# Go from imagename:tag to tag
+DOCKER_TAG_PREFIX=$(echo "${image}" | awk -F':' '{print $2}')
 
-DOCKER_REGISTRY="${DOCKER_REGISTRY:-docker.io}"
+GPU_ARCH_VERSION=""
+if [[ "${DOCKER_TAG_PREFIX}" == cuda* ]]; then
+    # extract cuda version from image name. e.g. manylinux2_28-builder:cuda12.8 returns 12.8
+    GPU_ARCH_VERSION=$(echo "${DOCKER_TAG_PREFIX}" | awk -F'cuda' '{print $2}')
+elif [[ "${DOCKER_TAG_PREFIX}" == rocm* ]]; then
+    # extract rocm version from image name. e.g. manylinux2_28-builder:rocm6.2.4 returns 6.2.4
+    GPU_ARCH_VERSION=$(echo "${DOCKER_TAG_PREFIX}" | awk -F'rocm' '{print $2}')
+fi
 
-GPU_ARCH_TYPE=${GPU_ARCH_TYPE:-cpu}
-GPU_ARCH_VERSION=${GPU_ARCH_VERSION:-}
 MANY_LINUX_VERSION=${MANY_LINUX_VERSION:-}
 DOCKERFILE_SUFFIX=${DOCKERFILE_SUFFIX:-}
-WITH_PUSH=${WITH_PUSH:-}
 
-case ${GPU_ARCH_TYPE} in
-    cpu)
-        TARGET=cpu_final
-        DOCKER_TAG=cpu
-        GPU_IMAGE=centos:7
-        DOCKER_GPU_BUILD_ARG=" --build-arg DEVTOOLSET_VERSION=9"
-        ;;
-    cpu-manylinux_2_28)
+case ${image} in
+    manylinux2_28-builder:cpu)
        TARGET=cpu_final
-        DOCKER_TAG=cpu
        GPU_IMAGE=amd64/almalinux:8
        DOCKER_GPU_BUILD_ARG=" --build-arg DEVTOOLSET_VERSION=11"
        MANY_LINUX_VERSION="2_28"
        ;;
-    cpu-aarch64)
+    manylinuxaarch64-builder:cpu-aarch64)
        TARGET=final
-        DOCKER_TAG=cpu-aarch64
        GPU_IMAGE=arm64v8/centos:7
        DOCKER_GPU_BUILD_ARG=" --build-arg DEVTOOLSET_VERSION=10"
        MANY_LINUX_VERSION="aarch64"
        ;;
-    cpu-aarch64-2_28)
+    manylinux2_28_aarch64-builder:cpu-aarch64)
        TARGET=final
-        DOCKER_TAG=cpu-aarch64
        GPU_IMAGE=arm64v8/almalinux:8
        DOCKER_GPU_BUILD_ARG=" --build-arg DEVTOOLSET_VERSION=11 --build-arg NINJA_VERSION=1.12.1"
        MANY_LINUX_VERSION="2_28_aarch64"
        ;;
-    cpu-cxx11-abi)
+    manylinuxcxx11-abi-builder:cpu-cxx11-abi)
        TARGET=final
-        DOCKER_TAG=cpu-cxx11-abi
        GPU_IMAGE=""
        DOCKER_GPU_BUILD_ARG=" --build-arg DEVTOOLSET_VERSION=9"
        MANY_LINUX_VERSION="cxx11-abi"
        ;;
-    cpu-s390x)
+    manylinuxs390x-builder:cpu-s390x)
        TARGET=final
-        DOCKER_TAG=cpu-s390x
        GPU_IMAGE=s390x/almalinux:8
        DOCKER_GPU_BUILD_ARG=""
        MANY_LINUX_VERSION="s390x"
        ;;
-    cuda)
-        TARGET=cuda_final
-        DOCKER_TAG=cuda${GPU_ARCH_VERSION}
-        # Keep this up to date with the minimum version of CUDA we currently support
-        GPU_IMAGE=centos:7
-        DOCKER_GPU_BUILD_ARG="--build-arg BASE_CUDA_VERSION=${GPU_ARCH_VERSION} --build-arg DEVTOOLSET_VERSION=9"
-        ;;
-    cuda-manylinux_2_28)
+    manylinux2_28-builder:cuda*)
        TARGET=cuda_final
-        DOCKER_TAG=cuda${GPU_ARCH_VERSION}
        GPU_IMAGE=amd64/almalinux:8
        DOCKER_GPU_BUILD_ARG="--build-arg BASE_CUDA_VERSION=${GPU_ARCH_VERSION} --build-arg DEVTOOLSET_VERSION=11"
        MANY_LINUX_VERSION="2_28"
        ;;
-    cuda-aarch64)
+    manylinuxaarch64-builder:cuda*)
        TARGET=cuda_final
-        DOCKER_TAG=cuda${GPU_ARCH_VERSION}
        GPU_IMAGE=arm64v8/centos:7
        DOCKER_GPU_BUILD_ARG="--build-arg BASE_CUDA_VERSION=${GPU_ARCH_VERSION} --build-arg DEVTOOLSET_VERSION=11"
        MANY_LINUX_VERSION="aarch64"
        DOCKERFILE_SUFFIX="_cuda_aarch64"
        ;;
-    rocm|rocm-manylinux_2_28)
+    manylinux2_28-builder:rocm*)
        TARGET=rocm_final
-        DOCKER_TAG=rocm${GPU_ARCH_VERSION}
        GPU_IMAGE=rocm/dev-centos-7:${GPU_ARCH_VERSION}-complete
        DEVTOOLSET_VERSION="9"
-        if [ ${GPU_ARCH_TYPE} == "rocm-manylinux_2_28" ]; then
-            MANY_LINUX_VERSION="2_28"
-            DEVTOOLSET_VERSION="11"
-            GPU_IMAGE=rocm/dev-almalinux-8:${GPU_ARCH_VERSION}-complete
-        fi
+        MANY_LINUX_VERSION="2_28"
+        DEVTOOLSET_VERSION="11"
+        GPU_IMAGE=rocm/dev-almalinux-8:${GPU_ARCH_VERSION}-complete
        PYTORCH_ROCM_ARCH="gfx900;gfx906;gfx908;gfx90a;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
        DOCKER_GPU_BUILD_ARG="--build-arg ROCM_VERSION=${GPU_ARCH_VERSION} --build-arg PYTORCH_ROCM_ARCH=${PYTORCH_ROCM_ARCH} --build-arg DEVTOOLSET_VERSION=${DEVTOOLSET_VERSION}"
        ;;
-    xpu)
+    manylinux2_28-builder:xpu)
        TARGET=xpu_final
-        DOCKER_TAG=xpu
        GPU_IMAGE=amd64/almalinux:8
        DOCKER_GPU_BUILD_ARG=" --build-arg DEVTOOLSET_VERSION=11"
        MANY_LINUX_VERSION="2_28"
        ;;
    *)
-        echo "ERROR: Unrecognized GPU_ARCH_TYPE: ${GPU_ARCH_TYPE}"
+        echo "ERROR: Unrecognized image name: ${image}"
        exit 1
        ;;
 esac
 
-IMAGES=''
-
 if [[ -n ${MANY_LINUX_VERSION} && -z ${DOCKERFILE_SUFFIX} ]]; then
     DOCKERFILE_SUFFIX=_${MANY_LINUX_VERSION}
 fi
-(
-    set -x
-
-    # Only activate this if in CI
-    if [ "$(uname -m)" != "s390x" ] && [ -v CI ]; then
-        # TODO: Remove LimitNOFILE=1048576 patch once https://github.com/pytorch/test-infra/issues/5712
-        # is resolved. This patch is required in order to fix timing out of Docker build on Amazon Linux 2023.
-        sudo sed -i s/LimitNOFILE=infinity/LimitNOFILE=1048576/ /usr/lib/systemd/system/docker.service
-        sudo systemctl daemon-reload
-        sudo systemctl restart docker
-    fi
-
-    DOCKER_BUILDKIT=1 docker build \
-        ${DOCKER_GPU_BUILD_ARG} \
-        --build-arg "GPU_IMAGE=${GPU_IMAGE}" \
-        --target "${TARGET}" \
-        -t "${DOCKER_IMAGE}" \
-        $@ \
-        -f "${TOPDIR}/.ci/docker/manywheel/Dockerfile${DOCKERFILE_SUFFIX}" \
-        "${TOPDIR}/.ci/docker/"
-)
+# Only activate this if in CI
+if [ "$(uname -m)" != "s390x" ] && [ -v CI ]; then
+    # TODO: Remove LimitNOFILE=1048576 patch once https://github.com/pytorch/test-infra/issues/5712
+    # is resolved. This patch is required in order to fix timing out of Docker build on Amazon Linux 2023.
+    sudo sed -i s/LimitNOFILE=infinity/LimitNOFILE=1048576/ /usr/lib/systemd/system/docker.service
+    sudo systemctl daemon-reload
+    sudo systemctl restart docker
+fi
 
-GITHUB_REF=${GITHUB_REF:-"dev")}
-GIT_BRANCH_NAME=${GITHUB_REF##*/}
-GIT_COMMIT_SHA=${GITHUB_SHA:-$(git rev-parse HEAD)}
-DOCKER_IMAGE_BRANCH_TAG=${DOCKER_IMAGE}-${GIT_BRANCH_NAME}
-DOCKER_IMAGE_SHA_TAG=${DOCKER_IMAGE}-${GIT_COMMIT_SHA}
+tmp_tag=$(basename "$(mktemp -u)" | tr '[:upper:]' '[:lower:]')
 
-if [[ "${WITH_PUSH}" == true ]]; then
-    (
-        set -x
-        docker push "${DOCKER_IMAGE}"
-        if [[ -n ${GITHUB_REF} ]]; then
-            docker tag ${DOCKER_IMAGE} ${DOCKER_IMAGE_BRANCH_TAG}
-            docker tag ${DOCKER_IMAGE} ${DOCKER_IMAGE_SHA_TAG}
-            docker push "${DOCKER_IMAGE_BRANCH_TAG}"
-            docker push "${DOCKER_IMAGE_SHA_TAG}"
-        fi
-    )
-fi
+DOCKER_BUILDKIT=1 docker build \
+    ${DOCKER_GPU_BUILD_ARG} \
+    --build-arg "GPU_IMAGE=${GPU_IMAGE}" \
+    --target "${TARGET}" \
+    -t "${tmp_tag}" \
+    $@ \
+    -f "${TOPDIR}/.ci/docker/manywheel/Dockerfile${DOCKERFILE_SUFFIX}" \
+    "${TOPDIR}/.ci/docker/"

.ci/pytorch/macos-build.sh

Lines changed: 8 additions & 5 deletions
@@ -34,11 +34,14 @@ if which sccache > /dev/null; then
 fi
 
 print_cmake_info
-
-# Explicitly set USE_DISTRIBUTED=0 to align with the default build config on mac. This also serves as the sole CI config that tests
-# that building with USE_DISTRIBUTED=0 works at all. See https://github.com/pytorch/pytorch/issues/86448
-USE_DISTRIBUTED=0 USE_OPENMP=1 MACOSX_DEPLOYMENT_TARGET=11.0 WERROR=1 BUILD_TEST=OFF USE_PYTORCH_METAL=1 python setup.py bdist_wheel
-
+if [[ ${BUILD_ENVIRONMENT} == *"distributed"* ]]; then
+    # Needed for inductor benchmarks, as lots of HF networks make `torch.distributed` calls
+    USE_DISTRIBUTED=1 USE_OPENMP=1 WERROR=1 python setup.py bdist_wheel
+else
+    # Explicitly set USE_DISTRIBUTED=0 to align with the default build config on mac. This also serves as the sole CI config that tests
+    # that building with USE_DISTRIBUTED=0 works at all. See https://github.com/pytorch/pytorch/issues/86448
+    USE_DISTRIBUTED=0 USE_OPENMP=1 MACOSX_DEPLOYMENT_TARGET=11.0 WERROR=1 BUILD_TEST=OFF USE_PYTORCH_METAL=1 python setup.py bdist_wheel
+fi
 if which sccache > /dev/null; then
     print_sccache_stats
 fi
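The branch above selects build flags from the job name. A Python sketch of the same selection (the helper and job names are illustrative, not part of the CI scripts):

```python
def macos_build_env(build_environment: str) -> dict:
    """Distributed jobs build with USE_DISTRIBUTED=1; everything else keeps
    the mac default of 0 plus the extra deployment flags. Illustrative only."""
    env = {"USE_OPENMP": "1", "WERROR": "1"}
    if "distributed" in build_environment:
        env["USE_DISTRIBUTED"] = "1"
    else:
        env.update({
            "USE_DISTRIBUTED": "0",
            "MACOSX_DEPLOYMENT_TARGET": "11.0",
            "BUILD_TEST": "OFF",
            "USE_PYTORCH_METAL": "1",
        })
    return env

print(macos_build_env("macos-py3-arm64-distributed")["USE_DISTRIBUTED"])  # 1
print(macos_build_env("macos-py3-arm64")["USE_DISTRIBUTED"])              # 0
```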

.ci/pytorch/macos-test.sh

Lines changed: 27 additions & 15 deletions
@@ -221,27 +221,39 @@ test_torchbench_smoketest() {
   TEST_REPORTS_DIR=$(pwd)/test/test-reports
   mkdir -p "$TEST_REPORTS_DIR"
 
-  local dtype=notset
   local device=mps
-  local models=(hf_T5 llama BERT_pytorch dcgan hf_GPT2 yolov3 resnet152)
+  local models=(hf_T5 llama BERT_pytorch dcgan hf_GPT2 yolov3 resnet152 sam pytorch_unet stable_diffusion_text_encoder moco speech_transformer)
 
   for backend in eager inductor; do
-    touch "$TEST_REPORTS_DIR/inductor_${backend}_torchbench_${dtype}_training_${device}_performance.csv"
-    touch "$TEST_REPORTS_DIR/inductor_${backend}_torchbench_${dtype}_inference_${device}_performance.csv"
-
-    echo "Launching torchbench training performance run for backend ${backend}"
-    for model in "${models[@]}"; do
-      PYTHONPATH="$(pwd)"/torchbench python benchmarks/dynamo/torchbench.py \
-        --performance --only "$model" --backend "$backend" --training --devices "$device" \
-        --output "$TEST_REPORTS_DIR/inductor_${backend}_torchbench_${dtype}_training_${device}_performance.csv" || true
+
+    for dtype in notset float16 bfloat16; do
+      echo "Launching torchbench inference performance run for backend ${backend} and dtype ${dtype}"
+      local dtype_arg="--${dtype}"
+      if [ "$dtype" == notset ]; then
+        dtype_arg="--float32"
+      fi
+      touch "$TEST_REPORTS_DIR/inductor_${backend}_torchbench_${dtype}_inference_${device}_performance.csv"
+      for model in "${models[@]}"; do
+        PYTHONPATH="$(pwd)"/torchbench python benchmarks/dynamo/torchbench.py \
+          --performance --only "$model" --backend "$backend" --inference --devices "$device" "$dtype_arg" \
+          --output "$TEST_REPORTS_DIR/inductor_${backend}_torchbench_${dtype}_inference_${device}_performance.csv" || true
+      done
     done
 
-    echo "Launching torchbench inference performance run for backend ${backend}"
-    for model in "${models[@]}"; do
-      PYTHONPATH="$(pwd)"/torchbench python benchmarks/dynamo/torchbench.py \
-        --performance --only "$model" --backend "$backend" --inference --devices "$device" \
-        --output "$TEST_REPORTS_DIR/inductor_${backend}_torchbench_${dtype}_inference_${device}_performance.csv" || true
+    for dtype in notset amp; do
+      echo "Launching torchbench training performance run for backend ${backend} and dtype ${dtype}"
+      touch "$TEST_REPORTS_DIR/inductor_${backend}_torchbench_${dtype}_training_${device}_performance.csv"
+      local dtype_arg="--${dtype}"
+      if [ "$dtype" == notset ]; then
+        dtype_arg="--float32"
+      fi
+      for model in "${models[@]}"; do
+        PYTHONPATH="$(pwd)"/torchbench python benchmarks/dynamo/torchbench.py \
+          --performance --only "$model" --backend "$backend" --training --devices "$device" "$dtype_arg" \
+          --output "$TEST_REPORTS_DIR/inductor_${backend}_torchbench_${dtype}_training_${device}_performance.csv" || true
+      done
     done
+
   done
 
   echo "Pytorch benchmark on mps device completed"

.github/scripts/trymerge.py

Lines changed: 1 addition & 1 deletion
@@ -434,7 +434,7 @@ def __init__(self, name: str, url: str, run_id: int, status: Optional[str]):
 RE_GHSTACK_HEAD_REF = re.compile(r"^(gh/[^/]+/[0-9]+/)head$")
 RE_GHSTACK_DESC = re.compile(r"Stack.*:\r?\n(\* [^\r\n]+\r?\n)+", re.MULTILINE)
 RE_PULL_REQUEST_RESOLVED = re.compile(
-    r"Pull Request resolved: "
+    r"(Pull Request resolved|Pull-Request-resolved): "
     r"https://github.com/(?P<owner>[^/]+)/(?P<repo>[^/]+)/pull/(?P<number>[0-9]+)",
     re.MULTILINE,
 )
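The regex change can be exercised directly with the pattern from the diff; both trailer spellings now match and yield the same named groups:

```python
import re

# Updated pattern from trymerge.py: accept the spaced and hyphenated spellings.
RE_PULL_REQUEST_RESOLVED = re.compile(
    r"(Pull Request resolved|Pull-Request-resolved): "
    r"https://github.com/(?P<owner>[^/]+)/(?P<repo>[^/]+)/pull/(?P<number>[0-9]+)",
    re.MULTILINE,
)

for line in (
    "Pull Request resolved: https://github.com/pytorch/pytorch/pull/151495",
    "Pull-Request-resolved: https://github.com/pytorch/pytorch/pull/151495",
):
    m = RE_PULL_REQUEST_RESOLVED.search(line)
    print(m.group("owner"), m.group("repo"), m.group("number"))  # pytorch pytorch 151495
```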
