Failure cluster [e2e90f0c...] Connectivity Pod Lifecycle should be able to have zero downtime on a Blue Green deployment using Services and Readiness Gates #131707
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label.
This flakes a lot on the https://testgrid.k8s.io/sig-release-master-blocking#kind-ipv6-master-parallel master-blocking board.
OK, it fails now because kube-proxy in the GCE jobs has a sync interval of 10 seconds to program the dataplane.
This means it is not reliable to sync on the number of responses to validate that the dataplane is programmed. cc: @aroradaman
No flakes in https://testgrid.k8s.io/sig-release-master-blocking#kind-ipv6-master-parallel after #131780 was merged.
Failure cluster e2e90f0c9cc473dc1dbf
Error text:
Recent failures:
5/9/2025, 9:35:32 AM e2e-kops-aws-k8s-latest
5/9/2025, 9:10:25 AM pr:pull-kubernetes-e2e-ec2
5/9/2025, 3:58:53 AM pr:pull-kubernetes-e2e-gce
5/9/2025, 3:20:17 AM ci-kubernetes-e2e-gci-gce-containerd
5/9/2025, 3:06:45 AM pr:pull-kubernetes-e2e-kind-beta-features
/kind failing-test
/sig network
Spotted in https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/131702/pull-kubernetes-e2e-kind/1920954886661345280