Implement DRA Device Binding Conditions (KEP-5007) #130160
Conversation
Hi @KobayashiD27. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
We are pleased to share the initial implementation of the KEP-5007 DRA device binding conditions. While this PR aligns with the design outlined in the KEP, we recognize that there may be areas for improvement. We invite the community to review the implementation and provide feedback and insights to help refine and enhance this feature. @pohly @johnbelamaric
Some quick comments. I have looked at the API and allocator, but not the scheduler plugin.
The API lacks validation, but that's of course okay when the main goal for now is to try out the functionality.
```go
const (
	// IsPrepared indicates the device ready state.
	// If NeedToPreparing is True and IsPrepared is True, the scheduler proceeds to Bind.
	IsPrepared = "dra.example.com/is-prepared"
```
This was just an example in the KEP. It doesn't belong in the upstream API. Same for PreparingFailed.
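For illustration, a rough sketch of keeping such names on the driver side instead; the package name and the second constant's value are assumptions for this example only, not part of any upstream API:

```go
// Package exampledriver is a hypothetical vendor driver package; it, not the
// resource.k8s.io API, owns these condition names.
package exampledriver

const (
	// IsPrepared is set to True by the driver's control plane once the
	// device has been prepared/attached and binding may proceed.
	IsPrepared = "dra.example.com/is-prepared"

	// PreparingFailed is set to True when preparation failed, signalling
	// that the allocation should be rolled back.
	PreparingFailed = "dra.example.com/preparing-failed"
)
```

A driver would then advertise these names via the device's BindingConditions / BindingFailureConditions fields described in the KEP, rather than relying on constants shipped with Kubernetes.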
```go
	})
}
}
```
Let's add the device status only when needed by a device.
You also have to add feature gate checking: I don't remember whether it was spelled out explicitly in the KEP (if not, please add in a follow-up), but what would make sense to me is to ignore devices which have binding conditions when the feature is turned off. In other words, don't select them because the code which waits during binding wouldn't be active.
I add allocatedDeviceStatus only when BindingConditions exist, and I pass the feature gate status of BindingConditions to the allocator and check it there.
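A minimal sketch of what that check amounts to; the Allocator and Device types and field names here are placeholders rather than the actual structured-allocator types:

```go
package sketch

// Placeholder types for this sketch only.
type Device struct {
	BindingConditions []string
}

type Allocator struct {
	bindingConditionsEnabled bool // feature gate value passed in by the scheduler plugin
}

// deviceUsable reports whether a candidate device may be allocated.
// When the binding-conditions feature gate is disabled, devices that declare
// binding conditions are skipped entirely: the PreBind code that waits for
// those conditions would not be active, so selecting such a device could
// leave the pod stuck.
func (a *Allocator) deviceUsable(device *Device) bool {
	if len(device.BindingConditions) == 0 {
		return true
	}
	return a.bindingConditionsEnabled
}
```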
@pohly
Early feedback on these sections would be very helpful. Additionally, regarding the comment about the lack of API validation, are you referring to
/milestone v1.33
The concern was raised in this PR comment: #130160 (comment). Here's what Tim said there:
The KEP calls out attachable devices as the only use case for this feature, and it specifically mentions that binding is supposed to fail on successful attachment. Is there a change in how attachment use cases should be modeled (using proxy devices)? If so, we probably need to update the KEP as well. Note that it's very important how the attachment is modeled in scheduling; we had some thoughts and discussions, but no clear conclusions yet. I wanted first to have clarity on what your current thinking is and whether we should take this model into consideration when planning changes to the scheduler this cycle. @pohly @johnbelamaric WDYT?
This is still a standing concern. What we need to decide is whether it's worth proceeding with this feature in-tree (i.e. merge it) or keeping it out-of-tree until the usage as part of composable disaggregated infrastructure is clarified.

I think the feature potentially has some merit also in other areas, even if those are not called out in the KEP and not of high priority. Promotion to beta absolutely has to depend on identifying those other usages or satisfying all concerns around using this feature for composable disaggregated infrastructure - we should have this in the KEP under beta criteria.

Having it in-tree has the advantage that we can consider the need of handling binding conditions while making other changes in the scheduler plugin or framework.

Overall I am in favor of moving ahead with this in 1.34, with the caveat that reviewers will have to prioritize and this is not one of the critical features.
My concern is that the KEP still refers to the anti-pattern of failing binding on successful device attachment. I worry that if we proceed, that pattern becomes in fact enabled and we will have device implementations relying on it. This pattern may be problematic whenever we have workloads that are planned ahead. Failing binding should be a signal for workload rescheduling, not retry, so it would be hard to distinguish the two. @wojtek-t do you have any thoughts on that?
Do you have an example where the feature may be useful as well? Maybe we should update the KEP and remove the composable disaggregated infrastructure case if this is not the solution.
We had an agreement during the KEP review that we can postpone addressing the problem until before beta, but Tim questioned that. I don't mind proceeding, as long as we have alignment here.
Potentially for some app-controlled gang scheduling. It was just a wild idea, nothing that we pursued further.
Other than that, it fills a gap compared to what is possible with the storage API, which has control-plane attach/detach of volumes. The uncertainty is whether any DRA driver actually needs this.
Yes, if "failure is success" is still the approach, that concerns me. I think a more deterministic approach would be a proxy driver / proxy devices, as you say.

In that approach, the underlying device DRA driver (e.g., the NVIDIA driver) would register with a node-local proxy or shim driver instead of with kubelet. Additionally, that's where it would publish its resource slices. That proxy driver would provide a communication channel with the disaggregated device controller, such that it is aware of the handoff of the device from the controller to the node-local driver. This means that we simply "proceed" with binding once the handoff is complete. From the point of view of the scheduler, the device originally allocated for the claim continues to be the same device that gets passed to the node.

Has this avenue been explored?
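To make the suggested flow a bit more concrete, here is a very rough sketch of such a shim; every interface and method name below is made up for illustration — the point is only the ordering (wait for the fabric handoff, then hand preparation to the vendor driver), not an actual kubelet or DRA gRPC API:

```go
package proxydriver

import (
	"context"
	"fmt"
)

// FabricController stands in for whatever control-plane component attaches a
// disaggregated device to the node (hypothetical interface).
type FabricController interface {
	// WaitForAttachment blocks until the named device has been handed off
	// to the node-local driver, or the context is cancelled.
	WaitForAttachment(ctx context.Context, deviceName string) error
}

// VendorDriver stands in for the underlying node-local DRA driver (e.g. a GPU
// driver) that registers with the proxy instead of directly with kubelet.
type VendorDriver interface {
	PrepareDevice(ctx context.Context, deviceName string) error
}

// ProxyDriver sits between kubelet and the vendor driver. Because it only
// reports success once the fabric handoff has completed, the scheduler can
// simply proceed with binding; no "fail then reschedule" round trip is needed,
// and the device allocated for the claim stays the same device on the node.
type ProxyDriver struct {
	fabric FabricController
	vendor VendorDriver
}

func (p *ProxyDriver) PrepareDevice(ctx context.Context, deviceName string) error {
	if err := p.fabric.WaitForAttachment(ctx, deviceName); err != nil {
		return fmt.Errorf("device %s was not attached: %w", deviceName, err)
	}
	return p.vendor.PrepareDevice(ctx, deviceName)
}
```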
Wasn't the decision to favor devices without binding conditions? That's not in the code yet, is it?
I had started writing an integration test for this and then simplified it so that it only uses a single device. You can find the ground work (reusable also for other features) in #131869. The Device Binding Conditions test is in https://github.com/pohly/kubernetes/commits/dra-device-binding-conditions/
I think I outlined in some previous message all the various edge cases that can occur once we consider more complex scenarios (multiple pods referencing the same claim; binding a pod fails after updating the ResourceClaim; and probably more that I don't remember right now). Those are scenarios which need integration tests. A unit test alone is not enough because we might make incorrect assumptions about what the right behavior of the plugin needs to be and what the scheduler then does. Having those tests before merging would be nice, otherwise they are needed before beta and then should be listed in the KEP's test plan.
Wasn't the decision to favor devices without binding conditions? That's not in the code yet, is it?
Yes, you're right, the decision was to favor devices without binding conditions.
That logic has already been implemented in GatherPools() within "staging/src/k8s.io/dynamic-resource-allocation/structured/pools.go".
I'll check the tests you implemented since they're not working properly. I'll also try the dra integration test.
There's a commented out "without-binding" device. My expectation was that the first claim would allocate that, but it ended up getting "with-binding" instead.
I wasn't expecting these to be mixed in a single ResourceSlice. As it stands, each slice is evaluated for its own BindingConditions, so this doesn't work for this test.
It seems appropriate to change it to evaluate for each device. Is that correct?
I updated the implementation to handle mixed conditions, and also confirmed that the dra test prefers the "without-binding" device.
I would appreciate it if you could check the test results and the implementation of the allocator and GatherPools().
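As a quick illustration of the per-device behaviour (placeholder types, not the real API): a single ResourceSlice may now mix devices with and without binding conditions, and each device is judged on its own rather than per slice:

```go
package sketch

// Placeholder types for this illustration only.
type Device struct {
	Name              string
	BindingConditions []string
}

type ResourceSlice struct {
	Devices []Device
}

// devicesNeedingBindingWait returns the devices in a slice whose allocation
// has to wait in PreBind; devices without binding conditions in the same
// slice are unaffected.
func devicesNeedingBindingWait(slice ResourceSlice) []Device {
	var waiting []Device
	for _, d := range slice.Devices {
		if len(d.BindingConditions) > 0 {
			waiting = append(waiting, d)
		}
	}
	return waiting
}
```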
Having those tests before merging would be nice, otherwise they are needed before beta and then should be listed in the KEP's test plan.
Thank you for highlighting those edge cases. I understand the importance of integration tests for covering these complex scenarios. I will prepare additional integration tests to address them, and I will ensure they are ready before the merge. I appreciate your patience while I create them.
I wasn't expecting these to be mixed in a single ResourceSlice. As it stands, each slice is evaluated for its own BindingConditions, so this doesn't work for this test.
We could make that assumption, as long as we document it properly. It might even make sense if it keeps the implementation simpler - I need to look at that.
The "fail then reschedule" model is similar to how auto scaler has worked in the past. There, a pod fails to schedule, auto scaler scales the cluster, and we hope the pod gets the new node. That pattern has been less than ideal. It leads to extra latency, it leads to pods not getting scheduled as expected, it makes pre-planning of multi-node workloads very difficult, and is generally a poor user experience. I don't think we should model this new functionality on that. I really urge you to explore a couple alternatives to that pattern. Key requirements would be a clean experience for the end user (no manifest changes needed to use local or disaggregated devices), and a deterministic scheduling flow that doesn't require rescheduling. Two options I can think of off hand:
The first one has the benefit of minimal changes to the underlying DRA drivers - they simply need to be able to be pointed to a different place for kubelet registration and for the API server. Otherwise they remain the same. The second requires integration work for any specific driver to work with this platform, so it seems less desirable.

One last note. The current upstream API doesn't itself encode the "fail then reschedule" pattern; it enables it, but it is not the expected pattern of usage. It's our specific, motivating downstream implementation that is using the API in that way. But without other motivating examples, it's hard to say we should build an API that's "not quite right" for that use case. I suspect the API could be used for some other things. As @pohly mentioned, gang scheduling. It could also possibly be used to "accumulate" devices - grab one "for now" on a node and wait for more to be released by other workloads. That has its own set of possible deadlocks, but those may be solvable. I am not sure we have a strong need to pursue those yet, but it could be useful to get something into alpha so we can experiment with them.
Thanks for the feedback. I’d like to clarify one important point regarding the original intent behind the KEP.

The “fail then reschedule” pattern described in the KEP was not meant to treat binding failure as a signal for attachment completion. Rather, our goal was to support fabric-attached devices in a way that minimizes the need for significant changes to existing vendor drivers. At the time, modeling the failure as a fallback mechanism allowed us to prototype the feature without requiring a new coordination protocol between node-local and fabric-aware components. Specifically, the idea was that once a device is attached via the fabric, it would become usable through a subsequent scheduling cycle.

That said, I now see how the current KEP text could be interpreted as endorsing this pattern as the expected or recommended flow. That was not the intention, and I agree that this needs to be clarified. I'm planning to revise the KEP to better reflect the intended design goals and to avoid promoting “fail then reschedule” as the default model. I’d appreciate input on how best to describe the alternatives — such as proxy drivers — and how to frame the fallback behavior in a way that doesn’t encourage misuse.
I also recognize that the value of the success path introduced by BindingConditions may not come across in the current KEP text. We're still exploring how best to describe these scenarios and would appreciate any feedback on whether they seem meaningful and how they might be framed in the KEP.
Jumping in late, as much was written since then. I fully agree with you, John and others that it seems to be an anti-pattern. And there seems to be agreement that we don't want to proceed with that for beta. My mental model for whether we want to proceed with something for Alpha or not [in general, not just in this particular case] is:
I think that the "we roughly know how the end state looks" criterion is the crucial bit here in this particular context. It's just not true here. There are ideas (like the proxy driver), but we didn't fully explore them and didn't make any decisions. So we don't even know how much we would need to rework things to achieve that state. I didn't have time to really think about it, but the proxy driver idea seems the most promising to me and worth exploring further.
+100 to this - especially given we're actively thinking about how to change that, exactly because of the pains that John described above, and a few others (e.g. worse efficiency, workload disruptions, and more).
I agree with that, and I think we still can try to come up with a sensible solution for beta now. We probably all agree that there is no straightforward solution considering the current scheduling model, but we are working on addressing this and similar problems under the umbrella of workload-aware scheduling. Even though we don't have clarity on how things should look, we still can think about what approach would be best here.

One of the possibilities being considered is extending the scheduling process with a phase where the workload is planned, but not bound yet. In fact, this is how many other schedulers already work, so we could safely assume we would need such a phase. My question is whether it would be possible to "reserve" the attachable resources somehow before the attachment is eventually "requested", which could be a part of such a planning phase. I truly hope it is, because otherwise we wouldn't be able to schedule workloads ahead of time and perform any scheduling optimizations toward finding the best pod placement.

So the second question is whether we're able to construct the ResourceSlice offering for a not-yet-attached device that would represent the potentially-attached device with all its attributes needed for scheduling. In other words, would both ResourceSlices be similar to each other before and after an attachment? Finally, do we really need to reconstruct the ResourceSlice after attachment, and why? We already see that the desire is for both ResourceSlices to ideally be almost identical, so I'm not sure if we can avoid building a proxy device plugin around it. I hope it would be possible at all to mimic a not-yet-attached device and that the proxy could be generic rather than specific to the device plugin.

If we find answers to those questions, it's still not clear whether the attachment should be part of the binding phase, as there are several alternatives, although a mechanism similar to the binding conditions may be needed anyway.
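As a thought experiment for the questions above (every type, attribute, and condition name below is invented for illustration), a not-yet-attached device could be advertised with exactly the scheduling-relevant attributes of the attached device, with only the binding conditions differing before and after the handoff:

```go
package sketch

// Device is a placeholder for a device entry in a ResourceSlice.
type Device struct {
	Name              string
	Attributes        map[string]string
	BindingConditions []string
}

// The device as advertised while it is still only reachable over the fabric:
// the attributes the scheduler needs are already present, plus a condition
// that gates binding until the attachment completes.
var notYetAttached = Device{
	Name:              "gpu-0",
	Attributes:        map[string]string{"vendor.example.com/model": "a100"},
	BindingConditions: []string{"fabric.example.com/attached"},
}

// The same device after the handoff to the node-local driver; ideally only
// the binding conditions disappear, so no rescheduling is required.
var attached = Device{
	Name:       "gpu-0",
	Attributes: map[string]string{"vendor.example.com/model": "a100"},
}
```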
Thanks again for the thoughtful discussion. I’d like to clarify my current thinking and how I see the path forward.

While much of the conversation has (rightfully) focused on the modeling of fabric-attached devices, I believe the core mechanism proposed — BindingConditions — has broader utility and should be evaluated on its own merits. It provides a way to defer binding until readiness is confirmed, which can improve scheduling reliability in a variety of scenarios, not just for fabric devices.

I fully agree that the “fail then reschedule” pattern is problematic and should not be encouraged. I’ll revise the KEP to make that clear, and to better separate the concerns of the mechanism itself from the specific device models it might support. At the same time, I recognize that the architectural questions around proxy drivers and planning phases are important and worth exploring. My hope is that we can continue those discussions in parallel, while also reviewing the current implementation of BindingConditions as a self-contained feature.

The current KEP still reflects the earlier fail-then-reschedule model, so I’ll be updating it to remove that framing and align it with the direction we’re now discussing. I’ll share the revised version shortly.
I think we should focus on updating the KEP first, especially reformulating the purpose and defining which problem it solves. Doing things in the right order should help us to review the implementation and ask the right questions.
How important is solving the problem of attachable devices? Even if it's not a priority now, I think it's very important to explore in the context of changes we plan to make in scheduling.
What type of PR is this?
/kind feature
What this PR does / why we need it:
This PR implements KEP-5007, DRA Device Binding Conditions. This feature ensures that the scheduler waits in the PreBind phase until any DRA devices that need preparation are ready.
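A rough sketch of the waiting semantics this adds to PreBind; the getConditions accessor, condition lists, and timeout are placeholders, and the real plugin reacts to ResourceClaim updates rather than polling, but wait.PollUntilContextTimeout and meta.IsStatusConditionTrue are existing apimachinery helpers:

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/api/meta"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForBindingConditions checks the conditions reported for an allocated
// device (via the ResourceClaim status) and only lets binding proceed once
// all binding conditions are true. It fails early if any binding failure
// condition becomes true, and gives up when the timeout expires.
func waitForBindingConditions(ctx context.Context,
	getConditions func(ctx context.Context) ([]metav1.Condition, error),
	bindingConditions, bindingFailureConditions []string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			conds, err := getConditions(ctx)
			if err != nil {
				return false, err
			}
			for _, c := range bindingFailureConditions {
				if meta.IsStatusConditionTrue(conds, c) {
					return false, fmt.Errorf("binding failed: condition %q is true", c)
				}
			}
			for _, c := range bindingConditions {
				if !meta.IsStatusConditionTrue(conds, c) {
					return false, nil // not ready yet, keep waiting
				}
			}
			return true, nil // all binding conditions met, proceed to Bind
		})
}
```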
Which issue(s) this PR fixes:
Related to kubernetes/enhancements#5007
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: