Cannot run patchesStrategicMerge on CRDs schema under Versions array #113223
/sig api-machinery

/wg api-expression
Let me know if this issue makes sense, and whether I should go ahead with writing a KEP for this API change.
I don't think it's a breaking change, since it did not change behavior for the v1beta1 endpoint. I would expect a single actor to define the versions for the CRD, since they also have to define the conversion between them. Independent ownership of different versions, or of parts of the schema, doesn't seem like a good thing to encourage.
I understand your reluctance about this change. It's indeed a good point that partial ownership would become possible. We are actually using this schema change locally in our repository to be able to build complex schemas in multiple kustomize steps before pushing them to the Kubernetes API. I think this is a valid use case, but I see that there would be unwanted side effects if we pushed it to the main Kubernetes schema as well. I suppose this possibility will have to stay local to our build chain?
If you're applying patches locally to construct a schema that is written to the server by a single actor, I'd suggest using a different patch type than strategic merge patch. You can get very fine-grained control of where and how the patch is applied with the json-patch type (https://kubectl.docs.kubernetes.io/references/kustomize/builtins/#_patchesjson6902_).
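For example, a `patchesJson6902` entry that adds a single property to a CRD schema could look like the following sketch (the CRD name and the `bar` property are made up for illustration):

```yaml
# kustomization.yaml
resources:
  - crd.yaml
patchesJson6902:
  - target:
      group: apiextensions.k8s.io
      version: v1
      kind: CustomResourceDefinition
      name: foos.example.com   # assumed CRD name
    patch: |-
      # add one property to the first version's schema
      - op: add
        path: /spec/versions/0/schema/openAPIV3Schema/properties/bar
        value:
          type: string
```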
Yes, we have been using this as well, but here we are trying to include metadata in every field of our CRD. To be more precise, we are trying to add OpenAPI specification extensions which are not supported by k8s yet. So we build two distinct schema files: first the CRD, which will be injected into k8s, and then a second one with the extensions attached. Using a JSON patch it would be very verbose to do so, while with the strategic merge patch the patch is readable and easy to manage.
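To illustrate the contrast, a strategic merge patch attaching an extension stays close to the shape of the schema itself; the sketch below is hypothetical (`x-ui-hints` stands in for whatever vendor extension is being attached), and this is exactly the style that currently fails on the `versions` array:

```yaml
# patch.yaml -- strategic merge patch attaching an extension to one field
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com   # assumed CRD name
spec:
  versions:
    - name: v1
      schema:
        openAPIV3Schema:
          properties:
            bar:
              x-ui-hints: shown-in-summary   # made-up extension
```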
/triage accepted

Thanks, Jordan.
This issue has not been updated in over 1 year, and should be re-triaged.

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted
/triage accepted

/close as working as intended
This propagates the label constraints of Linode resources to their associated CustomResourceDefinitions via the Kubernetes Validation Rules feature. When a custom resource is created, the Kubernetes object name is validated against the label constraints of its backing Linode resources. This allows CAPL-managed resources to maintain a human-readable naming scheme between their Kubernetes representation and the backing Linode implementation. Validation rules are implemented via Kustomize JSON patches due to limitations with Kubebuilder and Strategic Merge Patching with CRDs in Kubernetes. See:

- https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#validation-rules
- kubernetes/kubernetes#74620
- kubernetes-sigs/kubebuilder#1074
- kubernetes/kubernetes#113223
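Such a patch might look roughly like this (a sketch, not CAPL's actual manifest: the CRD name, version index, and length limit are placeholders; CEL rules at the root of a CRD schema may access `metadata.name`):

```yaml
# kustomization.yaml -- adding a CEL validation rule to a CRD via a JSON patch
patchesJson6902:
  - target:
      group: apiextensions.k8s.io
      version: v1
      kind: CustomResourceDefinition
      name: linodemachines.infrastructure.cluster.x-k8s.io   # assumed name
    patch: |-
      # validate the object name at admission time
      - op: add
        path: /spec/versions/0/schema/openAPIV3Schema/x-kubernetes-validations
        value:
          - rule: self.metadata.name.size() <= 63
            message: object name must be at most 63 characters
```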
What happened?

When trying to use `patchesStrategicMerge` on the CRD schema under the `spec/versions` array, it fails and replaces the whole array, because `Versions` is missing a `mergeKey` and `patchStrategy`. `patchesStrategicMerge` used to work with the `spec.version` and `spec.validation` fields, but those are now deprecated.
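For comparison, list fields that do declare a merge key, such as a Pod's `containers` (merge key `name`), are merged element by element rather than replaced, which is the behavior this issue asks for on `versions`. A minimal sketch with a made-up Deployment:

```yaml
# base: a Deployment with two containers
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  template:
    spec:
      containers:
        - name: app
          image: app:v1
        - name: sidecar
          image: sidecar:v1
---
# strategic merge patch: only overrides the app container's image;
# sidecar survives because containers are merged by their name key
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  template:
    spec:
      containers:
        - name: app
          image: app:v2
```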
What did you expect to happen?

`patchesStrategicMerge` should be usable with the new `versions` array.

How can we reproduce it (as minimally and precisely as possible)?
With the `base.yaml` and `kustomization.yaml` below, `bar` is missing from the result of `kustomize build bug/`.
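A minimal sketch that reproduces the reported behavior (the group, kind, and the `baz` property added by the patch are assumptions; `bar` is the property that gets lost):

```yaml
# bug/base.yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  names:
    kind: Foo
    plural: foos
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            bar:
              type: string
---
# bug/patch.yaml -- strategic merge patch touching the same version
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  versions:
    - name: v1
      schema:
        openAPIV3Schema:
          type: object
          properties:
            baz:
              type: string
---
# bug/kustomization.yaml
resources:
  - base.yaml
patchesStrategicMerge:
  - patch.yaml
```

Because `versions` has no merge key, the patch's array replaces the base's wholesale: the output contains only the patch's version entry, so `served`, `storage`, and `bar` are all gone.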
Anything else we need to know?
This might be a breaking change if people rely on `patchesStrategicMerge` to replace their CRDs' versions content instead of adding/merging.

Kubernetes version
All versions supporting `apiextensions.k8s.io/v1`.

Cloud provider
--
OS version
No response
Install tools
No response
Container runtime (CRI) and version (if applicable)
No response
Related plugins (CNI, CSI, ...) and versions (if applicable)
No response