The Kubebuilder Book
Users of Kubernetes
Users of Kubernetes will develop a deeper understanding of Kubernetes through learning the
fundamental concepts behind how APIs are designed and implemented. This book will teach
readers how to develop their own Kubernetes APIs and the principles from which the core
Kubernetes APIs are designed.
Including:
API extension developers will learn the principles and concepts behind implementing
canonical Kubernetes APIs, as well as simple tools and libraries for rapid execution. This book
covers pitfalls and misconceptions that extension developers commonly encounter.
Including:
This approach has fostered a rich ecosystem of tools and libraries for working with
Kubernetes APIs.
Users work with the APIs by declaring objects as YAML or JSON config, and by using common
tooling to manage those objects.
Building services as Kubernetes APIs provides many advantages to plain old REST, including:
Developers may build and publish their own Kubernetes APIs for installation into running
Kubernetes clusters.
Contribution
If you would like to contribute to either this book or the code, please be so kind as to read our
Contribution guidelines first.
Resources
Repository: sigs.k8s.io/kubebuilder
Quick Start
This Quick Start guide will cover:
Creating a project
Creating an API
Running locally
Running in-cluster
Prerequisites
go version v1.20.0+
docker version 17.03+.
kubectl version v1.11.3+.
Access to a Kubernetes v1.11.3+ cluster.
Projects created by Kubebuilder contain a Makefile that will install tools at versions
defined at creation time. Those tools are:
kustomize
controller-gen
The versions defined in the Makefile and go.mod files are the versions that have been tested,
and it is therefore recommended to use the specified versions.
Installation
Install kubebuilder:
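One common way to install it, following the quick start (treat this as a sketch -- the download URL and install location may change, so check the Kubebuilder releases page):

# download kubebuilder and install locally
curl -L -o kubebuilder "https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)"
chmod +x kubebuilder && mv kubebuilder /usr/local/bin/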
mkdir -p ~/projects/guestbook
cd ~/projects/guestbook
kubebuilder init --domain my.domain --repo my.domain/guestbook
Developing in $GOPATH
If your project is initialized within GOPATH , the implicitly called go mod init will
interpolate the module path for you. Otherwise --repo=<module path> must be set.
Create an API
Run the following command to create a new API (group/version) as webapp/v1 and the new
Kind (CRD) Guestbook on it:
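With the group, version, and Kind named above, the command is:

kubebuilder create api --group webapp --version v1 --kind Guestbook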
Press Options
If you press y for Create Resource [y/n] and for Create Controller [y/n] then this will
create the files api/v1/guestbook_types.go where the API is defined and the
internal/controllers/guestbook_controller.go where the reconciliation business
logic is implemented for this Kind(CRD).
OPTIONAL: Edit the API definition and the reconciliation business logic. For more info see
Designing an API and What’s in a Controller.
If you are editing the API definitions, generate the manifests such as Custom Resources (CRs)
or Custom Resource Definitions (CRDs) using:
make manifests
Context Used
Your controller will automatically use the current context in your kubeconfig file (i.e.
whatever cluster kubectl cluster-info shows).
Install the CRDs into the cluster:
make install
Run your controller (this will run in the foreground, so switch to a new terminal if you want to
leave it running):
make run
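If you want to run the controller in the cluster instead of locally, build and push your image and then deploy it (the same commands appear again in the CronJob tutorial below):

make docker-build docker-push IMG=<some-registry>/<project-name>:tag
make deploy IMG=<some-registry>/<project-name>:tag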
registry permission
The image must be published to the personal registry you specified, and your working
environment must have permission to pull it from that registry. Make sure you have the
proper permissions to the registry if the above commands don't work.
RBAC errors
If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or
be logged in as admin. See Prerequisites for using Kubernetes RBAC on GKE cluster
v1.11.x and older, which may apply to your case.
Uninstall CRDs
To delete your CRDs from the cluster:
make uninstall
Undeploy controller
Undeploy the controller to the cluster:
make undeploy
Next Step
Now, see the architecture concept diagram for a better overview, then follow the CronJob
tutorial to better understand how it all works by developing a demo example project.
Ensure that you check out the Deploy Image Plugin. This plugin allows users to scaffold
API/Controllers to deploy and manage an Operand (image) on the cluster following the
guidelines and best practices. It abstracts the complexities of achieving this goal while
allowing users to customize the generated code.
Let’s pretend (and sure, this is a teensy bit contrived) that we’ve finally gotten tired of the
maintenance burden of the non-Kubebuilder implementation of the CronJob controller in
Kubernetes, and we’d like to rewrite it using Kubebuilder.
The job (no pun intended) of the CronJob controller is to run one-off tasks on the Kubernetes
cluster at regular intervals. It does this by building on top of the Job controller, whose task is to
run one-off tasks once, seeing them through to completion.
Instead of trying to tackle rewriting the Job controller as well, we’ll use this as an opportunity
to see how to interact with external types.
Note that most of this tutorial is generated from literate Go files that live in the book
source directory: docs/book/src/cronjob-tutorial/testdata. The full, runnable project lives
in project, while intermediate files live directly under the testdata directory.
Your project's name defaults to that of your current working directory. You can pass
--project-name=<dns1123-label-string> to set a different project name.
Now that we’ve got a project in place, let’s take a look at what Kubebuilder has scaffolded for
us so far...
Developing in $GOPATH
If your project is initialized within GOPATH , the implicitly called go mod init will
interpolate the module path for you. Otherwise --repo=<module path> must be set.
Build Infrastructure
First up, basic infrastructure for building your project:
Each other directory contains a different piece of configuration, refactored out into its own
base:
config/rbac : permissions required to run your controllers under their own service
account
The Entrypoint
Last, but certainly not least, Kubebuilder scaffolds out the basic entrypoint of our project:
main.go . Let’s take a look at that next...
import (
"flag"
"fmt"
"os"
// Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
// to ensure that exec-entrypoint and run can make use of them.
_ "k8s.io/client-go/plugin/pkg/client/auth"
"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/cache"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
// +kubebuilder:scaffold:imports
)
Every set of controllers needs a Scheme, which provides mappings between Kinds and their
corresponding Go types. We’ll talk a bit more about Kinds when we write our API definition, so
just keep this in mind for later.
var (
scheme = runtime.NewScheme()
setupLog = ctrl.Log.WithName("setup")
)
func init() {
utilruntime.Must(clientgoscheme.AddToScheme(scheme))
//+kubebuilder:scaffold:scheme
}
We instantiate a manager, which keeps track of running all of our controllers, as well as
setting up shared caches and clients to the API server (notice we tell the manager about
our Scheme).
We run our manager, which in turn runs all of our controllers and webhooks. The
manager is set up to run until it receives a graceful shutdown signal. This way, when
we’re running on Kubernetes, we behave nicely with graceful pod termination.
While we don’t have anything to run just yet, remember where that
+kubebuilder:scaffold:builder comment is -- things’ll get interesting there soon.
func main() {
var metricsAddr string
var enableLeaderElection bool
var probeAddr string
flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "leader-elect", false,
	"Enable leader election for controller manager. "+
		"Enabling this will ensure there is only one active controller manager.")
opts := zap.Options{
Development: true,
}
opts.BindFlags(flag.CommandLine)
flag.Parse()
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
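The scaffolded main.go then builds the manager from those flags. A trimmed sketch of that call (the generated file also wires up the metrics and health-probe addresses and a leader-election ID, which we omit here):

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
	Scheme:         scheme,
	LeaderElection: enableLeaderElection,
})
if err != nil {
	setupLog.Error(err, "unable to start manager")
	os.Exit(1)
}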
Note that the Manager can restrict the namespace that all controllers will watch for resources
by:
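A sketch of that restriction using controller-runtime's cache options (the DefaultNamespaces field assumes controller-runtime v0.16+, and the namespace name here is only an example):

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
	Scheme: scheme,
	Cache: cache.Options{
		DefaultNamespaces: map[string]cache.Config{
			"my-operator-namespace": {},
		},
	},
})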
The above example will change the scope of your project to a single Namespace. In this
scenario, it is also suggested to restrict the provided authorization to this namespace by
replacing the default ClusterRole and ClusterRoleBinding with a Role and RoleBinding,
respectively. For further information see the Kubernetes documentation about Using RBAC
Authorization.
// +kubebuilder:scaffold:builder
setupLog.Info("starting manager")
if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
setupLog.Error(err, "problem running manager")
os.Exit(1)
}
}
With that out of the way, we can get on to scaffolding our API!
When we talk about APIs in Kubernetes, we often use 4 terms: groups, versions, kinds, and
resources.
You’ll also hear mention of resources on occasion. A resource is simply a use of a Kind in the
API. Often, there’s a one-to-one mapping between Kinds and resources. For instance, the
pods resource corresponds to the Pod Kind. However, sometimes, the same Kind may be
returned by multiple resources. For instance, the Scale Kind is returned by all scale
subresources, like deployments/scale or replicasets/scale . This is what allows the
Kubernetes HorizontalPodAutoscaler to interact with different resources. With CRDs, however,
each Kind will correspond to a single resource.
Notice that resources are always lowercase, and by convention are the lowercase form of the
Kind.
Now that we have our terminology straight, we can actually create our API!
The goal of this command is to create a Custom Resource (CR) and Custom Resource Definition
(CRD) for our Kind(s). To learn more, see Extend the Kubernetes API with
CustomResourceDefinitions.
Our APIs and resources represent our solutions on the clusters. Basically, the CRDs are
definitions of our customized objects, and the CRs are instances of them.
In this way, we can create the App CRD, which will have its own controller responsible for
things like creating Deployments that contain the App and creating Services to access it, and
so on. Similarly, we could create a CRD to represent the DB, and deploy a controller that
would manage DB instances.
Then, we can later construct a new &CronJob{} given some JSON from the API server that
says
{
"kind": "CronJob",
"apiVersion": "batch.tutorial.kubebuilder.io/v1",
...
}
The first time we call this command for each group-version, it will create a directory for the
new group-version.
It has also added a file for our CronJob Kind, api/v1/cronjob_types.go . Each time we call
the command with a different kind, it’ll add a corresponding new file.
Let’s take a look at what we’ve been given out of the box, then we can move on to filling it out.
$ vim emptyapi.go
We start out simply enough: we import the meta/v1 API group, which is not normally exposed
by itself, but instead contains metadata common to all Kubernetes Kinds.
package v1
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
Next, we define types for the Spec and Status of our Kind. Kubernetes functions by reconciling
desired state ( Spec ) with actual cluster state (other objects’ Status ) and external state, and
then recording what it observed ( Status ). Thus, every functional object includes spec and
status. A few types, like ConfigMap don’t follow this pattern, since they don’t encode desired
state, but most types do.
// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.
Next, we define the types corresponding to actual Kinds, CronJob and CronJobList .
CronJob is our root type, and describes the CronJob kind. Like all Kubernetes objects, it
contains TypeMeta (which describes API version and Kind), and also contains ObjectMeta ,
which holds things like name, namespace, and labels.
CronJobList is simply a container for multiple CronJob s. It’s the Kind used in bulk
operations, like LIST.
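The scaffolded definitions typically look like this (a sketch matching standard Kubebuilder scaffolding):

// CronJob is the Schema for the cronjobs API
type CronJob struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   CronJobSpec   `json:"spec,omitempty"`
	Status CronJobStatus `json:"status,omitempty"`
}

// CronJobList contains a list of CronJob
type CronJobList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []CronJob `json:"items"`
}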
In general, we never modify either of these -- all modifications go in either Spec or Status.
That little +kubebuilder:object:root comment is called a marker. We’ll see more of them in
a bit, but know that they act as extra metadata, telling controller-tools (our code and YAML
generator) extra information. This particular one tells the object generator that this type
represents a Kind. Then, the object generator generates an implementation of the
runtime.Object interface for us, which is the standard interface that all types representing
Kinds must implement.
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
//+kubebuilder:object:root=true
Finally, we add the Go types to the API group. This allows us to add the types in this API group
to any Scheme.
func init() {
SchemeBuilder.Register(&CronJob{}, &CronJobList{})
}
Now that we’ve seen the basic structure, let’s fill it out!
Designing an API
In Kubernetes, we have a few rules for how we design APIs. Namely, all serialized fields must
be camelCase , so we use JSON struct tags to specify this. We can also use the omitempty
struct tag to mark that a field should be omitted from serialization when empty.
Fields may use most of the primitive types. Numbers are the exception: for API compatibility
purposes, we accept three forms of numbers: int32 and int64 for integers, and
resource.Quantity for decimals.
There’s one other special type that we use: metav1.Time . This functions identically to
time.Time , except that it has a fixed, portable serialization format.
With that out of the way, let’s take a look at what our CronJob object looks like!
$ vim project/api/v1/cronjob_types.go
// Imports (hidden) ◀
First, let’s take a look at our spec. As we discussed before, spec holds desired state, so any
“inputs” to our controller go here.
We’ll also want a few extras, which will make our users’ lives easier:
A deadline for starting jobs (if we miss this deadline, we’ll just wait till the next scheduled
time)
What to do if multiple jobs would run at once (do we wait? stop the old one? run both?)
A way to pause the running of a CronJob, in case something’s wrong with it
Limits on old job history
Remember, since we never read our own status, we need to have some other way to keep
track of whether a job has run. We can use at least one old job to do this.
We’ll use several markers ( // +comment ) to specify additional metadata. These will be used by
controller-tools when generating our CRD manifest. As we’ll see in a bit, controller-tools will
also use GoDoc to form descriptions for the fields.
// CronJobSpec defines the desired state of CronJob
type CronJobSpec struct {
//+kubebuilder:validation:MinLength=0
//+kubebuilder:validation:Minimum=0
//+kubebuilder:validation:Minimum=0
//+kubebuilder:validation:Minimum=0
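The fields those markers attach to were collapsed above; here is a sketch of the full spec using the field names this tutorial relies on later (batchv1 here aliases k8s.io/api/batch/v1, which the hidden imports provide):

// CronJobSpec defines the desired state of CronJob
type CronJobSpec struct {
	//+kubebuilder:validation:MinLength=0

	// The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron.
	Schedule string `json:"schedule"`

	//+kubebuilder:validation:Minimum=0

	// Optional deadline in seconds for starting the job if it misses its
	// scheduled time for any reason.
	// +optional
	StartingDeadlineSeconds *int64 `json:"startingDeadlineSeconds,omitempty"`

	// Specifies how to treat concurrent executions of a Job.
	// +optional
	ConcurrencyPolicy ConcurrencyPolicy `json:"concurrencyPolicy,omitempty"`

	// This flag tells the controller to suspend subsequent executions.
	// +optional
	Suspend *bool `json:"suspend,omitempty"`

	// Specifies the job that will be created when executing a CronJob.
	JobTemplate batchv1.JobTemplateSpec `json:"jobTemplate"`

	//+kubebuilder:validation:Minimum=0

	// The number of successful finished jobs to retain.
	// +optional
	SuccessfulJobsHistoryLimit *int32 `json:"successfulJobsHistoryLimit,omitempty"`

	//+kubebuilder:validation:Minimum=0

	// The number of failed finished jobs to retain.
	// +optional
	FailedJobsHistoryLimit *int32 `json:"failedJobsHistoryLimit,omitempty"`
}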
We define a custom type to hold our concurrency policy. It’s actually just a string under the
hood, but the type gives extra documentation, and allows us to attach validation on the type
instead of the field, making the validation more easily reusable.
// ConcurrencyPolicy describes how the job will be handled.
// Only one of the following concurrent policies may be specified.
// If none of the following policies is specified, the default one
// is AllowConcurrent.
// +kubebuilder:validation:Enum=Allow;Forbid;Replace
type ConcurrencyPolicy string
const (
// AllowConcurrent allows CronJobs to run concurrently.
AllowConcurrent ConcurrencyPolicy = "Allow"
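	// The remaining values implied by the Allow;Forbid;Replace enum marker above:

	// ForbidConcurrent forbids concurrent runs, skipping the next run if the
	// previous run hasn't finished yet.
	ForbidConcurrent ConcurrencyPolicy = "Forbid"

	// ReplaceConcurrent cancels the currently running job and replaces it with a new one.
	ReplaceConcurrent ConcurrencyPolicy = "Replace"
)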
Next, let’s design our status, which holds observed state. It contains any information we want
users or other controllers to be able to easily obtain.
We’ll keep a list of actively running jobs, as well as the last time that we successfully ran our
job. Notice that we use metav1.Time instead of time.Time to get the stable serialization, as
mentioned above.
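The scaffolded status opens roughly like this -- the Active field uses corev1.ObjectReference (from k8s.io/api/core/v1), matching how the controller populates it later:

// CronJobStatus defines the observed state of CronJob
type CronJobStatus struct {
	// A list of pointers to currently running jobs.
	// +optional
	Active []corev1.ObjectReference `json:"active,omitempty"`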
// Information when was the last time the job was successfully scheduled.
// +optional
LastScheduleTime *metav1.Time `json:"lastScheduleTime,omitempty"`
}
Finally, we have the rest of the boilerplate that we’ve already discussed. As previously noted,
we don’t need to change this, except to mark that we want a status subresource, so that we
behave like built-in kubernetes types.
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
Neither of these files ever needs to be edited (the former stays the same and the latter is
autogenerated), but it’s useful to know what’s in them.
groupversion_info.go
groupversion_info.go contains common metadata about the group-version:
$ vim project/api/v1/groupversion_info.go
First, we have some package-level markers that denote that there are Kubernetes objects in
this package, and that this package represents the group batch.tutorial.kubebuilder.io .
The object generator makes use of the former, while the latter is used by the CRD generator
to generate the right metadata for the CRDs it creates from this package.
// Package v1 contains API Schema definitions for the batch v1 API group
// +kubebuilder:object:generate=true
// +groupName=batch.tutorial.kubebuilder.io
package v1
import (
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/controller-runtime/pkg/scheme"
)
Then, we have the commonly useful variables that help us set up our Scheme. Since we need
to use all the types in this package in our controller, it’s helpful (and the convention) to have a
convenient method to add all the types to some other Scheme . SchemeBuilder makes this
easy for us.
var (
	// GroupVersion is group version used to register these objects
	GroupVersion = schema.GroupVersion{Group: "batch.tutorial.kubebuilder.io", Version: "v1"}
zz_generated.deepcopy.go
zz_generated.deepcopy.go contains the autogenerated implementation of the
aforementioned runtime.Object interface, which marks all of our root types as representing
Kinds.
The object generator in controller-tools also generates two other handy methods for each
root type and all its sub-types: DeepCopy and DeepCopyInto .
What’s in a controller?
Controllers are the core of Kubernetes, and of any operator.
It’s a controller’s job to ensure that, for any given object, the actual state of the world (both the
cluster state, and potentially external state like running containers for Kubelet or
loadbalancers for a cloud provider) matches the desired state in the object. Each controller
focuses on one root Kind, but may interact with other Kinds.
In controller-runtime, the logic that implements the reconciling for a specific kind is called a
Reconciler. A reconciler takes the name of an object, and returns whether or not we need to
try again (e.g. in case of errors or periodic controllers, like the HorizontalPodAutoscaler).
$ vim emptycontroller.go
First, we start out with some standard imports. As before, we need the core controller-
runtime library, as well as the client package, and the package for our API types.
package controllers
import (
"context"
"k8s.io/apimachinery/pkg/runtime"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
batchv1 "tutorial.kubebuilder.io/project/api/v1"
)
Next, kubebuilder has scaffolded a basic reconciler struct for us. Pretty much every reconciler
needs to log, and needs to be able to fetch objects, so these are added out of the box.
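The scaffolded struct embeds a client and carries the Scheme, matching how main.go wires it up later:

// CronJobReconciler reconciles a CronJob object
type CronJobReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}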
Most controllers eventually end up running on the cluster, so they need RBAC permissions,
which we specify using controller-tools RBAC markers. These are the bare minimum
permissions needed to run. As we add more functionality, we’ll need to revisit these.
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/status,verbs=get;update;patch
// make manifests
NOTE: If you receive an error, please run the specified command in the error and re-run make
manifests .
Reconcile actually performs the reconciling for a single named object. Our Request just has
a name, but we can use the client to fetch that object from the cache.
We return an empty result and no error, which indicates to controller-runtime that we’ve
successfully reconciled this object and don’t need to try again until there’s some changes.
Most controllers need a logging handle and a context, so we set them up here.
The context is used to allow cancelation of requests, and potentially things like tracing. It’s the
first argument to all client methods. The Background context is just a basic context without
any extra data or timing restrictions.
The logging handle lets us log. controller-runtime uses structured logging through a library
called logr. As we’ll see shortly, logging works by attaching key-value pairs to a static message.
We can pre-assign some pairs at the top of our reconcile method to have those attached to all
log lines in this reconciler.
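Put together, the scaffolded Reconcile starts out roughly like this before we add any logic:

func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = log.FromContext(ctx)

	// your reconciliation logic goes here

	return ctrl.Result{}, nil
}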
Finally, we add this reconciler to the manager, so that it gets started when the manager is
started.
For now, we just note that this reconciler operates on CronJobs. Later, we'll use this to mark
that we care about related objects as well.
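The scaffolded setup looks roughly like this (we'll extend it later to watch Jobs as well):

func (r *CronJobReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&batchv1.CronJob{}).
		Complete(r)
}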
Now that we've seen the basic structure of a reconciler, let's fill out the logic for CronJobs.
Implementing a controller
The basic logic of our CronJob controller is this:
1. Load the named CronJob
2. List all active jobs, and update the status
3. Clean up old jobs according to the history limits
4. Check if we're suspended (and don't do anything else if we are)
5. Get the next scheduled run
6. Run a new job if it's on schedule, not past the deadline, and not blocked by our
concurrency policy
7. Requeue when we either see a running job (done automatically) or it's time for the next
scheduled run.
$ vim project/internal/controller/cronjob_controller.go
We’ll start out with some imports. You’ll see below that we’ll need a few more imports than
those scaffolded for us. We’ll talk about each one when we use it.
package controller
import (
"context"
"fmt"
"sort"
"time"
"github.com/robfig/cron"
kbatch "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
ref "k8s.io/client-go/tools/reference"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"
batchv1 "tutorial.kubebuilder.io/project/api/v1"
)
Next, we’ll need a Clock, which will allow us to fake timing in our tests.
// Clock (hidden) ◀
Notice that we need a few more RBAC permissions -- since we’re creating and managing jobs
now, we’ll need permissions for those, which means adding a couple more markers.
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/finalizers,verbs=update
//+kubebuilder:rbac:groups=batch,resources=jobs,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=batch,resources=jobs/status,verbs=get
We’ll fetch the CronJob using our client. All client methods take a context (to allow for
cancellation) as their first argument, and the object in question as their last. Get is a bit
special, in that it takes a NamespacedName as the middle argument (most don’t have a middle
argument, as we’ll see below).
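A sketch of that fetch, following the tutorial's flow (log here is the handle set up at the top of Reconcile):

var cronJob batchv1.CronJob
if err := r.Get(ctx, req.NamespacedName, &cronJob); err != nil {
	log.Error(err, "unable to fetch CronJob")
	// we'll ignore not-found errors, since they can't be fixed by an immediate
	// requeue (we'll need to wait for a new notification), and we can get them
	// on deleted requests.
	return ctrl.Result{}, client.IgnoreNotFound(err)
}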
To fully update our status, we’ll need to list all child jobs in this namespace that belong to this
CronJob. Similarly to Get, we can use the List method to list the child jobs. Notice that we use
variadic options to set the namespace and field match (which is actually an index lookup that
we set up below).
var childJobs kbatch.JobList
if err := r.List(ctx, &childJobs, client.InNamespace(req.Namespace),
client.MatchingFields{jobOwnerKey: req.Name}); err != nil {
log.Error(err, "unable to list child Jobs")
return ctrl.Result{}, err
}
The reconciler fetches all jobs owned by the cronjob for the status. As our number of
cronjobs increases, looking these up can become quite slow as we have to filter through
all of them. For a more efficient lookup, these jobs will be indexed locally on the
controller's name. A jobOwnerKey field is added to the cached job objects. This key
references the owning controller and functions as the index. Later in this document we
will configure the manager to actually index this field.
Once we have all the jobs we own, we’ll split them into active, successful, and failed jobs,
keeping track of the most recent run so that we can record it in status. Remember, status
should be able to be reconstituted from the state of the world, so it’s generally not a good
idea to read from the status of the root object. Instead, you should reconstruct it every run.
That’s what we’ll do here.
We can check if a job is “finished” and whether it succeeded or failed using status conditions.
We’ll put that logic in a helper to make our code cleaner.
// isJobFinished (hidden) ◀
// getScheduledTimeForJob (hidden) ◀
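// find the active list of jobs (declarations used by the categorization loop below)
var activeJobs []*kbatch.Job
var successfulJobs []*kbatch.Job
var failedJobs []*kbatch.Job
var mostRecentTime *time.Time // find the last run so we can update the status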
for i, job := range childJobs.Items {
_, finishedType := isJobFinished(&job)
switch finishedType {
case "": // ongoing
activeJobs = append(activeJobs, &childJobs.Items[i])
case kbatch.JobFailed:
failedJobs = append(failedJobs, &childJobs.Items[i])
case kbatch.JobComplete:
successfulJobs = append(successfulJobs, &childJobs.Items[i])
}
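	// reconstitute the launch time for this job (via the hidden getScheduledTimeForJob
	// helper above) and track the most recent one. This is a sketch of the lines elided here.
	scheduledTimeForJob, err := getScheduledTimeForJob(&job)
	if err != nil {
		log.Error(err, "unable to parse schedule time for child job", "job", &job)
		continue
	}
	if scheduledTimeForJob != nil {
		if mostRecentTime == nil || mostRecentTime.Before(*scheduledTimeForJob) {
			mostRecentTime = scheduledTimeForJob
		}
	}
}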
if mostRecentTime != nil {
cronJob.Status.LastScheduleTime = &metav1.Time{Time: *mostRecentTime}
} else {
cronJob.Status.LastScheduleTime = nil
}
cronJob.Status.Active = nil
for _, activeJob := range activeJobs {
jobRef, err := ref.GetReference(r.Scheme, activeJob)
if err != nil {
log.Error(err, "unable to make reference to active job", "job",
activeJob)
continue
}
cronJob.Status.Active = append(cronJob.Status.Active, *jobRef)
}
Here, we’ll log how many jobs we observed at a slightly higher logging level, for debugging.
Notice how instead of using a format string, we use a fixed message, and attach key-value
pairs with the extra information. This makes it easier to filter and query log lines.
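In the tutorial that is a single structured log line, for example:

log.V(1).Info("job count", "active jobs", len(activeJobs), "successful jobs", len(successfulJobs), "failed jobs", len(failedJobs))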
Using the data we’ve gathered, we’ll update the status of our CRD. Just like before, we use our
client. To specifically update the status subresource, we’ll use the Status part of the client,
with the Update method.
The status subresource ignores changes to spec, so it’s less likely to conflict with any other
updates, and can have separate permissions.
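A sketch of that update, following the tutorial's flow:

if err := r.Status().Update(ctx, &cronJob); err != nil {
	log.Error(err, "unable to update CronJob status")
	return ctrl.Result{}, err
}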
Once we’ve updated our status, we can move on to ensuring that the status of the world
matches what we want in our spec.
First, we’ll try to clean up old jobs, so that we don’t leave too many lying around.
// NB: deleting these are "best effort" -- if we fail on a particular one,
// we won't requeue just to finish the deleting.
if cronJob.Spec.FailedJobsHistoryLimit != nil {
	sort.Slice(failedJobs, func(i, j int) bool {
		if failedJobs[i].Status.StartTime == nil {
			return failedJobs[j].Status.StartTime != nil
		}
		return failedJobs[i].Status.StartTime.Before(failedJobs[j].Status.StartTime)
	})
	for i, job := range failedJobs {
		if int32(i) >= int32(len(failedJobs))-*cronJob.Spec.FailedJobsHistoryLimit {
			break
		}
		if err := r.Delete(ctx, job, client.PropagationPolicy(metav1.DeletePropagationBackground)); client.IgnoreNotFound(err) != nil {
			log.Error(err, "unable to delete old failed job", "job", job)
		} else {
			log.V(0).Info("deleted old failed job", "job", job)
		}
	}
}
if cronJob.Spec.SuccessfulJobsHistoryLimit != nil {
	sort.Slice(successfulJobs, func(i, j int) bool {
		if successfulJobs[i].Status.StartTime == nil {
			return successfulJobs[j].Status.StartTime != nil
		}
		return successfulJobs[i].Status.StartTime.Before(successfulJobs[j].Status.StartTime)
	})
	for i, job := range successfulJobs {
		if int32(i) >= int32(len(successfulJobs))-*cronJob.Spec.SuccessfulJobsHistoryLimit {
			break
		}
		if err := r.Delete(ctx, job, client.PropagationPolicy(metav1.DeletePropagationBackground)); err != nil {
			log.Error(err, "unable to delete old successful job", "job", job)
		} else {
			log.V(0).Info("deleted old successful job", "job", job)
		}
	}
}
If this object is suspended, we don’t want to run any jobs, so we’ll stop now. This is useful if
something’s broken with the job we’re running and we want to pause runs to investigate or
putz with the cluster, without deleting the object.
if cronJob.Spec.Suspend != nil && *cronJob.Spec.Suspend {
log.V(1).Info("cronjob suspended, skipping")
return ctrl.Result{}, nil
}
If we’re not paused, we’ll need to calculate the next scheduled run, and whether or not we’ve
got a run that we haven’t processed yet.
// getNextSchedule (hidden) ◀
We’ll prep our eventual request to requeue until the next job, and then figure out if we
actually need to run.
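A sketch of that prep, using the hidden getNextSchedule helper and the reconciler's Clock (r.Now()):

missedRun, nextRun, err := getNextSchedule(&cronJob, r.Now())
if err != nil {
	log.Error(err, "unable to figure out CronJob schedule")
	// we don't really care about requeuing until we get an update that
	// fixes the schedule, so don't return an error
	return ctrl.Result{}, nil
}

scheduledResult := ctrl.Result{RequeueAfter: nextRun.Sub(r.Now())} // save this so we can reuse it elsewhere
log = log.WithValues("now", r.Now(), "next run", nextRun)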
6: Run a new job if it's on schedule, not past the deadline, and not blocked by our concurrency policy
If we’ve missed a run, and we’re still within the deadline to start it, we’ll need to run a job.
if missedRun.IsZero() {
log.V(1).Info("no upcoming scheduled times, sleeping until next")
return scheduledResult, nil
}
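If there is a missed run, we also check the starting deadline before launching anything (a sketch, assuming the StartingDeadlineSeconds field from our spec):

log = log.WithValues("current run", missedRun)
tooLate := false
if cronJob.Spec.StartingDeadlineSeconds != nil {
	tooLate = missedRun.Add(time.Duration(*cronJob.Spec.StartingDeadlineSeconds) * time.Second).Before(r.Now())
}
if tooLate {
	log.V(1).Info("missed starting deadline for last run, sleeping till next")
	return scheduledResult, nil
}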
If we actually have to run a job, we’ll need to either wait till existing ones finish, replace the
existing ones, or just add new ones. If our information is out of date due to cache delay, we’ll
get a requeue when we get up-to-date information.
// figure out how to run this job -- concurrency policy might forbid us from
// running multiple at the same time...
if cronJob.Spec.ConcurrencyPolicy == batchv1.ForbidConcurrent && len(activeJobs) > 0 {
	log.V(1).Info("concurrency policy blocks concurrent runs, skipping", "num active", len(activeJobs))
	return scheduledResult, nil
}
Once we’ve figured out what to do with existing jobs, we’ll actually create our desired job
// constructJobForCronJob (hidden) ◀
// actually make the job...
job, err := constructJobForCronJob(&cronJob, missedRun)
if err != nil {
log.Error(err, "unable to construct job from template")
// don't bother requeuing until we get a change to the spec
return scheduledResult, nil
}
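Having constructed the Job, we create it on the cluster (a sketch following the tutorial's flow):

if err := r.Create(ctx, job); err != nil {
	log.Error(err, "unable to create Job for CronJob", "job", job)
	return ctrl.Result{}, err
}

log.V(1).Info("created Job for CronJob run", "job", job)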
7: Requeue when we either see a running job or it's time for the next scheduled run
Finally, we’ll return the result that we prepped above, that says we want to requeue when our
next run would need to occur. This is taken as a maximum deadline -- if something else
changes in between, like our job starts or finishes, we get modified, etc, we might reconcile
again sooner.
// we'll requeue once we see the running job, and update our status
return scheduledResult, nil
}
Setup
Finally, we’ll update our setup. In order to allow our reconciler to quickly look up Jobs by their
owner, we’ll need an index. We declare an index key that we can later use with the client as a
pseudo-field name, and then describe how to extract the indexed value from the Job object.
The indexer will automatically take care of namespaces for us, so we just have to extract the
owner name if the Job has a CronJob owner.
Additionally, we’ll inform the manager that this controller owns some Jobs, so that it will
automatically call Reconcile on the underlying CronJob when a Job changes, is deleted, etc.
var (
jobOwnerKey = ".metadata.controller"
apiGVStr = batchv1.GroupVersion.String()
)
if err := mgr.GetFieldIndexer().IndexField(context.Background(),
&kbatch.Job{}, jobOwnerKey, func(rawObj client.Object) []string {
// grab the job object, extract the owner...
job := rawObj.(*kbatch.Job)
owner := metav1.GetControllerOf(job)
if owner == nil {
return nil
}
// ...make sure it's a CronJob...
if owner.APIVersion != apiGVStr || owner.Kind != "CronJob" {
return nil
}
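	// ...and if so, return it as the single index value for this Job.
	// (A sketch of the lines elided here, closing out the index function.)
	return []string{owner.Name}
}); err != nil {
	return err
}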
return ctrl.NewControllerManagedBy(mgr).
For(&batchv1.CronJob{}).
Owns(&kbatch.Job{}).
Complete(r)
}
That was a doozy, but now we’ve got a working controller. Let’s test against the cluster, then, if
we don’t have any issues, deploy it!
$ vim project/cmd/main.go
// Imports (hidden) ◀
The first difference to notice is that kubebuilder has added the new API group's package
( batchv1 ) to our scheme. This means that we can use those objects in our controller.
If we were using any other CRD, we would have to add its scheme the same way. Built-in
types such as Job have their scheme added by clientgoscheme .
var (
scheme = runtime.NewScheme()
setupLog = ctrl.Log.WithName("setup")
)
func init() {
utilruntime.Must(clientgoscheme.AddToScheme(scheme))
utilruntime.Must(batchv1.AddToScheme(scheme))
//+kubebuilder:scaffold:scheme
}
The other thing that’s changed is that kubebuilder has added a block calling our CronJob
controller’s SetupWithManager method.
func main() {
if err = (&controller.CronJobReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller",
"CronJob")
os.Exit(1)
}
Implementing defaulting/validating
webhooks
If you want to implement admission webhooks for your CRD, the only thing you need to do is
to implement the Defaulter and (or) the Validator interface.
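The scaffolding command for that, as used in this tutorial, looks like:

kubebuilder create webhook --group batch --version v1 --kind CronJob --defaulting --programmatic-validation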
This will scaffold the webhook functions and register your webhook with the manager in your
main.go for you.
$ vim project/api/v1/cronjob_webhook.go
// Go imports (hidden) ◀
Notice that we use kubebuilder markers to generate webhook manifests. This marker is
responsible for generating a mutating webhook manifest.
//+kubebuilder:webhook:path=/mutate-batch-tutorial-kubebuilder-io-v1-cronjob,mutating=true,failurePolicy=fail,groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=create;update,versions=v1,name=mcronjob.kb.io,sideEffects=None,admissionReviewVersions=v1
We use the webhook.Defaulter interface to set defaults to our CRD. A webhook will
automatically be served that calls this defaulting.
The Default method is expected to mutate the receiver, setting the defaults.
var _ webhook.Defaulter = &CronJob{}
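// Default implements webhook.Defaulter so a webhook will be registered for the type.
// (Sketch of the scaffolded method header; the defaulting logic below lives inside it.)
func (r *CronJob) Default() {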
if r.Spec.ConcurrencyPolicy == "" {
r.Spec.ConcurrencyPolicy = AllowConcurrent
}
if r.Spec.Suspend == nil {
r.Spec.Suspend = new(bool)
}
if r.Spec.SuccessfulJobsHistoryLimit == nil {
r.Spec.SuccessfulJobsHistoryLimit = new(int32)
*r.Spec.SuccessfulJobsHistoryLimit = 3
}
if r.Spec.FailedJobsHistoryLimit == nil {
r.Spec.FailedJobsHistoryLimit = new(int32)
*r.Spec.FailedJobsHistoryLimit = 1
}
}
//+kubebuilder:webhook:verbs=create;update;delete,path=/validate-batch-tutorial-kubebuilder-io-v1-cronjob,mutating=false,failurePolicy=fail,groups=batch.tutorial.kubebuilder.io,resources=cronjobs,versions=v1,name=vcronjob.kb.io,sideEffects=None,admissionReviewVersions=v1
We can validate our CRD beyond what’s possible with declarative validation. Generally,
declarative validation should be sufficient, but sometimes more advanced use cases call for
complex validation.
For instance, we’ll see below that we use this to validate a well-formed cron schedule without
making up a long regular expression.
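The tutorial funnels this through a validateCronJob helper that aggregates field errors (validateCronJobName is the tutorial's other helper, which checks the object name length); a sketch:

func (r *CronJob) validateCronJob() error {
	var allErrs field.ErrorList
	if err := r.validateCronJobName(); err != nil {
		allErrs = append(allErrs, err)
	}
	if err := r.validateCronJobSpec(); err != nil {
		allErrs = append(allErrs, err)
	}
	if len(allErrs) == 0 {
		return nil
	}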
	return apierrors.NewInvalid(
		schema.GroupKind{Group: "batch.tutorial.kubebuilder.io", Kind: "CronJob"},
		r.Name, allErrs)
}
Some fields are declaratively validated by OpenAPI schema. You can find kubebuilder
validation markers (prefixed with // +kubebuilder:validation ) in the Designing an API
section. You can find all of the kubebuilder supported markers for declaring validation by
running controller-gen crd -w , or here.
func (r *CronJob) validateCronJobSpec() *field.Error {
// The field helpers from the kubernetes API machinery help us return nicely
// structured validation errors.
return validateScheduleFormat(
r.Spec.Schedule,
field.NewPath("spec").Child("schedule"))
}
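The schedule check itself just delegates to the cron library the controller already uses (a sketch, assuming the hidden imports include github.com/robfig/cron and apimachinery's field package):

func validateScheduleFormat(schedule string, fldPath *field.Path) *field.Error {
	if _, err := cron.ParseStandard(schedule); err != nil {
		return field.Invalid(fldPath, schedule, err.Error())
	}
	return nil
}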
Optional
If you are making any changes to the API definitions, then before proceeding, generate the
manifests such as Custom Resources (CRs) or Custom Resource Definitions (CRDs) with
make manifests
To test out the controller, we can run it locally against the cluster. Before we do so, though,
we’ll need to install our CRDs, as per the quick start. This will automatically update the YAML
manifests using controller-tools, if needed:
make install
Now that we’ve installed our CRDs, we can run the controller against our cluster. This will use
whatever credentials that we connect to the cluster with, so we don’t need to worry about
RBAC just yet.
If you want to run the webhooks locally, you'll have to generate certificates for serving the
webhooks, and place them in the right directory
(/tmp/k8s-webhook-server/serving-certs/tls.{crt,key}, by default).
If you’re not running a local API server, you’ll also need to figure out how to proxy traffic
from the remote cluster to your local webhook server. For this reason, we generally
recommend disabling webhooks when doing your local code-run-test cycle, as we do
below.
export ENABLE_WEBHOOKS=false
make run
You should see logs from the controller about starting up, but it won’t do anything just yet.
apiVersion: batch.tutorial.kubebuilder.io/v1
kind: CronJob
metadata:
labels:
app.kubernetes.io/name: cronjob
app.kubernetes.io/instance: cronjob-sample
app.kubernetes.io/part-of: project
app.kubernetes.io/managed-by: kustomize
app.kubernetes.io/created-by: project
name: cronjob-sample
spec:
schedule: "*/1 * * * *"
startingDeadlineSeconds: 60
concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- date; echo Hello from the Kubernetes cluster
restartPolicy: OnFailure
At this point, you should see a flurry of activity. If you watch the changes, you should see your
cronjob running, and updating status:
Now that we know it’s working, we can run it in the cluster. Stop the make run invocation, and
run
make docker-build docker-push IMG=<some-registry>/<project-name>:tag
make deploy IMG=<some-registry>/<project-name>:tag
registry permission
The image must be published to the personal registry you specified, and your working
environment must have permission to pull it from that registry. Make sure you have the
proper permissions to the registry if the above commands don't work.
If we list cronjobs again like we did before, we should see the controller functioning again!
Deploying cert-manager
We suggest using cert-manager for provisioning the certificates for the webhook server. Other
solutions should also work as long as they put the certificates in the desired location.
cert-manager also has a component called CA Injector, which is responsible for injecting the
CA bundle into the MutatingWebhookConfiguration / ValidatingWebhookConfiguration .
Kind Cluster
It is recommended to develop your webhook with a kind cluster for faster iteration. Why?
cert-manager
You need to follow this to install the cert-manager bundle.
Build your image
Run the following command to build your image locally.
You don’t need to push the image to a remote container registry if you are using a kind
cluster. You can directly load your local image to your specified kind cluster:
Deploy Webhooks
You need to enable the webhook and cert manager configuration through kustomize.
config/default/kustomization.yaml should now look like the following:
# Adds namespace to all resources.
namespace: project-system
resources:
- ../crd
- ../rbac
- ../manager
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
- ../webhook
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required.
- ../certmanager
# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
- ../prometheus

patchesStrategicMerge:
# Protect the /metrics endpoint by putting it behind auth.
# If you want your controller-manager to expose the /metrics
# endpoint w/o any authn/z, please comment the following line.
- manager_auth_proxy_patch.yaml

# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
- manager_webhook_patch.yaml
patches:
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix.
# patches here are for enabling the conversion webhook for each CRD
- patches/webhook_in_cronjobs.yaml
#+kubebuilder:scaffold:crdkustomizewebhookpatch
Wait a while till the webhook pod comes up and the certificates are provisioned. It usually
completes within 1 minute.
Now you can create a valid CronJob to test your webhooks. The creation should successfully
go through.
You can also try to create an invalid CronJob (e.g. use an ill-formatted schedule field). You
should see a creation failure with a validation error.
If you are deploying a webhook for pods in the same cluster, be careful about the
bootstrapping problem, since the creation request of the webhook pod would be sent to
the webhook pod itself, which hasn’t come up yet.
To make it work, you can either use namespaceSelector if your kubernetes version is 1.9+
or use objectSelector if your kubernetes version is 1.15+ to skip itself.
Writing controller tests
Testing Kubernetes controllers is a big subject, and the boilerplate testing files generated for
you by kubebuilder are fairly minimal.
The basic approach is that, in your generated suite_test.go file, you will use envtest to
create a local Kubernetes API server, instantiate and run your controllers, and then write
additional *_test.go files to test it using Ginkgo.
If you want to tinker with how your envtest cluster is configured, see section Configuring
envtest for integration tests as well as the envtest docs .
// Imports (hidden) ◀
var (
cfg *rest.Config
k8sClient client.Client // You'll be using this client in your tests.
testEnv *envtest.Environment
ctx context.Context
cancel context.CancelFunc
)
var _ = BeforeSuite(func() {
logf.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))
First, the envtest cluster is configured to read CRDs from the CRD directory Kubebuilder
scaffolds for you.
By("bootstrapping test environment")
testEnv = &envtest.Environment{
CRDDirectoryPaths: []string{filepath.Join("..", "..", "config",
"crd", "bases")},
ErrorIfCRDPathMissing: true,
}
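The suite then starts the environment and keeps the resulting REST config (a sketch matching the scaffolded suite_test.go):

var err error
// cfg is defined in this file globally.
cfg, err = testEnv.Start()
Expect(err).NotTo(HaveOccurred())
Expect(cfg).NotTo(BeNil())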
The autogenerated test code will add the CronJob Kind schema to the default client-go k8s
scheme. This ensures that the CronJob API/Kind will be used in our test controller.
err = batchv1.AddToScheme(scheme.Scheme)
Expect(err).NotTo(HaveOccurred())
After the schemas, you will see the following marker. This marker is what allows new schemas
to be added here automatically when a new API is added to the project.
//+kubebuilder:scaffold:scheme
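After the scheme registration, the scaffolded suite creates the "live" client used by the tests (a sketch):

k8sClient, err = client.New(cfg, client.Options{Scheme: scheme.Scheme})
Expect(err).NotTo(HaveOccurred())
Expect(k8sClient).NotTo(BeNil())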
One thing that this autogenerated file is missing, however, is a way to actually start your
controller. The code above will set up a client for interacting with your custom Kind, but will
not be able to test your controller behavior. If you want to test your custom controller logic,
you’ll need to add some familiar-looking manager logic to your BeforeSuite() function, so you
can register your custom controller to run on this test cluster.
You may notice that the code below runs your controller with nearly identical logic to your
CronJob project’s main.go! The only difference is that the manager is started in a separate
goroutine so it does not block the cleanup of envtest when you’re done running your tests.
Note that we set up both a “live” k8s client and a separate client from the manager. This is
because when making assertions in tests, you generally want to assert against the live state of
the API server. If you use the client from the manager ( k8sManager.GetClient ), you’d end up
asserting against the contents of the cache instead, which is slower and can introduce
flakiness into your tests. We could use the manager’s APIReader to accomplish the same
thing, but that would leave us with two clients in our test assertions and setup (one for
reading, one for writing), and it’d be easy to make mistakes.
Note that we keep the reconciler running against the manager's cache client, though -- we
want our controller to behave as it would in production, and we use features of the cache (like
indices) in our controller which aren't available when talking directly to the API server.
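First we create the manager the controller will run under (a sketch; the manager reuses the same scheme as the client above):

k8sManager, err := ctrl.NewManager(cfg, ctrl.Options{
	Scheme: scheme.Scheme,
})
Expect(err).ToNot(HaveOccurred())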
err = (&CronJobReconciler{
Client: k8sManager.GetClient(),
Scheme: k8sManager.GetScheme(),
}).SetupWithManager(k8sManager)
Expect(err).ToNot(HaveOccurred())
go func() {
defer GinkgoRecover()
err = k8sManager.Start(ctx)
Expect(err).ToNot(HaveOccurred(), "failed to run manager")
}()
})
Kubebuilder also generates boilerplate functions for cleaning up envtest and actually running
your test files in your controllers/ directory. You won’t need to touch these.
var _ = AfterSuite(func() {
cancel()
By("tearing down the test environment")
err := testEnv.Stop()
Expect(err).NotTo(HaveOccurred())
})
Now that you have your controller running on a test cluster and a client ready to perform
operations on your CronJob, we can start writing integration tests!
Ideally, we should have one <kind>_controller_test.go for each controller scaffolded and
called in the suite_test.go. So, let's write our example test for the CronJob controller
(cronjob_controller_test.go).
// Imports (hidden) ◀
The first step to writing a simple integration test is to actually create an instance of CronJob
you can run tests against. Note that to create a CronJob, you’ll need to create a stub CronJob
struct that contains your CronJob’s specifications.
Note that when we create a stub CronJob, the CronJob also needs stubs of its required
downstream objects. Without the stubbed Job template spec and the Pod template spec
below, the Kubernetes API will not be able to create the CronJob.
var _ = Describe("CronJob controller", func() {
timeout = time.Second * 10
duration = time.Second * 10
interval = time.Millisecond * 250
)
After creating this CronJob, let’s check that the CronJob’s Spec fields match what we passed in.
Note that, because the k8s apiserver may not have finished creating a CronJob after our
Create() call from earlier, we will use Gomega’s Eventually() testing function instead of
Expect() to give the apiserver an opportunity to finish creating our CronJob.
Eventually() will repeatedly run the function provided as an argument, once every interval,
until either (a) the function's output matches what's expected in the subsequent Should()
call, or (b) the number of attempts times the interval period exceeds the provided timeout value.
In the examples below, timeout and interval are Go Duration values of our choosing.
// We'll need to retry getting this newly created CronJob, given that
creation may not immediately happen.
Eventually(func() bool {
err := k8sClient.Get(ctx, cronjobLookupKey, createdCronjob)
if err != nil {
return false
}
return true
}, timeout, interval).Should(BeTrue())
// Let's make sure our Schedule string value was properly
converted/handled.
Expect(createdCronjob.Spec.Schedule).Should(Equal("1 * * * *"))
Now that we’ve created a CronJob in our test cluster, the next step is to write a test that
actually tests our CronJob controller’s behavior. Let’s test the CronJob controller’s logic
responsible for updating CronJob.Status.Active with actively running jobs. We’ll verify that
when a CronJob has a single active downstream Job, its CronJob.Status.Active field contains a
reference to this Job.
First, we should get the test CronJob we created earlier, and verify that it currently does not
have any active jobs. We use Gomega’s Consistently() check here to ensure that the active
job count remains 0 over a duration of time.
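A sketch of that check, reusing the lookup key and stub object from the Eventually() example above:

By("By checking the CronJob has zero active Jobs")
Consistently(func() (int, error) {
	err := k8sClient.Get(ctx, cronjobLookupKey, createdCronjob)
	if err != nil {
		return -1, err
	}
	return len(createdCronjob.Status.Active), nil
}, duration, interval).Should(Equal(0))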
Next, we actually create a stubbed Job that will belong to our CronJob, as well as its
downstream template specs. We set the Job’s status’s “Active” count to 2 to simulate the Job
running two pods, which means the Job is actively running.
We then take the stubbed Job and set its owner reference to point to our test CronJob. This
ensures that the test Job belongs to, and is tracked by, our test CronJob. Once that’s done, we
create our new Job instance.
Adding this Job to our test CronJob should trigger our controller’s reconciler logic. After that,
we can write a test that evaluates whether our controller eventually updates our CronJob’s
Status field as expected!
By("By checking that the CronJob has one active Job")
Eventually(func() ([]string, error) {
err := k8sClient.Get(ctx, cronjobLookupKey, createdCronjob)
if err != nil {
return nil, err
}
names := []string{}
for _, job := range createdCronjob.Status.Active {
names = append(names, job.Name)
}
return names, nil
}, timeout, interval).Should(ConsistOf(JobName), "should list our
active job %s in the active jobs list in status", JobName)
})
})
})
After writing all this code, you can run go test ./... in your controllers/ directory again
to run your new test!
This Status update example above demonstrates a general testing strategy for a custom Kind
with downstream objects. By this point, you hopefully have learned the following methods for
testing your controller behavior:
Advanced Examples
There are more involved examples of using envtest to rigorously test controller behavior.
Examples include:
Azure Databricks Operator: see their fully fleshed-out suite_test.go as well as any
*_test.go file in that directory like this one.
Epilogue
By this point, we’ve got a pretty full-featured implementation of the CronJob controller, made
use of most of the features of Kubebuilder, and written tests for the controller using envtest.
If you want more, head over to the Multi-Version Tutorial to learn how to add new API
versions to a project.
Additionally, you can try the following steps on your own -- we’ll have a tutorial section on
them Soon™:
adding additional printer columns to kubectl get
Let’s make some changes to the CronJob API spec and make sure all the different versions
are supported by our CronJob project.
If you haven’t already, make sure you’ve gone through the base CronJob Tutorial.
Note that most of this tutorial is generated from literate Go files that form a runnable
project, and live in the book source directory: docs/book/src/multiversion-
tutorial/testdata/project.
CRD conversion support was introduced as an alpha feature in Kubernetes 1.13 (which
means it’s not on by default, and needs to be enabled via a feature gate), and became
beta in Kubernetes 1.15 (which means it’s on by default).
If you're on Kubernetes 1.13-1.14, make sure to enable the feature gate. If you're on
Kubernetes 1.12 or below, you'll need a new cluster to use conversion. Check out the kind
instructions for how to set up an all-in-one cluster.
Changing things up
A fairly common change in a Kubernetes API is to take some data that used to be
unstructured or stored in some special string format, and change it to structured data. Our
schedule field fits the bill quite nicely for this -- right now, in v1, our schedules look like

schedule: "*/1 * * * *"

That's a pretty textbook example of a special string format (it's also pretty unreadable unless
you're a Unix sysadmin).
Let's make it a bit more structured. According to our CronJob code, we support "standard"
Cron format.
In Kubernetes, all versions must be safely round-trippable through each other. This means
that if we convert from version 1 to version 2, and then back to version 1, we must not lose
information. Thus, any change we make to our API must be compatible with whatever we
supported in v1, and we also need to make sure anything we add in v2 is supported in v1. In
some cases, this means we need to add new fields to v1, but in our case, we won't have to,
since we're not adding new functionality.
Keeping all that in mind, let’s convert our example above to be slightly more structured:
schedule:
minute: */1
Now, at least, we’ve got labels for each of our fields, but we can still easily support all the
different syntax for each field.
We’ll need a new API version for this change. Let’s call it v2:
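The scaffolding step for the new version looks like this (answer y to create the resource and n to the controller, since we'll keep using the v1 controller):

kubebuilder create api --group batch --version v2 --kind CronJob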
Now, let’s copy over our existing types, and make the change:
$ vim project/api/v2/cronjob_types.go
Since we’re in a v2 package, controller-gen will assume this is for the v2 version automatically.
We could override that with the +versionName marker.
package v2
// Imports (hidden) ◀
We’ll leave our spec largely unchanged, except to change the schedule field to a new type.
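In the v2 spec, the only change is the field's type (a sketch; CronSchedule is defined next):

// The schedule in Cron format, now expressed as structured fields.
Schedule CronSchedule `json:"schedule"`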
Next, we’ll need to define a type to hold our schedule. Based on our proposed YAML above,
it’ll have a field for each corresponding Cron “field”.
// describes a Cron schedule.
type CronSchedule struct {
// specifies the minute during which the job executes.
// +optional
Minute *CronField `json:"minute,omitempty"`
// specifies the hour during which the job executes.
// +optional
Hour *CronField `json:"hour,omitempty"`
// specifies the day of the month during which the job executes.
// +optional
DayOfMonth *CronField `json:"dayOfMonth,omitempty"`
// specifies the month during which the job executes.
// +optional
Month *CronField `json:"month,omitempty"`
// specifies the day of the week during which the job executes.
// +optional
DayOfWeek *CronField `json:"dayOfWeek,omitempty"`
}
Finally, we’ll define a wrapper type to represent a field. We could attach additional validation
to this field, but for now we’ll just use it for documentation purposes.
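In the tutorial that wrapper is just a string type:

// represents a Cron field specifier.
type CronField string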
Storage Versions
$ vim project/api/v1/cronjob_types.go
package v1
// Imports (hidden) ◀
Since we'll have more than one version, we'll need to mark a storage version. This is the
version that the Kubernetes API server uses to store our data. We'll choose the v1 version for
our project.
Note that multiple versions may exist in storage if they were written before the storage
version changes -- changing the storage version only affects how objects are created/updated
after the change.
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
//+kubebuilder:storageversion
Now that we’ve got our types in place, we’ll need to set up conversion...
Before we do that, though, we’ll need to understand how controller-runtime thinks about
versions. Namely:
This works fine when we just have two versions, but what if we had 4 types? 8 types? That’d be
a lot of conversion functions.
Then, if we have to convert between two non-hub versions, we first convert to the hub
version, and then to our desired version:
This cuts down on the number of conversion functions that we have to define, and is modeled
off of what Kubernetes does internally.
In that case, the API server needs to know how to convert between the desired version and
the stored version. Since the conversions aren’t built in for CRDs, the Kubernetes API server
calls out to a webhook to do the conversion instead. For Kubebuilder, this webhook is
implemented by controller-runtime, and performs the hub-and-spoke conversions that we
discussed above.
Now that we have the model for conversion down pat, we can actually implement our
conversions.
Implementing conversion
With our model for conversion in place, it’s time to actually implement the conversion
functions. We’ll put them in a file called cronjob_conversion.go next to our
cronjob_types.go file, to avoid cluttering up our main types file with extra functions.
Hub...
First, we’ll implement the hub. We’ll choose the v1 version as the hub:
$ vim project/api/v1/cronjob_conversion.go
package v1
Implementing the hub method is pretty easy -- we just have to add an empty method called
Hub() to serve as a marker. We could also just put this inline in our cronjob_types.go file.
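The whole implementation is one line:

// Hub marks this type as a conversion hub.
func (*CronJob) Hub() {}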
$ vim project/api/v2/cronjob_conversion.go
package v2
// Imports (hidden) ◀
Our “spoke” versions need to implement the Convertible interface. Namely, they’ll need
ConvertTo and ConvertFrom methods to convert to/from the hub version.
ConvertTo is expected to modify its argument to contain the converted object. Most of the
conversion is straightforward copying, except for converting our changed field.
// ConvertTo converts this CronJob to the Hub version (v1).
func (src *CronJob) ConvertTo(dstRaw conversion.Hub) error {
dst := dstRaw.(*v1.CronJob)
sched := src.Spec.Schedule
scheduleParts := []string{"*", "*", "*", "*", "*"}
if sched.Minute != nil {
scheduleParts[0] = string(*sched.Minute)
}
if sched.Hour != nil {
scheduleParts[1] = string(*sched.Hour)
}
if sched.DayOfMonth != nil {
scheduleParts[2] = string(*sched.DayOfMonth)
}
if sched.Month != nil {
scheduleParts[3] = string(*sched.Month)
}
if sched.DayOfWeek != nil {
scheduleParts[4] = string(*sched.DayOfWeek)
}
dst.Spec.Schedule = strings.Join(scheduleParts, " ")
return nil
}
ConvertFrom is expected to modify its receiver to contain the converted object. Most of the
conversion is straightforward copying, except for converting our changed field.
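One way to write the reverse mapping, mirroring ConvertTo above (the partIfNeeded helper and the 5-field split are our own sketch, and the rote copying of the remaining fields is reduced to the ObjectMeta line):

// ConvertFrom converts from the Hub version (v1) to this version.
func (dst *CronJob) ConvertFrom(srcRaw conversion.Hub) error {
	src := srcRaw.(*v1.CronJob)

	schedParts := strings.Split(src.Spec.Schedule, " ")
	if len(schedParts) != 5 {
		return fmt.Errorf("invalid schedule: not a standard 5-field cron schedule")
	}

	// treat "*" as "unset"; otherwise keep the raw field value
	partIfNeeded := func(raw string) *CronField {
		if raw == "*" {
			return nil
		}
		part := CronField(raw)
		return &part
	}

	dst.Spec.Schedule.Minute = partIfNeeded(schedParts[0])
	dst.Spec.Schedule.Hour = partIfNeeded(schedParts[1])
	dst.Spec.Schedule.DayOfMonth = partIfNeeded(schedParts[2])
	dst.Spec.Schedule.Month = partIfNeeded(schedParts[3])
	dst.Spec.Schedule.DayOfWeek = partIfNeeded(schedParts[4])

	// the remaining metadata and fields copy over directly
	dst.ObjectMeta = src.ObjectMeta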
return nil
}
Now that we’ve got our conversions in place, all that we need to do is wire up our main to
serve the webhook!
Normally, we'd use the kubebuilder create webhook command with the --conversion flag
to scaffold out the webhook setup. However, we've already got webhook setup from when we
built our defaulting and validating webhooks!
Webhook setup...
$ vim project/api/v1/cronjob_webhook.go
// Go imports (hidden) ◀
This setup doubles as setup for our conversion webhooks: as long as our types implement the
Hub and Convertible interfaces, a conversion webhook will be registered.
...and main.go
Similarly, our existing main file is sufficient:
$ vim project/cmd/main.go
// Imports (hidden) ◀
func main() {
// existing setup (hidden) ◀
Our existing call to SetupWebhookWithManager registers our conversion webhooks with the
manager, too.
if os.Getenv("ENABLE_WEBHOOKS") != "false" {
	if err = (&batchv1.CronJob{}).SetupWebhookWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create webhook", "webhook", "CronJob")
		os.Exit(1)
	}
	if err = (&batchv2.CronJob{}).SetupWebhookWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create webhook", "webhook", "CronJob")
		os.Exit(1)
	}
}
//+kubebuilder:scaffold:builder
Everything’s set up and ready to go! All that’s left now is to test out our webhooks.
Kubebuilder generates the Kubernetes manifests under the config directory with the webhook bits disabled. To enable them, we need to uncomment the webhook and cert-manager related sections in config/default/kustomization.yaml and the conversion/CA-injection patches in config/crd/kustomization.yaml.
Additionally, if present in our Makefile, we’ll need to set the CRD_OPTIONS variable to just
"crd" , removing the trivialVersions option (this ensures that we actually generate
validation for each version, instead of telling Kubernetes that they’re the same):
CRD_OPTIONS ?= "crd"
Now we have all our code changes and manifests in place, so let’s deploy it to the cluster and
test it out.
You’ll need cert-manager installed (version 0.9.0+ ) unless you’ve got some other certificate
management solution. The Kubebuilder team has tested the instructions in this tutorial with
0.9.0-alpha.0 release.
Once all our ducks are in a row with certificates, we can run make install deploy (as normal)
to deploy all the bits (CRD, controller-manager deployment) onto the cluster.
Testing
Once all of the bits are up and running on the cluster with conversion enabled, we can test out
our conversion by requesting different versions.
apiVersion: batch.tutorial.kubebuilder.io/v2
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/name: cronjob
    app.kubernetes.io/instance: cronjob-sample
    app.kubernetes.io/part-of: project
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: project
  name: cronjob-sample
spec:
  schedule:
    minute: "*/1"
  startingDeadlineSeconds: 60
  concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
If we’ve done everything correctly, it should create successfully, and we should be able to fetch it using both the v2 and the v1 forms of the resource.
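For example (the resource names below follow kubectl's resource.version.group form):

kubectl get cronjobs.v2.batch.tutorial.kubebuilder.io -o yaml
kubectl get cronjobs.v1.batch.tutorial.kubebuilder.io -o yaml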
Both should be filled out, and look equivalent to our v2 and v1 samples, respectively. Notice
that each has a different API version.
Finally, if we wait a bit, we should notice that our CronJob continues to reconcile, even though
our controller is written against our v1 API version.
When we access our API types from Go code, we ask for a specific version by using that
version’s Go type (e.g. batchv2.CronJob ).
You might’ve noticed that the above invocations of kubectl looked a little different from
what we usually do -- namely, they specify a group-version-resource, instead of just a
resource.
When we write kubectl get cronjob , kubectl needs to figure out which group-version-
resource that maps to. To do this, it uses the discovery API to figure out the preferred
version of the cronjob resource. For CRDs, this is more-or-less the latest stable version
(see the CRD docs for specific details).
With our updates to CronJob, this means that kubectl get cronjob fetches the
batch/v2 group-version.
Troubleshooting
steps for troubleshooting
Tutorial: ComponentConfig
! Component Config is deprecated
The ComponentConfig feature has been deprecated in Controller-Runtime since version 0.15.0 (more info). Moreover, it has undergone breaking changes and no longer functions as intended. As a result, Kubebuilder, which relies heavily on Controller-Runtime, has also deprecated this feature and no longer guarantees its functionality from version 3.11.0 onwards. You can find additional details on this issue here.
Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.
Nearly every project that is built for Kubernetes will eventually need to support passing in
additional configurations into the controller. These could be to enable better logging, turn
on/off specific feature gates, set the sync period, or a myriad of other controls. Previously this was commonly done using CLI flags that your main.go would parse to make them accessible within your program. While this works, it’s not a forward-looking design, and the Kubernetes community has been migrating the core components away from this and toward using versioned config files, referred to as “component configs”.
The rest of this tutorial will show you how to configure your Kubebuilder project with the component config type, and then moves on to implementing a custom type so that you can extend this capability.
Note that most of this tutorial is generated from literate Go files that form a runnable
project, and live in the book source directory: docs/book/src/component-config-
tutorial/testdata/project.
Resources
Versioned Component Configuration File Design
Changing things up
! Component Config is deprecated
The ComponentConfig feature has been deprecated in Controller-Runtime since version 0.15.0 (more info). Moreover, it has undergone breaking changes and no longer functions as intended. As a result, Kubebuilder, which relies heavily on Controller-Runtime, has also deprecated this feature and no longer guarantees its functionality from version 3.11.0 onwards. You can find additional details on this issue here.
Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.
This tutorial will show you how to create a custom configuration file for your project by
modifying a project generated with the --component-config flag passed to the init
command. The full tutorial’s source can be found here. Make sure you’ve gone through the
installation steps before continuing.
New project:
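For example, assuming the tutorial's domain, the project would be initialized with the ComponentConfig flag:

kubebuilder init --domain tutorial.kubebuilder.io --component-config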
First, add a new flag to specify the path that the component config file should be loaded
from.
var configFile string
flag.StringVar(&configFile, "config", "",
"The controller will load its initial configuration from this file. "+
"Omit this flag to use the default configuration values. "+
"Command-line flags override configuration from this file.")
Now, we can set up the Options struct and check whether configFile is set; this preserves backwards compatibility. If it is set, we’ll use the AndFrom function on Options to parse the config file and populate the Options from it.
Lastly, we’ll change the NewManager call to use the options variable we defined above.
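A minimal sketch of that wiring (using controller-runtime's now-deprecated ctrl.ConfigFile helper):

var err error
options := ctrl.Options{Scheme: scheme}
if configFile != "" {
	// Load manager options from the ComponentConfig file, keeping the defaults otherwise.
	options, err = options.AndFrom(ctrl.ConfigFile().AtPath(configFile))
	if err != nil {
		setupLog.Error(err, "unable to load the config file")
		os.Exit(1)
	}
}

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), options)
if err != nil {
	setupLog.Error(err, "unable to start manager")
	os.Exit(1)
}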
With that out of the way, we can get on to defining our new config!
generatorOptions:
  disableNameSuffixHash: true

configMapGenerator:
- name: manager-config
  files:
  - controller_manager_config.yaml

patchesStrategicMerge:
# Mount the controller config file for loading manager configurations
# through a ComponentConfig type
- manager_config_patch.yaml
Update the file default/manager_config_patch.yaml by adding under the spec: key the
following patch:
spec:
  template:
    spec:
      containers:
      - name: manager
        args:
        - "--config=controller_manager_config.yaml"
        volumeMounts:
        - name: manager-config
          mountPath: /controller_manager_config.yaml
          subPath: controller_manager_config.yaml
      volumes:
      - name: manager-config
        configMap:
          name: manager-config
The ComponentConfig feature has been deprecated in Controller-Runtime since version 0.15.0 (more info). Moreover, it has undergone breaking changes and no longer functions as intended. As a result, Kubebuilder, which relies heavily on Controller-Runtime, has also deprecated this feature and no longer guarantees its functionality from version 3.11.0 onwards. You can find additional details on this issue here.
Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.
Now that you have a component-config-based project, we need to customize the values that are passed into the controller. To do this, we can take a look at config/manager/controller_manager_config.yaml .
$ vim controller_manager_config.yaml
apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
kind: ControllerManagerConfig
metrics:
  bindAddress: 127.0.0.1:8080
webhook:
  port: 9443
leaderElection:
  leaderElect: true
  resourceName: 80807133.tutorial.kubebuilder.io
To see all the available fields, you can look at the v1alpha1 Controller Runtime config ControllerManagerConfiguration .
Using a Custom Type
! Component Config is deprecated
The ComponentConfig feature has been deprecated in Controller-Runtime since version 0.15.0 (more info). Moreover, it has undergone breaking changes and no longer functions as intended. As a result, Kubebuilder, which relies heavily on Controller-Runtime, has also deprecated this feature and no longer guarantees its functionality from version 3.11.0 onwards. You can find additional details on this issue here.
Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.
If you don’t need to add custom fields to configure your project, you can stop now and move on. If you’d like to be able to pass additional information, keep reading.
If your project needs to accept additional, non-controller-runtime-specific configuration, e.g. ClusterName , Region , or anything else serializable into yaml, you can do this by using Kubebuilder to create a new type and then updating your main.go to set up the new type for parsing.
The rest of this tutorial will walk through implementing a custom component config type.
The ComponentConfig feature has been deprecated in Controller-Runtime since version 0.15.0 (more info). Moreover, it has undergone breaking changes and no longer functions as intended. As a result, Kubebuilder, which relies heavily on Controller-Runtime, has also deprecated this feature and no longer guarantees its functionality from version 3.11.0 onwards. You can find additional details on this issue here.
Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.
To scaffold out a new config Kind, we can use kubebuilder create api .
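The command looks roughly like this (mirroring the tutorial's config group and version):

kubebuilder create api --group config --version v2 --kind ProjectConfig --resource --controller=false --make=false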
Then, run make build to implement the interface for your API type, which would generate the
file zz_generated.deepcopy.go .
Use --controller=false
You may recognize this command from the CronJob tutorial, although here we explicitly set --controller=false because ProjectConfig is not intended to be an API extension and cannot be reconciled.
This will create a new type file in api/config/v2/ for the ProjectConfig kind. We’ll need to
change this file to embed the v1alpha1.ControllerManagerConfigurationSpec
$ vim projectconfig_types.go
We start out simply enough: we import the config/v1alpha1 API group, which is exposed
through ControllerRuntime.
package v2
import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
cfg "sigs.k8s.io/controller-runtime/pkg/config/v1alpha1"
)
// +kubebuilder:object:root=true
Next, we’ll remove the default ProjectConfigSpec and ProjectConfigList , then we’ll embed cfg.ControllerManagerConfigurationSpec in ProjectConfig , as sketched below.
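The resulting type looks roughly like the following sketch ( ClusterName is the example custom field used later in this tutorial):

// +kubebuilder:object:root=true

// ProjectConfig is the Schema for the projectconfigs API
type ProjectConfig struct {
	metav1.TypeMeta `json:",inline"`

	// ControllerManagerConfigurationSpec returns the base configurations for controllers
	cfg.ControllerManagerConfigurationSpec `json:",inline"`

	// ClusterName is an example of a custom, project-specific field
	ClusterName string `json:"clusterName,omitempty"`
}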
If you haven’t, you’ll also need to remove the ProjectConfigList from the
SchemeBuilder.Register .
func init() {
SchemeBuilder.Register(&ProjectConfig{})
}
Lastly, we’ll change the main.go to reference this type for parsing the file.
Updating main
! Component Config is deprecated
The ComponentConfig feature has been deprecated in Controller-Runtime since version 0.15.0 (more info). Moreover, it has undergone breaking changes and no longer functions as intended. As a result, Kubebuilder, which relies heavily on Controller-Runtime, has also deprecated this feature and no longer guarantees its functionality from version 3.11.0 onwards. You can find additional details on this issue here.
Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.
Once you have defined your new custom component config type, we need to make sure the new config type has been imported and the types are registered with the scheme. If you used kubebuilder create api , this should have been automated.
import (
// ... other imports
configv2 "tutorial.kubebuilder.io/project/apis/config/v2"
// +kubebuilder:scaffold:imports
)
With the package imported we can confirm the types have been added.
func init() {
// ... other scheme registrations
utilruntime.Must(configv2.AddToScheme(scheme))
// +kubebuilder:scaffold:scheme
}
Lastly, we need to change the options parsing in main.go to use this new type. To do this we’ll
chain OfKind onto ctrl.ConfigFile() and pass in a pointer to the config kind.
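A sketch of the updated parsing (building on the configFile flag added earlier):

var err error
ctrlConfig := configv2.ProjectConfig{}
options := ctrl.Options{Scheme: scheme}
if configFile != "" {
	// OfKind tells controller-runtime to decode the file into our custom ProjectConfig type.
	options, err = options.AndFrom(ctrl.ConfigFile().AtPath(configFile).OfKind(&ctrlConfig))
	if err != nil {
		setupLog.Error(err, "unable to load the config file")
		os.Exit(1)
	}
}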
Now, if you need to use the .clusterName field we defined in our custom kind, you can call ctrlConfig.ClusterName , which will be populated from the supplied config file.
Defining your Custom Config
! Component Config is deprecated
The ComponentConfig feature has been deprecated in Controller-Runtime since version 0.15.0 (more info). Moreover, it has undergone breaking changes and no longer functions as intended. As a result, Kubebuilder, which relies heavily on Controller-Runtime, has also deprecated this feature and no longer guarantees its functionality from version 3.11.0 onwards. You can find additional details on this issue here.
Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.
$ vim project/config/manager/controller_manager_config.yaml
apiVersion: config.tutorial.kubebuilder.io/v2
kind: ProjectConfig
metadata:
  labels:
    app.kubernetes.io/name: controllermanagerconfig
    app.kubernetes.io/instance: controller-manager-configuration
    app.kubernetes.io/component: manager
    app.kubernetes.io/created-by: project
    app.kubernetes.io/part-of: project
    app.kubernetes.io/managed-by: kustomize
health:
  healthProbeBindAddress: :8081
metrics:
  bindAddress: 127.0.0.1:8080
webhook:
  port: 9443
leaderElection:
  leaderElect: true
  resourceName: 80807133.tutorial.kubebuilder.io
clusterName: example-test
This config uses the new ProjectConfig kind under the GVK config.tutorial.kubebuilder.io/v2 . With these custom configs, we can add any yaml-serializable fields that your controller needs and begin to reduce the reliance on flags to configure your project.
Migrations
Migrating between project structures in Kubebuilder generally involves a bit of manual work.
This section details what’s required to migrate, between different versions of Kubebuilder
scaffolding, as well as to more complex project layout structures.
Migration guides from Legacy versions < 3.0.0
Follow the migration guides from the legacy Kubebuilder versions up to the required latest v3.x version. Note that from v3, a new ecosystem using plugins is introduced for better maintainability, reusability and user experience.
Common changes
V2 projects use Go modules, but Kubebuilder will continue to support dep until Go 1.13 is out.
controller-runtime
Client.List now uses functional options ( List(ctx, list, ...option) ) instead of
List(ctx, ListOptions, list) .
A number of packages under pkg/runtime have been moved, with their old locations
deprecated. The old locations will be removed before controller-runtime v1.0.0. See the
godocs for more information.
Webhook-related
Automatic certificate generation for webhooks has been removed, and webhooks will no
longer self-register. Use controller-tools to generate a webhook configuration. If you
need certificate generation, we recommend using cert-manager. Kubebuilder v2 will
scaffold out cert manager configs for you to use -- see the Webhook Tutorial for more
details.
The builder package now has separate builders for controllers and webhooks, which
facilitates choosing which to run.
controller-tools
The generator framework has been rewritten in v2. It still works the same as before in many
cases, but be aware that there are some breaking changes. Please check marker
documentation for more details.
Kubebuilder
Kubebuilder v2 introduces a simplified project layout. You can find the design doc here.
v2 uses distroless/static instead of Ubuntu as base image. This reduces image size
and attack surface.
Migration from v1 to v2
Make sure you understand the differences between Kubebuilder v1 and v2 before continuing
Please ensure you have followed the installation guide to install the required components.
The recommended way to migrate a v1 project is to create a new v2 project and copy over the
API and the reconciliation code. The conversion will end up with a project that looks like a
native v2 project. However, in some cases, it’s possible to do an in-place upgrade (i.e. reuse the v1 project layout, upgrading controller-runtime and controller-tools).
Let’s take a v1 project as an example and migrate it to Kubebuilder v2. At the end, we should have something that looks like the example v2 project.
Preparation
We’ll need to figure out what the group, version, kind and domain are.
pkg/
├── apis
│ ├── addtoscheme_batch_v1.go
│ ├── apis.go
│ └── batch
│ ├── group.go
│ └── v1
│ ├── cronjob_types.go
│ ├── cronjob_types_test.go
│ ├── doc.go
│ ├── register.go
│ ├── v1_suite_test.go
│ └── zz_generated.deepcopy.go
├── controller
└── webhook
All of our API information is stored in pkg/apis/batch , so we can look there to find what we
need to know.
Initialize a v2 Project
Now, we need to initialize a v2 project. Before we do that, though, we’ll need to initialize a new
go module if we’re not on the gopath :
go mod init tutorial.kubebuilder.io/project
If you’re using multiple groups, some manual work is required to migrate. Please follow this
for more details.
We don’t need the following markers any more (they’re not used anymore, and are relics from
much older versions of Kubebuilder):
// +genclient
// +k8s:openapi-gen=true
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// CronJob is the Schema for the cronjobs API
type CronJob struct {...}
// +kubebuilder:object:root=true
If you are using webhooks for Kubernetes core types (e.g. Pods), or for an external CRD that is
not owned by you, you can refer the controller-runtime example for builtin types and do
something similar. Kubebuilder doesn’t scaffold much for these cases, but you can use the
library in controller-runtime.
Now let’s scaffold the webhooks for our CRD (CronJob). We’ll need to run the following
command with the --defaulting and --programmatic-validation flags (since our test
project uses defaulting and validating webhooks):
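The command is roughly:

kubebuilder create webhook --group batch --version v1 --kind CronJob --defaulting --programmatic-validation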
Depending on how many CRDs need webhooks, we may need to run the above command
multiple times with different Group-Version-Kinds.
Now, we’ll need to copy the logic for each webhook. For validating webhooks, we can copy the
contents from func validatingCronJobFn in
pkg/default_server/cronjob/validating/cronjob_create_handler.go to func
ValidateCreate in api/v1/cronjob_webhook.go and then the same for update .
...
The default verbs are verbs=create;update . We need to ensure verbs matches what we
need. For example, if we only want to validate creation, then we would change it to
verbs=create .
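For illustration, a validating webhook marker with explicit verbs might look roughly like this (the path and webhook name are hypothetical):

// +kubebuilder:webhook:path=/validate-batch-tutorial-kubebuilder-io-v1-cronjob,mutating=false,failurePolicy=fail,groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=create;update,versions=v1,name=vcronjob.kb.io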
Markers like the following are no longer needed (since they deal with self-deploying certificate
configuration, which was removed in v2):
// v1 markers
// +kubebuilder:webhook:port=9876,cert-dir=/tmp/cert
// +kubebuilder:webhook:service=test-system:webhook-service,selector=app:webhook-server
// +kubebuilder:webhook:secret=test-system:webhook-server-secret
// +kubebuilder:webhook:mutating-webhook-config-name=test-mutating-webhook-cfg
// +kubebuilder:webhook:validating-webhook-config-name=test-validating-webhook-cfg
In v1, a single webhook marker may be split into multiple ones in the same paragraph. In v2,
each webhook must be represented by a single marker.
Others
If there are any manual updates in main.go in v1, we need to port the changes to the new
main.go . We’ll also need to ensure all of the needed schemes have been registered.
If there are additional manifests added under config directory, port them as well.
Common changes
v3 projects use Go modules and require Go 1.18+. Dep is no longer supported for dependency management.
Kubebuilder
Preliminary support for plugins was added. For more info see the Extensible CLI and
Scaffolding Plugins: phase 1, the Extensible CLI and Scaffolding Plugins: phase 1.5 and
the Extensible CLI and Scaffolding Plugins - Phase 2 design docs. Also, you can check the
Plugins section.
The PROJECT file now has a new layout. It stores more information about what
resources are in use, to better enable plugins to make useful decisions when scaffolding.
Furthermore, the PROJECT file itself is now versioned: the version field corresponds to
the version of the PROJECT file itself, while the layout field indicates the scaffolding &
primary plugin version in use.
Code changes:
Misc:
Support for controller-tools v0.9.0 (for go/v2 it is v0.3.0 and previously it was
v0.2.5 )
Support for controller-runtime v0.12.1 (for go/v2 it is v0.6.4 and previously it
was v0.5.0 )
Support for kustomize v3.8.7 (for go/v2 it is v3.5.4 and previously it was
v3.1.0 )
Required Envtest binaries are automatically downloaded
The minimum Go version is now 1.18 (previously it was 1.13 ).
! Project customizations
After using the CLI to create your project, you are free to customise it how you see fit. Bear in mind that it is not recommended to deviate from the proposed layout unless you know what you are doing.
For example, you should refrain from moving the scaffolded files; doing so will make it difficult to upgrade your project in the future. You may also lose the ability to use some of the CLI features and helpers. For further information on the project layout, see the doc What’s in a basic project?
Migrating to Kubebuilder v3
If you want to upgrade your scaffolding to use the latest and greatest features, follow this guide, which covers the steps in the most straightforward way to upgrade your project and get all the latest changes and improvements.
The current scaffold done by the CLI ( go/v3 ) uses kubernetes-sigs/kustomize v3 which
does not provide a valid binary for Apple Silicon ( darwin/arm64 ). Therefore, you can use
the go/v4 plugin instead which provides support for this platform:
If you want to use the latest version of the Kubebuilder CLI without changing your scaffolding, check the following guide, which describes the manual steps required to upgrade only your PROJECT version and start using the plugin versions.
This way is more complex, susceptible to errors, and success cannot be assured. Also, by
following these steps you will not get the improvements and bug fixes in the default
generated project files.
You will see that you can keep using the previous layout by using the go/v2 plugin, which will not upgrade controller-runtime and controller-tools to the latest versions used with go/v3 because of their breaking changes. By checking this guide you will also learn how to manually change the files to use the go/v3 plugin and its dependency versions.
Migration from v2 to v3
Make sure you understand the differences between Kubebuilder v2 and v3 before continuing.
Please ensure you have followed the installation guide to install the required components.
The recommended way to migrate a v2 project is to create a new v3 project and copy over the
API and the reconciliation code. The conversion will end up with a project that looks like a
native v3 project. However, in some cases, it’s possible to do an in-place upgrade (i.e. reuse
the v2 project layout, upgrading controller-runtime and controller-tools).
Initialize a v3 Project
Project name
For the rest of this document, we are going to use migration-project as the project
name and tutorial.kubebuilder.io as the domain. Please, select and use appropriate
values for your case.
Create a new directory with the name of your project. Note that this name is used in the
scaffolds to create the name of your manager Pod and of the Namespace where the Manager
is deployed by default.
$ mkdir migration-project-name
$ cd migration-project-name
Now, we need to initialize a v3 project. Before we do that, though, we’ll need to initialize a new
go module if we’re not on the GOPATH . While technically this is not needed inside GOPATH , it is
still recommended.
The module of your project can be found in the `go.mod` file at the root of your project:
module tutorial.kubebuilder.io/migration-project
...
domain: tutorial.kubebuilder.io
...
For this example, we are going to consider that we need to scaffold both the API types
and the controllers, but remember that this depends on how you scaffolded them in your
original project.
From now on, the CRDs that will be created by controller-gen will be using the Kubernetes
API version apiextensions.k8s.io/v1 by default, instead of
apiextensions.k8s.io/v1beta1 .
So, if you would like to keep using the previous version, use the flag --crd-version=v1beta1 in the above command, which is only needed if you want your operator to support Kubernetes 1.15 and earlier. However, it is no longer recommended.
Now, let’s copy the API definition from api/v1/<kind>_types.go in our old project to the new
one.
These files have not been modified by the new plugin, so you should be able to replace your
freshly scaffolded files by your old one. There may be some cosmetic changes. So you can
choose to only copy the types themselves.
Now, let’s migrate the controller code from controllers/cronjob_controller.go in our old
project to the new one. There is a breaking change and there may be some cosmetic changes.
The new Reconcile method receives the context as an argument now, instead of having to create it with context.Background() . You can copy the rest of the code from your old controller into the scaffolded methods, updating the Reconcile signature as sketched below.
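A sketch of the change ( CronJobReconciler is the tutorial's reconciler name):

// Old (v2 scaffold): the context is created inside Reconcile.
func (r *CronJobReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	ctx := context.Background()
	// ...
}

// New (v3 scaffold): the context is passed in as an argument.
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// ...
}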
Skip
If you don’t have any webhooks, you can skip this section.
Now let’s scaffold the webhooks for our CRD (CronJob). We’ll need to run the following
command with the --defaulting and --programmatic-validation flags (since our test
project uses defaulting and validating webhooks):
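As in the v1-to-v2 migration, the command is roughly:

kubebuilder create webhook --group batch --version v1 --kind CronJob --defaulting --programmatic-validation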
From now on, the Webhooks created by Kubebuilder will use the Kubernetes API version admissionregistration.k8s.io/v1 by default instead of admissionregistration.k8s.io/v1beta1 , and cert-manager.io/v1 replaces cert-manager.io/v1alpha2 .
So, if you would like to keep using the previous version, use the flag --webhook-version=v1beta1 in the above command, which is only needed if you want your operator to support Kubernetes 1.15 and earlier.
Now, let’s copy the webhook definition from api/v1/<kind>_webhook.go from our old project
to the new one.
Others
If there are any manual updates in main.go in v2, we need to port the changes to the new
main.go . We’ll also need to ensure all of the needed schemes have been registered.
If there are additional manifests added under config directory, port them as well.
Verification
Finally, we can run make and make docker-build to ensure things are working fine.
Please ensure you have followed the installation guide to install the required components.
The following guide describes the manual steps required to upgrade your config version and
start using the plugin-enabled version.
This way is more complex, susceptible to errors, and success cannot be assured. Also, by
following these steps you will not get the improvements and bug fixes in the default
generated project files.
Usually you will only try to do this manually if you customized your project and deviated too much from the proposed scaffold. Before continuing, ensure that you understand the note about project customizations. Note that you might need to spend more effort doing this process manually than organizing your project customizations to follow the proposed layout, which keeps your project maintainable and upgradable with less effort in the future.
The recommended upgrade approach is to follow the Migration Guide v2 to V3 instead.
The PROJECT file now has a new layout. It stores more information about what resources are
in use, to better enable plugins to make useful decisions when scaffolding.
Furthermore, the PROJECT file itself is now versioned. The version field corresponds to the
version of the PROJECT file itself, while the layout field indicates the scaffolding and the
primary plugin version in use.
Steps to migrate
The following steps describe the manual changes required to update the project configuration file ( PROJECT ). These changes will add the information that Kubebuilder would add when generating the file. This file can be found in the root directory.
...
projectName: example
...
...
layout:
- go.kubebuilder.io/v2
...
The version field represents the version of the project’s layout. Update this to "3" :
...
version: "3"
...
The attribute resources represents the list of resources scaffolded in your project.
You will need to add the following data for each resource added to the project.
...
resources:
- api:
    ...
    crdVersion: v1beta1
  domain: my.domain
  group: webapp
  kind: Guestbook
...
Add the scope used to scaffold the CRDs by adding resources[entry].api.namespaced: true unless they were cluster-scoped:
...
resources:
- api:
    ...
    namespaced: true
  group: webapp
  kind: Guestbook
...
If you have a controller scaffolded for the API, add resources[entry].controller: true:
...
resources:
- api:
    ...
  controller: true
  group: webapp
  kind: Guestbook
Add the resource domain such as resources[entry].domain: testproject.org which usually will be
the project domain unless the API scaffold is a core type and/or an external type:
...
resources:
- api:
    ...
  domain: testproject.org
  group: webapp
  kind: Guestbook
Supportability
By default, Kubebuilder only supports core types and the APIs scaffolded in the project; unless you manually change the files, you will be unable to work with external types.
However, for an external type you might leave this attribute empty. We cannot suggest what would be the best approach in this case until it becomes officially supported by the tool. For further information check the issue #1999.
Note that you will only need to add the domain if your project has a scaffold for a core-type API whose Domain value is not empty in the Kubernetes API group qualified scheme definition. (For example, Kinds from the apps API have no domain, while Kinds from the authentication API have the domain k8s.io .)
Check the following list for the supported core types and their domains:
The following is an example where a controller was scaffolded for the core type Kind Deployment via the command create api --group apps --version v1 --kind Deployment --controller=true --resource=false --make=false :
- controller: true
  group: apps
  kind: Deployment
  path: k8s.io/api/apps/v1
  version: v1
Add the resources[entry].path with the import path for the api:
Path
If you did not scaffold an API but only generated a controller for the informed API (GVK), you do not need to add the path. Note that this usually happens when you add a controller for an external or core type.
By default, Kubebuilder only supports core types and the APIs scaffolded in the project; unless you manually change the files, you will be unable to work with external types.
The path will always be the import path used in your Go files to use the API.
...
resources:
- api:
    ...
  ...
  group: webapp
  kind: Guestbook
  path: example/api/v1
If your project is using webhooks, add resources[entry].webhooks.[type]: true for each type generated, and then add resources[entry].webhooks.webhookVersion: v1beta1:
Webhooks
The valid types are: defaulting , validation and conversion . Use the webhook type
used to scaffold the project.
resources:
- api:
    ...
  ...
  group: webapp
  kind: Guestbook
  webhooks:
    defaulting: true
    validation: true
    webhookVersion: v1beta1
Now ensure that your PROJECT file has the same information when the manifests are
generated via Kubebuilder V3 CLI.
For the QuickStart example, the PROJECT file manually updated to use
go.kubebuilder.io/v2 would look like:
domain: my.domain
layout:
- go.kubebuilder.io/v2
projectName: example
repo: example
resources:
- api:
    crdVersion: v1
    namespaced: true
  controller: true
  domain: my.domain
  group: webapp
  kind: Guestbook
  path: example/api/v1
  version: v1
version: "3"
You can check the differences between the previous layout (version 2 ) and the current format (version 3 ) with go.kubebuilder.io/v2 by comparing an example scenario which involves more than one API and webhook, see:
Example (Project version 2)
domain: testproject.org
repo: sigs.k8s.io/kubebuilder/example
resources:
- group: crew
  kind: Captain
  version: v1
- group: crew
  kind: FirstMate
  version: v1
- group: crew
  kind: Admiral
  version: v1
version: "2"
Verification
In the steps above, you updated only the PROJECT file which represents the project
configuration. This configuration is useful only for the CLI tool. It should not affect how your
project behaves.
There is no option to verify that you properly updated the configuration file. The best way to
ensure the configuration file has the correct V3+ fields is to initialize a project with the same
API(s), controller(s), and webhook(s) in order to compare generated configuration with the
manually changed configuration.
If you made mistakes in the above process, you will likely face issues using the CLI.
The following steps describe the manual changes required to modify the project’s layout
enabling your project to use the go/v3 plugin. These steps will not help you address all the
bug fixes of the already generated scaffolds.
! Deprecated APIs
The following steps will not migrate the API versions which are deprecated
apiextensions.k8s.io/v1beta1 , admissionregistration.k8s.io/v1beta1 , cert-manager.io/v1alpha2 .
Steps to migrate
Before updating the layout , please ensure you have followed the above steps to upgrade
your Project version to 3 . Once you have upgraded the project version, update the layout to
the new plugin version go.kubebuilder.io/v3 as follows:
domain: my.domain
layout:
- go.kubebuilder.io/v3
...
Ensure that your go.mod is using Go version 1.18 and the following dependency versions:
module example
go 1.18
require (
github.com/onsi/ginkgo/v2 v2.1.4
github.com/onsi/gomega v1.19.0
k8s.io/api v0.24.0
k8s.io/apimachinery v0.24.0
k8s.io/client-go v0.24.0
sigs.k8s.io/controller-runtime v0.12.1
)
With:
To allow controller-gen and the scaffolding tool to use the new API versions, replace:
CRD_OPTIONS ?= "crd:trivialVersions=true"
With:
CRD_OPTIONS ?= "crd"
To allow downloading the newer versions of the Kubernetes binaries required by Envtest into
the testbin/ directory of your project instead of the global setup, replace:
# Run tests
test: generate fmt vet manifests
go test ./... -coverprofile cover.out
With:
# Setting SHELL to bash allows bash commands to be executed by recipes.
# Options are set to exit when a recipe line exits non-zero or a piped command fails.
SHELL = /usr/bin/env bash -o pipefail
.SHELLFLAGS = -ec

ENVTEST_ASSETS_DIR=$(shell pwd)/testbin
test: manifests generate fmt vet ## Run tests.
	mkdir -p ${ENVTEST_ASSETS_DIR}
	test -f ${ENVTEST_ASSETS_DIR}/setup-envtest.sh || curl -sSLo ${ENVTEST_ASSETS_DIR}/setup-envtest.sh https://raw.githubusercontent.com/kubernetes-sigs/controller-runtime/v0.8.3/hack/setup-envtest.sh
	source ${ENVTEST_ASSETS_DIR}/setup-envtest.sh; fetch_envtest_tools $(ENVTEST_ASSETS_DIR); setup_envtest_env $(ENVTEST_ASSETS_DIR); go test ./... -coverprofile cover.out
Envtest binaries
The Kubernetes binaries that are required for the Envtest were upgraded from 1.16.4 to
1.22.1 . You can still install them globally by following these installation instructions.
To upgrade the controller-gen and kustomize version used to generate the manifests
replace:
With:
##@ Build Dependencies

## Tool Binaries
KUSTOMIZE ?= $(LOCALBIN)/kustomize
CONTROLLER_GEN ?= $(LOCALBIN)/controller-gen
ENVTEST ?= $(LOCALBIN)/setup-envtest

## Tool Versions
KUSTOMIZE_VERSION ?= v3.8.7
CONTROLLER_TOOLS_VERSION ?= v0.9.0

KUSTOMIZE_INSTALL_SCRIPT ?= "https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh"
.PHONY: kustomize
kustomize: $(KUSTOMIZE) ## Download kustomize locally if necessary.
$(KUSTOMIZE): $(LOCALBIN)
	test -s $(LOCALBIN)/kustomize || { curl -Ss $(KUSTOMIZE_INSTALL_SCRIPT) | bash -s -- $(subst v,,$(KUSTOMIZE_VERSION)) $(LOCALBIN); }

.PHONY: controller-gen
controller-gen: $(CONTROLLER_GEN) ## Download controller-gen locally if necessary.
$(CONTROLLER_GEN): $(LOCALBIN)
	test -s $(LOCALBIN)/controller-gen || GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-tools/cmd/controller-gen@$(CONTROLLER_TOOLS_VERSION)

.PHONY: envtest
envtest: $(ENVTEST) ## Download envtest-setup locally if necessary.
$(ENVTEST): $(LOCALBIN)
	test -s $(LOCALBIN)/setup-envtest || GOBIN=$(LOCALBIN) go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
And then, to make your project use the kustomize version defined in the Makefile, replace all
usage of kustomize with $(KUSTOMIZE)
Makefile
You can check all changes applied to the Makefile by looking at the sample projects generated in the testdata directory of the Kubebuilder repository, or just by creating a new project with the Kubebuilder CLI.
Replace:
With:
Replace:
. "github.com/onsi/ginkgo"
With:
. "github.com/onsi/ginkgo/v2"
Replace:
RunSpecsWithDefaultAndCustomReporters(t,
	"Controller Suite",
	[]Reporter{printer.NewlineReporter{}})
With:
RunSpecs(t, "Controller Suite")
And replace:
RunSpecsWithDefaultAndCustomReporters(t,
	"Webhook Suite",
	[]Reporter{printer.NewlineReporter{}})
With:
RunSpecs(t, "Webhook Suite")
Last but not least, remove the timeout variable from the BeforeSuite blocks.
Then, replace:
flag.Parse()
ctrl.SetLogger(zap.New(zap.UseDevMode(true)))
With:
opts := zap.Options{
Development: true,
}
opts.BindFlags(flag.CommandLine)
flag.Parse()
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
With:
func main() {
	var metricsAddr string
	var enableLeaderElection bool
	flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
	flag.BoolVar(&enableLeaderElection, "leader-elect", false,
		"Enable leader election for controller manager. "+
			"Enabling this will ensure there is only one active controller manager.")
- name: manager
  args:
  - "--health-probe-bind-address=:8081"
  - "--metrics-bind-address=127.0.0.1:8080"
  - "--leader-elect"
Verification
Finally, we can run make and make docker-build to ensure things are working fine.
Before continuing
The following steps describe a workflow to upgrade your project to remove the deprecated
Kubernetes APIs: apiextensions.k8s.io/v1beta1 , admissionregistration.k8s.io/v1beta1 ,
cert-manager.io/v1alpha2 .
The Kubebuilder CLI tool does not support scaffolded resources for both Kubernetes API versions at once, such as an API/CRD with apiextensions.k8s.io/v1beta1 and another one with apiextensions.k8s.io/v1 .
The first step is to update your PROJECT file by replacing the api.crdVersion:v1beta and
webhooks.WebhookVersion:v1beta with api.crdVersion:v1 and
webhooks.WebhookVersion:v1 which would look like:
domain: my.domain
layout: go.kubebuilder.io/v3
projectName: example
repo: example
resources:
- api:
crdVersion: v1
namespaced: true
group: webapp
kind: Guestbook
version: v1
webhooks:
defaulting: true
webhookVersion: v1
version: "3"
You can try to re-create the APIs (CRDs) and Webhooks manifests by using the --force flag.
! Before re-create
Note, however, that the tool will re-scaffold the files which means that you will lose their
content.
Before executing the commands ensure that you have the files content stored in another
place. An easy option is to use git to compare your local change with the previous
version to recover the contents.
Now, re-create the APIs (CRDs) and Webhooks manifests by running kubebuilder create api and kubebuilder create webhook for the same group, kind and versions with the flag --force , respectively.
V3 - Plugins Layout Migration Guides
The following are the migration guides for the plugin versions. Note that the plugins ecosystem was introduced with the Kubebuilder v3.0.0 release, where the go/v3 version has been the default layout since 28 Apr 2021 .
Therefore, you can check here how to migrate projects built from Kubebuilder 3.x with the plugin go/v3 to the latest.
go/v3 vs go/v4
This document covers all breaking changes when migrating from projects built using the
plugin go/v3 (default for any scaffold done since 28 Apr 2021 ) to the next alpha version of
the Golang plugin go/v4 .
controller-runtime
controller-tools
kustomize
kb-releases release notes.
Common changes
go/v4 projects use Kustomize v5x (instead of v3x)
note that some manifests under config/ directory have been changed in order to no
longer use the deprecated Kustomize features such as env vars.
A kustomization.yaml is scaffolded under config/samples . This helps simply and
flexibly generate sample manifests: kustomize build config/samples .
adds support for Apple Silicon M1 (darwin/arm64)
remove support to CRD/WebHooks Kubernetes API v1beta1 version which are no longer
supported since k8s 1.22
no longer scaffold webhook test files with "k8s.io/api/admission/v1beta1" , the k8s API which is no longer served since k8s 1.25 . By default, webhook test files are scaffolded using "k8s.io/api/admission/v1" , which is supported from k8s 1.20
no longer provide backwards compatible support with k8s versions < 1.16
change the layout to accommodate the community request to follow the Standard Go
Project Layout by moving the api(s) under a new directory called api , controller(s) under
a new directory called internal and the main.go under a new directory named cmd
TL;DR of the New `go/v4` Plugin
! Project customizations
After using the CLI to create your project, you are free to customize it how you see fit. Bear in mind that it is not recommended to deviate from the proposed layout unless you know what you are doing.
For example, you should refrain from moving the scaffolded files; doing so will make it difficult to upgrade your project in the future. You may also lose the ability to use some of the CLI features and helpers. For further information on the project layout, see the doc What’s in a basic project?
If you want to use the latest version of the Kubebuilder CLI without changing your scaffolding, check the following guide, which describes the steps to be performed manually to upgrade only your PROJECT version and start using the plugin versions.
This way is more complex, susceptible to errors, and success cannot be assured. Also, by
following these steps you will not get the improvements and bug fixes in the default
generated project files.
Please ensure you have followed the installation guide to install the required components.
The recommended way to migrate a go/v3 project is to create a new go/v4 project and copy
over the API and the reconciliation code. The conversion will end up with a project that looks
like a native go/v4 project layout (latest version).
However, in some cases, it’s possible to do an in-place upgrade (i.e. reuse the go/v3 project
layout, upgrading the PROJECT file, and scaffolds manually). For further information see
Migration from go/v3 to go/v4 by updating the files manually
Project name
For the rest of this document, we are going to use migration-project as the project
name and tutorial.kubebuilder.io as the domain. Please, select and use appropriate
values for your case.
Create a new directory with the name of your project. Note that this name is used in the
scaffolds to create the name of your manager Pod and of the Namespace where the Manager
is deployed by default.
$ mkdir migration-project-name
$ cd migration-project-name
Now, we need to initialize a go/v4 project. Before we do that, we’ll need to initialize a new go
module if we’re not on the GOPATH . While technically this is not needed inside GOPATH , it is
still recommended.
The module of your project can be found in the `go.mod` file at the root of your project:
module tutorial.kubebuilder.io/migration-project
...
domain: tutorial.kubebuilder.io
...
For this example, we are going to consider that we need to scaffold both the API types
and the controllers, but remember that this depends on how you scaffolded them in your
original project.
Now, let’s copy the API definition from api/v1/<kind>_types.go in our old project to the new
one.
These files have not been modified by the new plugin, so you should be able to replace your
freshly scaffolded files by your old one. There may be some cosmetic changes. So you can
choose to only copy the types themselves.
Now, let’s migrate the controller code from controllers/cronjob_controller.go in our old
project to the new one.
Migrate the Webhooks
Skip
If you don’t have any webhooks, you can skip this section.
Now let’s scaffold the webhooks for our CRD (CronJob). We’ll need to run the following
command with the --defaulting and --programmatic-validation flags (since our test
project uses defaulting and validating webhooks):
Now, let’s copy the webhook definition from api/v1/<kind>_webhook.go from our old project
to the new one.
Others
If there are any manual updates in main.go in v3, we need to port the changes to the new
main.go . We’ll also need to ensure all of needed controller-runtime schemes have been
registered.
If there are additional manifests added under the config directory, port them as well. Please be aware that the new version go/v4 uses Kustomize v5x and no longer Kustomize v4. Therefore, if you added customized implementations in config , you need to ensure that they work with Kustomize v5 and, if not, update or upgrade them to address any breaking changes that you might face.
In v4, installation of Kustomize has been changed from a bash script to go install . Change the kustomize dependency in the Makefile to
.PHONY: kustomize
kustomize: $(KUSTOMIZE) ## Download kustomize locally if necessary. If wrong version is installed, it will be removed before downloading.
$(KUSTOMIZE): $(LOCALBIN)
	@if test -x $(LOCALBIN)/kustomize && ! $(LOCALBIN)/kustomize version | grep -q $(KUSTOMIZE_VERSION); then \
		echo "$(LOCALBIN)/kustomize version is not expected $(KUSTOMIZE_VERSION). Removing it before installing."; \
		rm -rf $(LOCALBIN)/kustomize; \
	fi
	test -s $(LOCALBIN)/kustomize || GOBIN=$(LOCALBIN) GO111MODULE=on go install sigs.k8s.io/kustomize/kustomize/v5@$(KUSTOMIZE_VERSION)
Please ensure you have followed the installation guide to install the required components.
The following guide describes the manual steps required to upgrade your PROJECT config file
to begin using go/v4 .
This way is more complex, susceptible to errors, and success cannot be assured. Also, by
following these steps you will not get the improvements and bug fixes in the default
generated project files.
Usually it is only suggested to do this manually if you have customized your project and deviated too much from the proposed scaffold. Before continuing, ensure that you understand the note about project customizations. Note that you might need to spend more effort to do this process manually than to organize your project customizations. The proposed layout will keep your project maintainable and upgradable with less effort in the future.
The recommended upgrade approach is to follow the Migration Guide go/v3 to go/v4 instead.
Steps to migrate
The following steps describe the manual changes required to update the project configuration file ( PROJECT ). These changes will add the information that Kubebuilder would add when generating the file. This file can be found in the root directory.
Update the PROJECT file by replacing:
layout:
- go.kubebuilder.io/v3
With:
layout:
- go.kubebuilder.io/v4
New layout:
The changes in the layout result in:
...
├── cmd
│ └── main.go
├── internal
│ └── controller
└── api
Create a new directory cmd and move the main.go under it.
If your project supports multi-group, the APIs are scaffolded under a directory called apis . Rename this directory to api .
Move the controllers directory under internal and rename it to controller .
Now ensure that the imports are updated accordingly by:
Updating the main.go imports to look for the new path of your controllers under the internal/controller directory.
Then, replace:
RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o manager main.go
With:
RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o manager cmd/main.go
Update the Makefile targets to build and run the manager by replacing:
.PHONY: build
build: manifests generate fmt vet ## Build manager binary.
	go build -o bin/manager main.go

.PHONY: run
run: manifests generate fmt vet ## Run a controller from your host.
	go run ./main.go
With:
.PHONY: build
build: manifests generate fmt vet ## Build manager binary.
	go build -o bin/manager cmd/main.go

.PHONY: run
run: manifests generate fmt vet ## Run a controller from your host.
	go run ./cmd/main.go
Replace:
With:
Note that if your project has multiple groups ( multigroup:true ) then the above update
should result into "..", "..", "..", instead of "..",".."
The PROJECT tracks the paths of all APIs used in your project. Ensure that they now point to
api/... as the following example:
Before update:
group: crew
kind: Captain
path: sigs.k8s.io/kubebuilder/testdata/project-v4/apis/crew/v1
After Update:
group: crew
kind: Captain
path: sigs.k8s.io/kubebuilder/testdata/project-v4/api/crew/v1
Update the manifests under the config/ directory with all changes performed in the default scaffold done with the go/v4 plugin (see for example testdata/project-v4/config/ ) so that all changes in the default scaffolds are applied to your project.
Create config/samples/kustomization.yaml with all Custom Resource samples specified in config/samples (see for example testdata/project-v4/config/samples/kustomization.yaml ).
You can compare the config/ directory of the samples scaffolded under the testdata directory by checking the differences between testdata/project-v3/config/ and testdata/project-v4/config/ , which are samples created with the same commands, the only difference being the plugin versions.
However, note that if you create your project with Kubebuilder CLI 3.0.0, its scaffolds might
change to accommodate changes up to the latest releases using go/v3 which are not
considered breaking for users and/or are forced by the changes introduced in the
dependencies used by the project such as controller-runtime and controller-tools.
Makefile updates
Update the Makefile with the changes which can be found in the samples under testdata for
the release tag used. (see for example testdata/project-v4/Makefile )
Update the dependencies
Update the go.mod with the changes which can be found in the samples under testdata for
the release tag used. (see for example testdata/project-v4/go.mod ). Then, run go mod
tidy to ensure that you get the latest dependencies and your Golang code has no breaking
changes.
Verification
In the steps above, you updated your project manually with the goal of ensuring that it follows
the changes in the layout introduced with the go/v4 plugin that update the scaffolds.
There is no option to verify that you properly updated the PROJECT file of your project. The best way to ensure that everything is updated correctly would be to initialize a project using the go/v4 plugin, i.e. using kubebuilder init --domain tutorial.kubebuilder.io --plugins=go/v4 , and generating the same API(s), controller(s), and webhook(s) in order to compare the generated configuration with the manually changed configuration.
Also, after all updates you would run the following commands:
make manifests (to re-generate the files using the latest version of controller-gen after you update the Makefile)
make all (to ensure that you are able to build and perform all operations)
While Kubebuilder will not scaffold out a project structure compatible with multiple API
groups in the same repository by default, it’s possible to modify the default project
structure to support it.
Note that the process mainly is to ensure that your API(s) and controller(s) will be moved
under new directories with their respective group name.
You can verify the version by looking at the PROJECT file. The currently default and
recommended version is go/v4.
The layout go/v3 is deprecated; if you are using go/v3 it is recommended that you migrate to go/v4. However, this documentation is still valid. See Migration from go/v3 to go/v4.
To change the layout of your project to support Multi-Group run the command kubebuilder
edit --multigroup=true . Once you switch to a multi-group layout, the new Kinds will be
generated in the new layout but additional manual work is needed to move the old API groups
to the new layout.
Generally, we use the prefix for the API group as the directory name. We can check
api/v1/groupversion_info.go to find that out:
// +groupName=batch.tutorial.kubebuilder.io
package v1
Then, we’ll move our existing APIs into a new subdirectory, “batch”:
mkdir api/batch
mv api/* api/batch
After moving the APIs to a new directory, the same needs to be applied to the controllers. For
go/v4:
mkdir internal/controller/batch
mv internal/controller/* internal/controller/batch/
Next, we’ll need to update all the references to the old package name. For CronJob, the files to update would be main.go and internal/controller/batch/cronjob_controller.go , in their respective locations in the new project structure.
If you’ve added additional files to your project, you’ll need to track down imports there as
well.
Finally, fix the PROJECT file manually: the command kubebuilder edit --multigroup=true sets our project to multi-group, but it doesn’t fix the path of the existing APIs. For each resource, we need to modify the path.
In this process, if the project is not new and has previously implemented APIs, they would still need to be modified as needed. Notice that with the multi-group project the Kind API’s files are created under api/<group>/<version> instead of api/<version> . Also, note that the controllers will be created under internal/controller/<group> instead of internal/controller .
That is the reason why we moved the previously generated APIs to their respective
locations in the new structure. Remember to update the references in imports
accordingly.
For envtest to install CRDs correctly into the test environment, the relative path to the
CRD directory needs to be updated accordingly in each
internal/controller/<group>/suite_test.go file. We need to add additional ".." to
our CRD directory relative path as shown below.
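A sketch of the adjusted path (the exact number of ".." entries depends on how deep your group directory is):

testEnv = &envtest.Environment{
	CRDDirectoryPaths:     []string{filepath.Join("..", "..", "..", "config", "crd", "bases")},
	ErrorIfCRDPathMissing: true,
}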
The CronJob tutorial explains each of these changes in more detail (in the context of how
they’re generated by Kubebuilder for single-group projects).
Reference
Generating CRDs
Using Finalizers Finalizers are a mechanism to execute any custom logic related to a resource before it gets deleted from the Kubernetes cluster.
Kind cluster
What’s a webhook? Webhooks are HTTP callbacks. There are 3 types of webhooks in k8s: 1) admission webhook 2) CRD conversion webhook 3) authorization webhook
CRD Generation
CRD Validation
Webhook
Object/DeepCopy
RBAC
controller-gen CLI
completion
Artifacts
Platform Support
Metrics
Reference
Makefile Helpers
CLI plugins
Generating CRDs
Kubebuilder uses a tool called controller-gen to generate utility code and Kubernetes
object YAML, like CustomResourceDefinitions.
To do this, it makes use of special “marker comments” (comments that start with // + )
to indicate additional information about fields, types, and packages. In the case of CRDs,
these are generally pulled from your _types.go files. For more information on markers,
see the marker reference docs.
Kubebuilder provides a make target to run controller-gen and generate CRDs: make
manifests .
When you run make manifests , you should see CRDs generated under the
config/crd/bases directory. make manifests can generate a number of other artifacts
as well -- see the marker reference docs for more details.
Validation
CRDs support declarative validation using an OpenAPI v3 schema in the validation
section.
For example:
// +kubebuilder:validation:MaxItems=500
// +kubebuilder:validation:MinItems=1
// +kubebuilder:validation:UniqueItems=true
Knights []string `json:"knights,omitempty"`
// +kubebuilder:validation:Enum=Lion;Wolf;Dragon
type Alias string
// +kubebuilder:validation:Minimum=1
// +kubebuilder:validation:Maximum=3
// +kubebuilder:validation:ExclusiveMaximum=false
type Rank int32
Additional Printer Columns
Starting with Kubernetes 1.11, kubectl get can ask the server what columns to display.
For CRDs, this can be used to provide useful, type-specific information with kubectl get ,
similar to the information provided for built-in types.
The information that gets displayed can be controlled with the additionalPrinterColumns
field on your CRD, which is controlled by the +kubebuilder:printcolumn marker on the
Go type for your CRD.
For instance, in the following example, we add fields to display information about the
knights, rank, and alias fields from the validation example:
// +kubebuilder:printcolumn:name="Alias",type=string,JSONPath=`.spec.alias`
// +kubebuilder:printcolumn:name="Rank",type=integer,JSONPath=`.spec.rank`
// +kubebuilder:printcolumn:name="Bravely Run
Away",type=boolean,JSONPath=`.spec.knights[?(@ == "Sir
Robin")]`,description="when danger rears its ugly head, he bravely turned his
tail and fled",priority=10
//
+kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTim
type Toy struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Subresources
CRDs can choose to implement the /status and /scale subresources as of Kubernetes
1.13.
It’s generally recommended that you make use of the /status subresource on all
resources that have a status field.
Status
// +kubebuilder:subresource:status
type Toy struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Scale
For example:
// +kubebuilder:subresource:status
// +kubebuilder:subresource:scale:specpath=.spec.replicas,statuspath=.status.replicas
type CustomSet struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Multiple Versions
As of Kubernetes 1.13, you can have multiple versions of your Kind defined in your CRD,
and use a webhook to convert between them.
You’ll need to enable this by switching the line in your makefile that says CRD_OPTIONS ?=
"crd:trivialVersions=true,preserveUnknownFields=false to CRD_OPTIONS ?=
crd:preserveUnknownFields=false if using v1beta CRDs, and CRD_OPTIONS ?= crd if
using v1 (recommended).
Then, you can use the +kubebuilder:storageversion marker to indicate the GVK that
should be used to store data by the API server.
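For example, a minimal sketch placing the marker on the Go type of the version that the API server should persist (the CronJob kind is just an illustration):

// +kubebuilder:object:root=true
// +kubebuilder:storageversion
type CronJob struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   CronJobSpec   `json:"spec,omitempty"`
	Status CronJobStatus `json:"status,omitempty"`
}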
By default, kubebuilder create api will create CRDs of API version v1 , a version
introduced in Kubernetes v1.16. If your project intends to support Kubernetes cluster
versions older than v1.16, you must use the v1beta1 API version:
To support Kubernetes clusters of version v1.14 or lower, you'll also need to remove the controller-gen option preserveUnknownFields=false from your Makefile. This is done by switching the line that says CRD_OPTIONS ?= "crd:trivialVersions=true,preserveUnknownFields=false" to CRD_OPTIONS ?= "crd:trivialVersions=true".
You can also run controller-gen directly, if you want to see what it's doing. The make manifests target uses the output:crd:artifacts output rule to indicate that CRD-related config (non-code) artifacts should end up in config/crd/bases instead of config/crd. To see all of controller-gen's options and output rules, run:
$ controller-gen -h
or, for more details:
$ controller-gen -hhh
Using Finalizers
Finalizers allow controllers to implement asynchronous pre-delete hooks. Let's say you create an external resource (such as a storage bucket) for each object of your API type, and you want to delete the associated external resource when the object is deleted from Kubernetes; you can use a finalizer to do that.
You can read more about the finalizers in the Kubernetes reference docs. The section
below demonstrates how to register and trigger pre-delete hooks in the Reconcile
method of a controller.
The key point to note is that a finalizer causes “delete” on the object to become an
“update” to set deletion timestamp. Presence of deletion timestamp on the object
indicates that it is being deleted. Otherwise, without finalizers, a delete shows up as a
reconcile where the object is missing from the cache.
Highlights:
If the object is not being deleted and does not have the finalizer registered, then
add the finalizer and update the object in Kubernetes.
If object is being deleted and the finalizer is still present in finalizers list, then
execute the pre-delete logic and remove the finalizer and update the object.
Ensure that the pre-delete logic is idempotent.
$ vim ../../cronjob-tutorial/testdata/finalizer_example.go
// Imports (hidden) ◀
By default, kubebuilder will include the RBAC rules necessary to update finalizers for
CronJobs.
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs,ver
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/sta
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/fin
The code snippet below shows skeleton code for implementing a finalizer.
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	log := r.Log.WithValues("cronjob", req.NamespacedName)

	cronJob := &batchv1.CronJob{}
	if err := r.Get(ctx, req.NamespacedName, cronJob); err != nil {
		log.Error(err, "unable to fetch CronJob")
		// we'll ignore not-found errors, since they can't be fixed by an immediate
		// requeue (we'll need to wait for a new notification), and we can get them
		// on deleted requests.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
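The remainder of the handler (omitted above) follows the highlights listed earlier. A minimal sketch, assuming a finalizer name such as batch.tutorial.kubebuilder.io/finalizer, the controllerutil package from sigs.k8s.io/controller-runtime/pkg/controller/controllerutil, and a deleteExternalResources helper that you implement yourself:

	myFinalizerName := "batch.tutorial.kubebuilder.io/finalizer"

	if cronJob.ObjectMeta.DeletionTimestamp.IsZero() {
		// The object is not being deleted: register our finalizer if it is missing.
		if !controllerutil.ContainsFinalizer(cronJob, myFinalizerName) {
			controllerutil.AddFinalizer(cronJob, myFinalizerName)
			if err := r.Update(ctx, cronJob); err != nil {
				return ctrl.Result{}, err
			}
		}
	} else if controllerutil.ContainsFinalizer(cronJob, myFinalizerName) {
		// The object is being deleted: run the (idempotent) pre-delete logic,
		// then remove the finalizer so that deletion can complete.
		if err := r.deleteExternalResources(ctx, cronJob); err != nil {
			return ctrl.Result{}, err
		}
		controllerutil.RemoveFinalizer(cronJob, myFinalizerName)
		if err := r.Update(ctx, cronJob); err != nil {
			return ctrl.Result{}, err
		}
	}

	return ctrl.Result{}, nil
}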
Creating Events
It is often useful to publish Event objects from the controller Reconcile function as they
allow users or any automated processes to see what is going on with a particular object
and respond to them.
Recent Events for an object can be viewed by running $ kubectl describe <resource
kind> <resource name> . Also, they can be checked by running $ kubectl get events .
Be aware that it is not recommended to emit Events for all operations. If authors raise too many events, it creates a bad UX for those consuming the solutions on the cluster, who may find it difficult to filter actionable events from the clutter. For more information, please take a look at the Kubernetes API conventions.
Writing Events
Anatomy of an Event:
Example Usage
Following are the steps, with examples, to help you raise events in your controller's reconciliations. Events are published from a Controller using an EventRecorder, which can be created for a Controller by calling GetEventRecorderFor(name string) on a Manager. We will change the implementation scaffolded in cmd/main.go:
if err = (&controller.MyKindReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
// Note that we added the following line:
Recorder: mgr.GetEventRecorderFor("mykind-controller"),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller",
"MyKind")
os.Exit(1)
}
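With the Recorder in place, you can emit Events from the Reconcile method. A minimal sketch (the event type constant comes from k8s.io/api/core/v1; the reason and message are illustrative):

// myKind is the custom resource fetched earlier in Reconcile.
r.Recorder.Event(myKind, corev1.EventTypeNormal, "Reconciling",
	fmt.Sprintf("Custom resource %s is being reconciled", myKind.Name))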
You must also grant the RBAC rules permissions to allow your project to create Events.
Therefore, ensure that you add the RBAC into your controller:
...
//+kubebuilder:rbac:groups=core,resources=events,verbs=create;patch
...
func (r *MyKindReconciler) Reconcile(ctx context.Context, req ctrl.Request)
(ctrl.Result, error) {
And then, run $ make manifests to update the rules under config/rbac/role.yaml .
Watching Resources
Inside a Reconcile() control loop, you perform a collection of operations until the cluster reaches the desired state. Therefore, it can be necessary to know when a resource that you care about is changed. When there is an action (create, update, edit, delete, etc.) on a watched resource, Reconcile() should be called for the resources watching it.
Controller Runtime libraries provide many ways for resources to be managed and
watched. This ranges from the easy and obvious use cases, such as watching the
resources which were created and managed by the controller, to more unique and
advanced use cases.
See each subsection for explanations and examples of the different ways in which your
controller can Watch the resources it cares about.
Kubebuilder and the Controller Runtime libraries allow for controllers to implement the
logic of their CRD through easy management of Kubernetes resources.
Deployments must know when the ReplicaSets that they manage are changed
ReplicaSets must know when their Pods are deleted, or change from healthy to
unhealthy.
Through the Owns() functionality, Controller Runtime provides an easy way to watch
dependency resources for changes.
$ vim owned-resource/api.go
// Imports (hidden) ◀
In this example the controller is doing basic management of a Deployment object.
The Spec here allows the user to customize the deployment created in various ways. For
example, the number of replicas it runs with.
$ vim owned-resource/controller.go
package owned_resource
import (
"context"
"github.com/go-logr/logr"
kapps "k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"
appsv1 "tutorial.kubebuilder.io/project/api/v1"
)
In this basic example, SimpleDeployments are used to create and manage simple
Deployments that can be configured through the SimpleDeployment Spec.
Build the deployment that we want to see exist within the cluster
deployment := &kapps.Deployment{}
Set the controller reference, specifying that this Deployment is controlled by the
SimpleDeployment being reconciled.
This will allow for the SimpleDeployment to be reconciled when changes to the
Deployment are noticed.
if err := controllerutil.SetControllerReference(simpleDeployment,
deployment, r.scheme); err != nil {
return ctrl.Result{}, err
}
Finally, we add this reconciler to the manager, so that it gets started when the manager is
started.
Since we create dependency Deployments during the reconcile, we can specify that the
controller Owns Deployments . This will tell the manager that if a Deployment , or its
status, is updated, then the SimpleDeployment in its ownerRef field should be reconciled.
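A sketch of what that SetupWithManager might look like, using the types from the imports shown above:

func (r *SimpleDeploymentReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&appsv1.SimpleDeployment{}).
		// Watch the Deployments created by this controller; a change to a
		// Deployment (or its status) enqueues the owning SimpleDeployment.
		Owns(&kapps.Deployment{}).
		Complete(r)
}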
By default, Kubebuilder and the Controller Runtime libraries allow for controllers to easily
watch the resources that they manage as well as dependent resources that are Owned by
the controller. However, those are not always the only resources that need to be watched
in the cluster.
The ConfigDeployment CRD will hold a reference to a ConfigMap inside its Spec.
The ConfigDeployment controller will be in charge of creating a deployment with
Pods that use the ConfigMap. These pods should be updated anytime that the
referenced ConfigMap changes, therefore the ConfigDeployments will need to be
reconciled on changes to the referenced ConfigMap.
$ vim external-indexed-field/api.go
// Imports (hidden) ◀
In our type’s Spec, we want to allow the user to pass in a reference to a configMap in the
same namespace. It’s also possible for this to be a namespaced reference, but in this
example we will assume that the referenced object lives in the same namespace.
This field does not need to be optional. If the field is required, the indexing code in the
controller will need to be modified.
$ vim external-indexed-field/controller.go
// Apache License (hidden) ◀
package external_indexed_field
import (
"context"
"github.com/go-logr/logr"
kapps "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/fields" // Required for Watching
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types" // Required for Watching
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder" // Required for Watching
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/handler" // Required for Watching
"sigs.k8s.io/controller-runtime/pkg/predicate" // Required for Watching
"sigs.k8s.io/controller-runtime/pkg/reconcile" // Required for Watching
"sigs.k8s.io/controller-runtime/pkg/source" // Required for Watching
appsv1 "tutorial.kubebuilder.io/project/api/v1"
)
Determine the path of the field in the ConfigDeployment CRD that we wish to use as the
“object reference”. This will be used in both the indexing and watching.
const (
configMapField = ".spec.configMap"
)
There are two additional resources that the controller needs to have access to, other than
ConfigDeployments.
//+kubebuilder:rbac:groups=apps.tutorial.kubebuilder.io,resources=configdeploym
//+kubebuilder:rbac:groups=apps.tutorial.kubebuilder.io,resources=configdeploym
//+kubebuilder:rbac:groups=apps.tutorial.kubebuilder.io,resources=configdeploym
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;crea
//+kubebuilder:rbac:groups=apps,resources=deployments/status,verbs=get
//+kubebuilder:rbac:groups="",resources=configmaps,verbs=get;list;watch
Reconcile will be in charge of reconciling the state of ConfigDeployments.
ConfigDeployments are used to manage Deployments whose pods are updated
whenever the configMap that they use is updated.
For that reason we need to add an annotation to the PodTemplate within the
Deployment we create. This annotation will keep track of the latest version of the data
within the referenced ConfigMap. Therefore when the version of the configMap is
changed, the PodTemplate in the Deployment will change. This will cause a rolling
upgrade of all Pods managed by the Deployment.
Skip down to the SetupWithManager function to see how we ensure that Reconcile is
called when the referenced ConfigMaps are updated.
// Hash the data in some way, or just use the version of the Object
configMapVersion = foundConfigMap.ResourceVersion
}
Finally, we add this reconciler to the manager, so that it gets started when the manager is
started.
Since we create dependency Deployments during the reconcile, we can specify that the
controller Owns Deployments.
However the ConfigMaps that we want to watch are not owned by the ConfigDeployment
object. Therefore we must specify a custom way of watching those objects. This watch
logic is complex, so we have split it into a separate method.
The configMap field must be indexed by the manager, so that we will be able to look up ConfigDeployments by a referenced ConfigMap name. This allows us to quickly answer the question of which ConfigDeployments reference a given ConfigMap:
if err := mgr.GetFieldIndexer().IndexField(context.Background(),
&appsv1.ConfigDeployment{}, configMapField, func(rawObj client.Object)
[]string {
// Extract the ConfigMap name from the ConfigDeployment Spec, if one
is provided
configDeployment := rawObj.(*appsv1.ConfigDeployment)
if configDeployment.Spec.ConfigMap == "" {
return nil
}
return []string{configDeployment.Spec.ConfigMap}
}); err != nil {
return err
}
As explained in the CronJob tutorial, the controller will first register the Type that it
manages, as well as the types of subresources that it controls. Since we also want to
watch ConfigMaps that are not controlled or managed by the controller, we will need to
use the Watches() functionality as well.
builder.WithPredicates(predicate.ResourceVersionChangedPredicate{}),
).
Complete(r)
}
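For context, the complete registration might look roughly like the following sketch (newer controller-runtime versions pass the watched object directly instead of wrapping it in source.Kind; findObjectsForConfigMap is the mapping function discussed below):

func (r *ConfigDeploymentReconciler) SetupWithManager(mgr ctrl.Manager) error {
	// ... field indexer registration shown above ...
	return ctrl.NewControllerManagedBy(mgr).
		For(&appsv1.ConfigDeployment{}).
		Owns(&kapps.Deployment{}).
		Watches(
			&source.Kind{Type: &corev1.ConfigMap{}},
			handler.EnqueueRequestsFromMapFunc(r.findObjectsForConfigMap),
			builder.WithPredicates(predicate.ResourceVersionChangedPredicate{}),
		).
		Complete(r)
}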
Because we have already created an index on the configMap reference field, this mapping function is quite straightforward. We first need to list out all ConfigDeployments that use the ConfigMap given to the mapping function. This is done by merely submitting a List request using our indexed field as the field selector.
When the list of ConfigDeployments that reference the ConfigMap is found, we just need
to loop through the list and create a reconcile request for each one. If an error occurs
fetching the list, or no ConfigDeployments are found, then no reconcile requests will be
returned.
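A sketch of that mapping function, using the field index registered above (the exact signature depends on your controller-runtime version, which may also pass a context.Context as the first argument):

func (r *ConfigDeploymentReconciler) findObjectsForConfigMap(configMap client.Object) []reconcile.Request {
	attachedConfigDeployments := &appsv1.ConfigDeploymentList{}
	listOps := &client.ListOptions{
		// Use the index registered in SetupWithManager as a field selector.
		FieldSelector: fields.OneTermEqualSelector(configMapField, configMap.GetName()),
		Namespace:     configMap.GetNamespace(),
	}
	if err := r.List(context.TODO(), attachedConfigDeployments, listOps); err != nil {
		return []reconcile.Request{}
	}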
requests := make([]reconcile.Request,
len(attachedConfigDeployments.Items))
for i, item := range attachedConfigDeployments.Items {
requests[i] = reconcile.Request{
NamespacedName: types.NamespacedName{
Name: item.GetName(),
Namespace: item.GetNamespace(),
},
}
}
return requests
}
Kind Cluster
This only covers the basics of using a kind cluster. You can find more details in the kind documentation.
Installation
You can follow this to install kind .
Create a Cluster
You can simply create a kind cluster by running kind create cluster.
To customize your cluster, you can provide additional configuration. For example, the
following is a sample kind configuration.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
Using the configuration above, running kind create cluster --config <config-file> will give you a k8s v1.17.2 cluster with 1 control-plane node and 3 worker nodes.
You can use the --image flag to specify the cluster version you want, e.g. --image=kindest/node:v1.17.2; the supported versions are listed here.
See Load a local image into a kind cluster for more information.
Delete a Cluster
kind delete cluster
Webhook
Webhooks are requests for information sent in a blocking fashion. A web application implementing webhooks will send an HTTP request to another application when certain events happen.
Kubernetes supports these dynamic admission webhooks as of version 1.9 (when the
feature entered beta).
Kubernetes supports the conversion webhooks as of version 1.15 (when the feature
entered beta).
By default, kubebuilder create webhook will create webhook configs of API version v1 ,
a version introduced in Kubernetes v1.16. If your project intends to support Kubernetes
cluster versions older than v1.16, you must use the v1beta1 API version:
Admission Webhooks
Admission webhooks are HTTP callbacks that receive admission requests, process them
and return admission responses.
Mutating Admission Webhook: These can mutate the object while it's being created or updated, before it gets stored. They can be used to default fields in resource requests, e.g. fields in a Deployment that are not specified by the user, or to inject sidecar containers.
Validating Admission Webhook: These can validate the object while it’s being
created or updated, before it gets stored. It allows more complex validation than
pure schema-based validation. e.g. cross-field validation and pod image whitelisting.
The apiserver by default doesn’t authenticate itself to the webhooks. However, if you
want to authenticate the clients, you can configure the apiserver to use basic auth, bearer
token, or a cert to authenticate itself to the webhooks. You can find detailed steps here.
It is very easy to build admission webhooks for CRDs, which has been covered in the
CronJob tutorial. Given that kubebuilder doesn’t support webhook scaffolding for core
types, you have to use the library from controller-runtime to handle it. There is an
example in controller-runtime.
It is suggested to use kubebuilder to initialize a project, and then you can follow the steps
below to add admission webhooks for core types.
If you need a client, just pass in the client at struct construction time.
If you add the InjectDecoder method for your handler, a decoder will be injected for
you.
func (a *podAnnotator) InjectDecoder(d *admission.Decoder) error {
a.decoder = d
return nil
}
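For reference, a minimal handler that the snippet above could belong to might look like the following sketch, based on the podAnnotator example in controller-runtime (the annotation key and value are illustrative):

type podAnnotator struct {
	Client  client.Client
	decoder *admission.Decoder
}

func (a *podAnnotator) Handle(ctx context.Context, req admission.Request) admission.Response {
	pod := &corev1.Pod{}
	if err := a.decoder.Decode(req, pod); err != nil {
		return admission.Errored(http.StatusBadRequest, err)
	}

	// Mutate the object, e.g. add an annotation.
	if pod.Annotations == nil {
		pod.Annotations = map[string]string{}
	}
	pod.Annotations["example-mutating-admission-webhook"] = "foo"

	marshaledPod, err := json.Marshal(pod)
	if err != nil {
		return admission.Errored(http.StatusInternalServerError, err)
	}
	return admission.PatchResponseFromRaw(req.Object.Raw, marshaledPod)
}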
Note: in order to have controller-gen generate the webhook configuration for you, you
need to add markers. For example, // +kubebuilder:webhook:path=/mutate-v1-
pod,mutating=true,failurePolicy=fail,groups="",resources=pods,verbs=create;update,
Update main.go
Now you need to register your handler in the webhook server.
mgr.GetWebhookServer().Register("/mutate-v1-pod", &webhook.Admission{Handler:
&podAnnotator{Client: mgr.GetClient()}})
You need to ensure the path here matches the path in the marker.
Deploy
Deploying it is just like deploying a webhook server for CRD. You need to
Kubebuilder makes use of a tool called controller-gen for generating utility code and
Kubernetes YAML. This code and config generation is controlled by the presence of
special “marker comments” in Go code.
Markers are single-line comments that start with a plus, followed by a marker name,
optionally followed by some marker specific configuration:
// +kubebuilder:validation:Optional
// +kubebuilder:validation:MaxItems=2
//
+kubebuilder:printcolumn:JSONPath=".status.replicas",name=Replicas,type=string
difference between // +optional and //
+kubebuilder:validation:Optional
If you’re using controller-gen only then they’re redundant, but if you’re using other
generators or you want developers that need to build their own clients for your API,
you’ll want to also include +optional .
See each subsection for information about different types of code and YAML generation.
Marker Syntax
Exact syntax is described in the godocs for controller-tools.
Multi-option
( +kubebuilder:printcolumn:JSONPath=".status.replicas",name=Replicas,type=stri
multi-option markers take one or more named arguments. The first argument is
separated from the name by a colon, and latter arguments are comma-separated.
Order of arguments doesn’t matter. Some arguments may be optional.
Marker arguments may be strings, ints, bools, slices, or maps thereof. Strings, ints, and
bools follow their Go syntax:
// +kubebuilder:validation:ExclusiveMaximum=false
// +kubebuilder:validation:Format="date-time"
// +kubebuilder:validation:Maximum=42
For convenience, in simple cases the quotes may be omitted from strings, although this is
not encouraged for anything other than single-word strings:
// +kubebuilder:validation:Type=string
Slices may be specified either by surrounding them with curly braces and separating with commas, or, in simple cases, by separating values with semicolons:
// +kubebuilder:validation:Enum=Wallace;Gromit;Chicken
Maps are specified with string keys and values of any type (effectively
map[string]interface{} ). A map is surrounded by curly braces ( {} ), each key and value
is separated by a colon ( : ), and each key-value pair is separated by a comma:
CRD Generation
These markers describe how to construct a custom resource definition from a series of
Go types and packages. Generation of the actual validation schema is described by the
validation markers.
// +kubebuilder:deprecatedversion:warning=‹string› on type
marks this version as deprecated.
// +kubebuilder:metadata:annotations=‹[]string›,labels=‹[]string› on type
configures the additional annotations or labels for this CRD. For example adding
annotation "api-approved.kubernetes.io" for a CRD with Kubernetes groups, or
annotation "cert-manager.io/inject-ca-from-secret" for a CRD that needs CA
injection.
// +kubebuilder:printcolumn:JSONPath=‹string›,description=‹string›,format=‹string›,name=‹string›,priority=‹int›,type=‹string› on type
adds a column to "kubectl get" output for this CRD.
// +kubebuilder:resource:categories=‹[]string›,path=‹string›,scope=‹string›,shortName=‹[]string›,singular=‹string› on type
configures naming and scope for a CRD.
// +kubebuilder:skipversion on type
removes the particular version of the CRD from the CRDs spec.
// +kubebuilder:storageversion on type
marks this version as the "storage version" for the CRD for conversion.
// +kubebuilder:subresource:scale:selectorpath=‹string›,specpath=‹string›,statuspath=‹string› on type
enables the "/scale" subresource on a CRD.
// +kubebuilder:subresource:status on type
enables the "/status" subresource on a CRD.
// +kubebuilder:unservedversion on type
does not serve this version.
// +groupName:=‹string› on package
specifies the API group name for this package.
// +kubebuilder:skip on package
don't consider this package as an API version.
// +versionName:=‹string› on package
overrides the API group version for this package (defaults to the package name).
CRD Validation
These markers modify how the CRD validation schema is produced for the types and
fields they modify. Each corresponds roughly to an OpenAPI/JSON schema option.
// +kubebuilder:default:=‹any› on field
sets the default value for this field.
// +kubebuilder:example:=‹any› on field
sets the example value for this field.
// +kubebuilder:validation:EmbeddedResource on field
EmbeddedResource marks a field as an embedded resource with apiVersion, kind and metadata fields.
// +kubebuilder:validation:Enum:=‹[]any› on field
specifies that this (scalar) field is restricted to the *exact* values specified here.
// +kubebuilder:validation:ExclusiveMaximum:=‹bool› on field
indicates that the maximum is "up to" but not including that value.
// +kubebuilder:validation:ExclusiveMinimum:=‹bool› on field
indicates that the minimum is "up to" but not including that value.
// +kubebuilder:validation:Format:=‹string› on field
specifies additional "complex" formatting for this field.
// +kubebuilder:validation:MaxItems:=‹int› on field
specifies the maximum length for this list.
// +kubebuilder:validation:MaxLength:=‹int› on field
specifies the maximum length for this string.
// +kubebuilder:validation:MaxProperties:=‹int› on field
restricts the number of keys in an object
// +kubebuilder:validation:Maximum:=‹› on field
specifies the maximum numeric value that this field can have.
// +kubebuilder:validation:MinItems:=‹int› on field
specifies the minimum length for this list.
// +kubebuilder:validation:MinLength:=‹int› on field
specifies the minimum length for this string.
// +kubebuilder:validation:MinProperties:=‹int› on field
restricts the number of keys in an object
// +kubebuilder:validation:Minimum:=‹› on field
specifies the minimum numeric value that this field can have. Negative numbers are
supported.
// +kubebuilder:validation:MultipleOf:=‹› on field
specifies that this field must have a numeric value that's a multiple of this one.
// +kubebuilder:validation:Optional on field
specifies that this field is optional, if fields are required by default.
// +kubebuilder:validation:Pattern:=‹string› on field
specifies that this string must match the given regular expression.
// +kubebuilder:validation:Required on field
specifies that this field is required, if fields are optional by default.
// +kubebuilder:validation:Schemaless on field
marks a field as being a schemaless object.
// +kubebuilder:validation:Type:=‹string› on field
overrides the type for this field (which defaults to the equivalent of the Go type).
// +kubebuilder:validation:UniqueItems:=‹bool› on field
specifies that all items in this list must be unique.
// +kubebuilder:validation:XEmbeddedResource on field
EmbeddedResource marks a field as an embedded resource with apiVersion, kind and metadata fields.
// +kubebuilder:validation:XIntOrString on field
IntOrString marks a field as an IntOrString.
// +kubebuilder:validation:XValidation:message=‹string›,rule=‹string› on field
marks a field as requiring a value for which a given expression evaluates to true.
// +nullable on field
marks this field as allowing the "null" value.
// +optional on field
specifies that this field is optional, if fields are required by default.
// +kubebuilder:validation:Enum:=‹[]any› on type
specifies that this (scalar) field is restricted to the *exact* values specified here.
// +kubebuilder:validation:ExclusiveMaximum:=‹bool› on type
indicates that the maximum is "up to" but not including that value.
// +kubebuilder:validation:ExclusiveMinimum:=‹bool› on type
indicates that the minimum is "up to" but not including that value.
// +kubebuilder:validation:Format:=‹string› on type
specifies additional "complex" formatting for this field.
// +kubebuilder:validation:MaxItems:=‹int› on type
specifies the maximum length for this list.
// +kubebuilder:validation:MaxLength:=‹int› on type
specifies the maximum length for this string.
// +kubebuilder:validation:MaxProperties:=‹int› on type
restricts the number of keys in an object
// +kubebuilder:validation:Maximum:=‹› on type
specifies the maximum numeric value that this field can have.
// +kubebuilder:validation:MinItems:=‹int› on type
specifies the minimum length for this list.
// +kubebuilder:validation:MinLength:=‹int› on type
specifies the minimum length for this string.
// +kubebuilder:validation:MinProperties:=‹int› on type
restricts the number of keys in an object
// +kubebuilder:validation:Minimum:=‹› on type
specifies the minimum numeric value that this field can have. Negative numbers are
supported.
// +kubebuilder:validation:MultipleOf:=‹› on type
specifies that this field must have a numeric value that's a multiple of this one.
// +kubebuilder:validation:Pattern:=‹string› on type
specifies that this string must match the given regular expression.
// +kubebuilder:validation:Type:=‹string› on type
overrides the type for this field (which defaults to the equivalent of the Go type).
// +kubebuilder:validation:UniqueItems:=‹bool› on type
specifies that all items in this list must be unique.
// +kubebuilder:validation:XEmbeddedResource on type
EmbeddedResource marks a field as an embedded resource with apiVersion, kind and metadata fields.
// +kubebuilder:validation:XIntOrString on type
IntOrString marks a field as an IntOrString.
// +kubebuilder:validation:XValidation:message=‹string›,rule=‹string› on type
marks a field as requiring a value for which a given expression evaluates to true.
// +kubebuilder:validation:Optional on package
specifies that all fields in this package are optional by default.
// +kubebuilder:validation:Required on package
specifies that all fields in this package are required by default.
CRD Processing
These markers help control how the Kubernetes API server processes API requests
involving your custom resources.
// +kubebuilder:pruning:PreserveUnknownFields on field
PreserveUnknownFields stops the apiserver from pruning fields which are not
specified.
// +kubebuilder:validation:XPreserveUnknownFields on field
PreserveUnknownFields stops the apiserver from pruning fields which are not
specified.
// +listMapKey:=‹string› on field
specifies the keys to map listTypes.
// +listType:=‹string› on field
specifies the type of data-structure that the list represents (map, set, atomic).
// +mapType:=‹string› on field
specifies the level of atomicity of the map; i.e. whether each item in the map is
independent of the others, or all fields are treated as a single unit.
// +structType:=‹string› on field
specifies the level of atomicity of the struct; i.e. whether each field in the struct is
independent of the others, or all fields are treated as a single unit.
// +kubebuilder:pruning:PreserveUnknownFields on type
PreserveUnknownFields stops the apiserver from pruning fields which are not
specified.
// +kubebuilder:validation:XPreserveUnknownFields on type
PreserveUnknownFields stops the apiserver from pruning fields which are not
specified.
// +listMapKey:=‹string› on type
specifies the keys to map listTypes.
// +listType:=‹string› on type
specifies the type of data-structure that the list represents (map, set, atomic).
// +mapType:=‹string› on type
specifies the level of atomicity of the map; i.e. whether each item in the map is
independent of the others, or all fields are treated as a single unit.
// +structType:=‹string› on type
specifies the level of atomicity of the struct; i.e. whether each field in the struct is
independent of the others, or all fields are treated as a single unit.
Webhook
These markers describe how webhook configuration is generated. Use these to keep the
description of your webhooks close to the code that implements them.
// +kubebuilder:webhook:admissionReviewVersions=‹[]string›,failurePolicy=‹string›,groups=‹[]string›,matchPolicy=‹string›,mutating=‹bool›,name=‹string›,path=‹string›,reinvocationPolicy=‹string›,resources=‹[]string›,sideEffects=‹string›,verbs=‹[]string›,versions=‹[]string›,webhookVersions=‹[]string› on package
specifies how a webhook should be served.
Object/DeepCopy
// +kubebuilder:object:generate:=‹bool› on type
overrides enabling or disabling deepcopy generation for this type
// +kubebuilder:object:root:=‹bool› on type
enables object interface implementation generation for this type
// +kubebuilder:object:generate:=‹bool› on package
enables or disables object interface & deepcopy implementation generation for this
package
// +k8s:deepcopy-gen:=‹raw› use kubebuilder:object:generate (on package)
enables or disables object interface & deepcopy implementation generation for this
package
// +k8s:deepcopy-gen:=‹raw› use kubebuilder:object:generate (on type)
overrides enabling or disabling deepcopy generation for this type
// +k8s:deepcopy-gen:interfaces:=‹string› use kubebuilder:object:root (on type)
enables object interface implementation generation for this type
RBAC
These markers cause an RBAC ClusterRole to be generated. This allows you to describe
the permissions that your controller requires alongside the code that makes use of those
permissions.
// +kubebuilder:rbac:groups=‹[]string›,namespace=‹string›,resourceNames=‹[]string›,resources=‹[]string›,urls=‹[]string›,verbs=‹[]string› on package
specifies an RBAC rule to allow access to some resources or non-resource URLs.
controller-gen CLI
Kubebuilder makes use of a tool called controller-gen for generating utility code and
Kubernetes YAML. This code and config generation is controlled by the presence of
special “marker comments” in Go code.
controller-gen is built out of different “generators” (which specify what to generate) and
“output rules” (which specify how and where to write the results).
Both are configured through command line options specified in marker format.
A typical invocation generates CRDs and RBAC, and specifically stores the generated CRD YAML in config/crd/bases . For the RBAC, it uses the default output rules ( config/rbac ). It considers every package in the current directory tree (as per the normal rules of the go ... wildcard).
Generators
Each different generator is configured through a CLI option. Multiple generators may be
used in a single invocation of controller-gen .
// +webhook:headerFile=‹string›,year=‹string› on package
generates (partial) {Mutating,Validating}WebhookConfiguration objects.
// +schemapatch:generateEmbeddedObjectMeta=‹bool›,manifests=‹string›,maxDescLen=‹int›
on package
patches existing CRDs with new schemata.
// +rbac:headerFile=‹string›,roleName=‹string›,year=‹string› on package
generates ClusterRole objects.
// +object:headerFile=‹string›,year=‹string› on package
generates code containing DeepCopy, DeepCopyInto, and DeepCopyObject method
implementations.
// +crd:allowDangerousTypes=‹bool›,crdVersions=‹[]string›,generateEmbeddedObjectMeta=‹bool›,headerFile=‹string›,ignoreUnexportedFields=‹bool›,maxDescLen=‹int›,year=‹string› on package
generates CustomResourceDefinition objects.
Output Rules
Output rules configure how a given generator outputs its results. There is always one
global “fallback” output rule (specified as output:<rule> ), plus per-generator overrides
(specified as output:<generator>:<rule> ).
Default Rules
When no fallback rule is specified manually, a set of default per-generator rules are
used which result in YAML going to config/<generator> , and code staying where it
belongs.
When a “fallback” rule is specified, that’ll be used instead of the default rules.
// +output:artifacts:code=‹string›,config=‹string› on package
outputs artifacts to different locations, depending on whether they're package-
associated or not.
// +output:dir:=‹string› on package
outputs each artifact to the given directory, regardless of if it's package-associated
or not.
// +output:none on package
skips outputting anything.
// +output:stdout on package
outputs everything to standard-out, with no separation.
Other Options
// +paths:=‹[]string› on package
represents paths and go-style path patterns to use as package roots.
The Kubebuilder completion script can be generated with the command kubebuilder
completion [bash|fish|powershell|zsh] . Note that sourcing the completion script in
your shell enables Kubebuilder autocompletion.
The completion Bash script depends on bash-completion, which means that you
have to install this software first (you can test if you have bash-completion already
installed). Also, ensure that your Bash version is 4.1+.
chsh -s /usr/local/bin/bash
# kubebuilder autocompletion
if [ -f /usr/local/share/bash-completion/bash_completion ]; then
. /usr/local/share/bash-completion/bash_completion
fi
. <(kubebuilder completion bash)
Zsh
Fish
Artifacts
Kubebuilder publishes test binaries and container images in addition to the main binary
releases.
Test Binaries
You can find test binary tarballs for all Kubernetes versions and host platforms at
https://go.kubebuilder.io/test-tools . You can find a test binary tarball for a
particular Kubernetes version and host platform at https://go.kubebuilder.io/test-
tools/${version}/${os}/${arch} .
Container Images
You can find all container image versions for a particular platform at
https://go.kubebuilder.io/images/${os}/${arch} or at
gcr.io/kubebuilder/thirdparty-${os}-${arch} . You can find the container image for a
particular Kubernetes version and host platform at
https://go.kubebuilder.io/images/${os}/${arch}/${version} or at
gcr.io/kubebuilder/thirdparty-${os}-${arch}:${version} .
Platforms Supported
Kubebuilder produces solutions that by default can work on multiple platforms or specific
ones, depending on how you build and configure your workloads. This guide aims to help
you properly configure your projects according to your needs.
Overview
To provide support on specific or multiple platforms, you must ensure that all images used in workloads are built to support the desired platforms. Note that these may not be the same as the platform where you develop your solutions and use KubeBuilder, but rather the platform(s) where your solution should run and be distributed. It is recommended to build solutions that work on multiple platforms so that your project works on any Kubernetes cluster regardless of the underlying operating system and architecture.
The images used in workloads such as your Pods/Deployments will need to provide support for these other platforms. You can inspect the platforms supported by an image's manifest list using the command docker manifest inspect, e.g.:
Kubernetes provides a mechanism called nodeAffinity which can be used to limit the
possible node targets where a pod can be scheduled. This is especially important to
ensure correct scheduling behavior in clusters with nodes that span across multiple
platforms (i.e. heterogeneous clusters).
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
- arm64
- ppc64le
- s390x
- key: kubernetes.io/os
operator: In
values:
- linux
Golang Example
Template: corev1.PodTemplateSpec{
...
Spec: corev1.PodSpec{
Affinity: &corev1.Affinity{
NodeAffinity: &corev1.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution:
&corev1.NodeSelector{
NodeSelectorTerms: []corev1.NodeSelectorTerm{
{
MatchExpressions:
[]corev1.NodeSelectorRequirement{
{
Key: "kubernetes.io/arch",
Operator: "In",
Values: []string{"amd64"},
},
{
Key: "kubernetes.io/os",
Operator: "In",
Values: []string{"linux"},
},
},
},
},
},
},
},
SecurityContext: &corev1.PodSecurityContext{
...
},
Containers: []corev1.Container{{
...
}},
},
Example(s)
You can look for some code examples by checking the code which is generated via
the Deploy Image plugin. (More info)
Example of Usage
You will probably want to automate the releases of your projects to ensure that the
images are always built for the same platforms. Note that Goreleaser also supports
docker buildx. See its documentation for more detail.
Also, you may want to configure GitHub Actions, Prow jobs, or any other solution
that you use to build images to provide multi-platform support. Note that you can
also use other options like docker manifest create to customize your solutions to
achieve the same goals with other tools.
By using Docker and the target provided by default, you should NOT change the Dockerfile to use any specific GOOS and GOARCH to build the manager binary. However, if you want to customize the default scaffold and create your own implementation, you might want to look at the Golang documentation to learn about the available options.
macOS
If you are running in a macOS environment, Docker will still treat the images as linux/$arch. Be aware that when, for example, running Kind on macOS, the nodes will end up labeled with kubernetes.io/os=linux
Kubebuilder has been building this image with support for multiple architectures by default (check it here). If you need to address an edge case where you want to produce a project that only supports a specific architecture platform, you can customize your configuration manifests to use the specific architecture types built for this image.
Installation
Installing the binaries is as simple as running make envtest . envtest will download the Kubernetes API server binaries to the bin/ folder in your project by default. make test is the one-stop shop for downloading the binaries, setting up the test environment, and running the tests.
make envtest
Installing the binaries using setup-envtest stores the binaries in OS-specific locations; you can read more about them here.
Once these binaries are installed, change the test make target to include a -i flag like below. -i will only check for locally installed binaries and not reach out to remote resources. You could also set the ENVTEST_INSTALLED_ONLY env variable.
Writing tests
Using envtest in integration tests follows the general flow of:
import sigs.k8s.io/controller-runtime/pkg/envtest
//start testEnv
cfg, err = testEnv.Start()
//stop testEnv
err = testEnv.Stop()
kubebuilder does the boilerplate setup and teardown of testEnv for you, in the ginkgo
test suite that it generates under the /controllers directory.
Examples
You can use the plugin DeployImage to check examples. This plugin allows users to
scaffold API/Controllers to deploy and manage an Operand (image) on the cluster
following the guidelines and best practices. It abstracts the complexities of achieving
this goal while allowing users to customize the generated code.
Therefore, you can check that a test using ENV TEST is generated for the controller, whose purpose is to ensure that the Deployment is created successfully. You can see an example of its code implementation under the testdata directory with the DeployImage samples here.
Configuring your test control plane
The make test command will install these binaries to the bin/ directory and use them when running tests that use envtest, i.e.:
./bin/k8s/
└── 1.25.0-darwin-amd64
├── etcd
├── kube-apiserver
└── kubectl
1 directory, 3 files
You can use environment variables and/or flags to specify the kubectl , api-server and
etcd setup within your integration tests.
Environment Variables
The test Makefile target will ensure that everything is properly set up when you use it. However, if you would like to run the tests without using the Makefile targets, for example via an IDE, then you can set the environment variables directly in the code of your suite_test.go :
var _ = BeforeSuite(func(done Done) {
Expect(os.Setenv("TEST_ASSET_KUBE_APISERVER", "../bin/k8s/1.25.0-darwin-
amd64/kube-apiserver")).To(Succeed())
Expect(os.Setenv("TEST_ASSET_ETCD", "../bin/k8s/1.25.0-darwin-
amd64/etcd")).To(Succeed())
Expect(os.Setenv("TEST_ASSET_KUBECTL", "../bin/k8s/1.25.0-darwin-
amd64/kubectl")).To(Succeed())
// OR
Expect(os.Setenv("KUBEBUILDER_ASSETS", "../bin/k8s/1.25.0-darwin-
amd64")).To(Succeed())
logf.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))
testenv = &envtest.Environment{}
_, err := testenv.Start()
Expect(err).NotTo(HaveOccurred())
close(done)
}, 60)
var _ = AfterSuite(func() {
Expect(testenv.Stop()).To(Succeed())
Expect(os.Unsetenv("TEST_ASSET_KUBE_APISERVER")).To(Succeed())
Expect(os.Unsetenv("TEST_ASSET_ETCD")).To(Succeed())
Expect(os.Unsetenv("TEST_ASSET_KUBECTL")).To(Succeed())
})
You can look at the controller-runtime docs to learn more about its configuration options; see here. On top of that, if you are looking to use ENV TEST to test your webhooks, then you might want to look at its install options.
Flags
Here’s an example of modifying the flags with which to start the API server in your
integration tests, compared to the default values in
envtest.DefaultKubeAPIServerFlags :
customApiServerFlags := []string{
	"--secure-port=6884",
	"--admission-control=MutatingAdmissionWebhook",
}

// Append the custom flags to the defaults (apiServerFlags was otherwise undefined).
apiServerFlags := append(append([]string{}, envtest.DefaultKubeAPIServerFlags...), customApiServerFlags...)

testEnv = &envtest.Environment{
	CRDDirectoryPaths:  []string{filepath.Join("..", "config", "crd", "bases")},
	KubeAPIServerFlags: apiServerFlags,
}
Testing considerations
Unless you’re using an existing cluster, keep in mind that no built-in controllers are
running in the test context. In some ways, the test control plane will behave differently
from “real” clusters, and that might have an impact on how you write tests. One common
example is garbage collection; because there are no controllers monitoring built-in
resources, objects do not get deleted, even if an OwnerReference is set up.
To test that the deletion lifecycle works, test the ownership instead of asserting on
existence. For example:
expectedOwnerReference := v1.OwnerReference{
Kind: "MyCoolCustomResource",
APIVersion: "my.api.example.com/v1beta1",
UID: "d9607e19-f88f-11e6-a518-42010a800195",
Name: "userSpecifiedResourceName",
}
Expect(deployment.ObjectMeta.OwnerReferences).To(ContainElement(expectedOwnerReference))
To overcome this limitation you can create a new namespace for each test. Even so, when
one test completes (e.g. in “namespace-1”) and another test starts (e.g. in “namespace-2”),
the controller will still be reconciling any active objects from “namespace-1”. This can be
avoided by ensuring that all tests clean up after themselves as part of the test teardown.
If teardown of a namespace is difficult, it may be possible to wire the reconciler in such a
way that it ignores reconcile requests that come from namespaces other than the one
being tested:
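A sketch of that wiring, assuming you add a Namespace field to your reconciler (the MyCoolReconciler name is illustrative):

type MyCoolReconciler struct {
	client.Client
	Scheme *runtime.Scheme
	// Namespace is set by the test suite; requests from other namespaces are skipped.
	Namespace string
}

func (r *MyCoolReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	if r.Namespace != "" && req.Namespace != r.Namespace {
		// Ignore objects belonging to a namespace other than the one under test.
		return ctrl.Result{}, nil
	}
	// ... normal reconciliation logic ...
	return ctrl.Result{}, nil
}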
Whenever your tests create a new namespace, they can modify the value of reconciler.Namespace, and the reconciler will effectively ignore objects from the previous namespace. For further information, see the issue raised in controller-runtime (controller-runtime/issues/880) to add this support.
Therefore, to test a reconciliation in common cases you do not need to care about these options. However, if you would like to run tests with Prometheus and Cert-manager installed, you can add the required steps to install them before running the tests. The following is an example.
// Add the operations to install the Prometheus operator and the cert-
manager
// before the tests.
BeforeEach(func() {
By("installing prometheus operator")
Expect(utils.InstallPrometheusOperator()).To(Succeed())
Check the following example of how you can implement the above operations:
const (
	prometheusOperatorVersion = "0.51"
	prometheusOperatorURL     = "https://raw.githubusercontent.com/prometheus-operator/" +
		"prometheus-operator/release-%s/bundle.yaml"

	certmanagerVersion = "v1.5.3"
	certmanagerURLTmpl = "https://github.com/jetstack/cert-manager/releases/download/%s/cert-manager.yaml"
)
However, tests for the metrics and cert-manager might fit better as e2e tests rather than in the tests done using ENV TEST for the controllers. You might want to look at the sample implemented in the Operator-SDK repository to see how you can write e2e tests to ensure the basic workflows of your project. Also, if you want to run the tests against a cluster where you already have some configuration in place, you can use the option to test using an existing cluster:
testEnv = &envtest.Environment{
UseExistingCluster: true,
}
Metrics
You will need to grant permissions to your Prometheus server so that it can scrape the
protected metrics. To achieve that, you can create a clusterRoleBinding to bind the
clusterRole to the service account that your Prometheus server uses. If you are using
kube-prometheus, this cluster binding already exists.
You can either run the following command, or apply the example yaml file provided below, to create the clusterRoleBinding .
If you are using kubebuilder, <project-prefix> is the namePrefix field in config/default/kustomization.yaml .
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: prometheus-k8s-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus-k8s-role
subjects:
- kind: ServiceAccount
name: <prometheus-service-account>
namespace: <prometheus-service-account-namespace>
Note that, when you install your project in the cluster, it will create the ServiceMonitor
to export the metrics. To check the ServiceMonitor, run kubectl get ServiceMonitor -n
<project>-system . See an example:
Alternatively, you can give the Prometheus Operator permissions to monitor other
namespaces using RBAC. See the Prometheus Operator Enable RBAC rules for
Prometheus pods documentation to know how to enable the permissions on the
namespace where the ServiceMonitor and manager exist.
Also, notice that the metrics are exported by default through port 8443 . In this way, you
are able to check the Prometheus metrics in its dashboard. To verify it, search for the
metrics exported from the namespace where the project is running {namespace="
<project>-system"} . See an example:
One way to achieve this is to declare your collectors as global variables and then register
them using init() in the controller’s package.
For example:
import (
"github.com/prometheus/client_golang/prometheus"
"sigs.k8s.io/controller-runtime/pkg/metrics"
)
var (
goobers = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "goobers_total",
Help: "Number of goobers proccessed",
},
)
gooberFailures = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "goober_failures_total",
Help: "Number of failed goobers",
},
)
)
func init() {
// Register custom metrics with the global prometheus registry
metrics.Registry.MustRegister(goobers, gooberFailures)
}
You may then record metrics to those collectors from any part of your reconcile loop.
These metrics can be evaluated from anywhere in the operator code.
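A minimal sketch of recording them inside the reconcile loop (doSomethingWithGoobers is a placeholder for your own business logic):

if err := doSomethingWithGoobers(); err != nil {
	// Count the failure and surface the error as usual.
	gooberFailures.Inc()
	return ctrl.Result{}, err
}
goobers.Inc()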
In order to publish metrics and view them on the Prometheus UI, the Prometheus
instance would have to be configured to select the Service Monitor instance based
on its labels.
Those metrics will be available for prometheus or other openmetrics systems to scrape.
The following are the metrics exported and provided by controller-runtime by default:
By default, projects are scaffolded with a Makefile . You can customize and update this file as you please. Here, you will find some helpers that can be useful.
# Run with Delve for development purposes against the configured Kubernetes cluster in ~/.kube/config
# Delve is a debugger for the Go programming language. More info: https://github.com/go-delve/delve
run-delve: generate fmt vet manifests
	go build -gcflags "all=-trimpath=$(shell go env GOPATH)" -o bin/manager main.go
	dlv --listen=:2345 --headless=true --api-version=2 --accept-multiclient exec ./bin/manager
manifests: controller-gen
	$(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases
controller-gen lets you specify what CRD API version to generate (either “v1”, the
default, or “v1beta1”). You can direct it to generate a specific version by adding
crd:crdVersions={<version>} to your CRD_OPTIONS , found at the top of your Makefile:
CRD_OPTIONS ?= "crd:crdVersions={v1beta1},preserveUnknownFields=false"
manifests: controller-gen
	$(CONTROLLER_GEN) rbac:roleName=manager-role $(CRD_OPTIONS) webhook paths="./..." output:crd:artifacts:config=config/crd/bases
dry-run: manifests
cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
mkdir -p dry-run
$(KUSTOMIZE) build config/default > dry-run/manifests.yaml
Project Config
Overview
The Project Config represents the configuration of a KubeBuilder project. All projects that
are scaffolded with the CLI (KB version 3.0 and higher) will generate the PROJECT file in
the projects’ root directory. Therefore, it will store all plugins and input data used to
generate the project and APIs to better enable plugins to make useful decisions when
scaffolding.
Example
Following is an example of a PROJECT config file which is the result of a project generated
with two APIs using the Deploy Image Plugin.
# Code generated by tool. DO NOT EDIT.
# This file is used to track the info used to scaffold your project
# and allow the plugins properly work.
# More info: https://book.kubebuilder.io/reference/project-config.html
domain: testproject.org
layout:
- go.kubebuilder.io/v4
plugins:
deploy-image.go.kubebuilder.io/v1-alpha:
resources:
- domain: testproject.org
group: example.com
kind: Memcached
options:
containerCommand: memcached,-m=64,-o,modern,-v
containerPort: "11211"
image: memcached:1.4.36-alpine
runAsUser: "1001"
version: v1alpha1
- domain: testproject.org
group: example.com
kind: Busybox
options:
image: busybox:1.28
version: v1alpha1
projectName: project-v4-with-deploy-image
repo: sigs.k8s.io/kubebuilder/testdata/project-v4-with-deploy-image
resources:
- api:
crdVersion: v1
namespaced: true
controller: true
domain: testproject.org
group: example.com
kind: Memcached
path: sigs.k8s.io/kubebuilder/testdata/project-v4-with-deploy-
image/api/v1alpha1
version: v1alpha1
webhooks:
validation: true
webhookVersion: v1
- api:
crdVersion: v1
namespaced: true
controller: true
domain: testproject.org
group: example.com
kind: Busybox
path: sigs.k8s.io/kubebuilder/testdata/project-v4-with-deploy-
image/api/v1alpha1
version: v1alpha1
version: "3"
Why do we need to store the plugins and data used?
The following are some examples of motivations for tracking the input used:
Check whether a plugin can or cannot be scaffolded on top of an existing plugin (i.e. plugin compatibility while chaining multiple of them together).
Determine which operations can or cannot be done, such as verifying whether the layout allows API(s) for different groups to be scaffolded for the current configuration.
Verify which data can or cannot be used in CLI operations, for example to ensure that webhooks can only be created for pre-existing API(s).
Note that KubeBuilder is not only a CLI tool but can also be used as a library to allow
users to create their plugins/tools, provide helpers and customizations on top of their
existing projects - an example of which is Operator-SDK. SDK leverages KubeBuilder to
create plugins to allow users to work with other languages and provide helpers for their
users to integrate their projects with, for example, the Operator Framework
solutions/OLM. You can check the plugin’s documentation to know more about creating
custom plugins.
Additionally, another motivation for the PROJECT file is to help us create a feature that allows users to easily upgrade their projects by providing helpers that automatically re-scaffold the project. By having all the required metadata regarding the APIs, their configurations, and versions in the PROJECT file, it can be used to automate the process of re-scaffolding while migrating between plugin versions. (More info)
Versioning
The Project config is versioned according to its layout. For further information see
Versioning.
Layout Definition
The PROJECT version 3 layout looks like:
domain: testproject.org
layout:
- go.kubebuilder.io/v3
plugins:
declarative.go.kubebuilder.io/v1:
resources:
- domain: testproject.org
group: crew
kind: FirstMate
version: v1
projectName: example
repo: sigs.k8s.io/kubebuilder/example
resources:
- api:
crdVersion: v1
namespaced: true
controller: true
domain: testproject.org
group: crew
kind: Captain
path: sigs.k8s.io/kubebuilder/example/api/v1
version: v1
webhooks:
defaulting: true
validation: true
webhookVersion: v1
Field descriptions:
layout: Defines the global plugins, e.g. a project init with --plugins="go/v3,declarative" means that any sub-command used will always call its implementation for both plugins in a chain.
domain: Stores the domain of the project. This information can be provided by the user when the project is generated with the init sub-command and the domain flag.
Plugins
Since Kubebuilder version 3.0.0, preliminary support for plugins has been available. You can extend the CLI and scaffolds as well. When users run the CLI commands to perform the scaffolds, the plugins are used:
This section details how to extend Kubebuilder and create your plugins following the
same layout structures.
Note
You can check the existing design proposal docs at Extensible CLI and Scaffolding
Plugins: phase 1 and Extensible CLI and Scaffolding Plugins: phase 1.5 to know more
on what is provided by Kubebuilder CLI and API currently.
To know more about Kubebuilder’s future vision of the Plugins architecture, see the
section Future vision for Kubebuilder Plugins.
This section describes the plugins supported and shipped with the Kubebuilder project.
Note that you can use the kustomize plugin, which is responsible for scaffolding the kustomize files under config/ , together with the base language plugins, which are responsible for scaffolding the Golang files, to create your own plugins to work with other languages (as Operator-SDK does to allow users to work with Ansible/Helm) or to add helpers on top (as Operator-SDK does to add its features to integrate projects with OLM).
base.go.kubebuilder.io/v3 (base/v3): Responsible for scaffolding all files that specifically require Golang. This plugin is used in composition to create the go/v3 plugin.
base.go.kubebuilder.io/v4 (base/v4): Responsible for scaffolding all files that specifically require Golang. This plugin is used in composition to create the go/v4 plugin.
Plugins Versioning
ALPHA plugins can introduce breaking changes. For further info see Plugins
Versioning.
! Deprecated
The go/v2 plugin cannot scaffold projects in which CRDs and/or Webhooks have a
v1 API version. The go/v2 plugin scaffolds with the v1beta1 API version which was
deprecated in Kubernetes 1.16 and removed in 1.22 . This plugin was kept to
ensure backwards compatibility with projects that were scaffolded with the old
"Kubebuilder 2.x" layout and does not work with the new plugin ecosystem that
was introduced with Kubebuilder 3.0.0 More info
Since 28 Apr 2021 , the default layout produced by Kubebuilder changed and is
done via the go/v3 . We encourage you migrate your project to the latest version if
your project was built with a Kubebuilder versions < 3.0.0 .
The go/v2 plugin scaffolds Golang projects with controllers while keeping backwards
compatibility with the default scaffold produced by the Kubebuilder CLI 2.x.z releases.
You can check samples using this plugin by looking at the project-v2-<options>
directories under the testdata projects on the root directory of the Kubebuilder
project.
Be aware that this plugin version does not provide a scaffold compatible with the
latest versions of the dependencies used in order to keep its backwards
compatibility.
How to use it ?
To initialize a Golang project using the legacy layout and with this plugin run, e.g.:
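For example (the domain and repo values below are illustrative):

kubebuilder init --plugins=go/v2 --domain example.org --repo example.org/guestbook-operator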
Note
By creating a project with this plugin, the PROJECT file scaffold will be using the
previous schema (project version 2), so that Kubebuilder CLI knows what plugin
version was used and will call its subcommands such as create api and create
webhook .
Note that further Golang plugins versions use the new Project file schema, which
tracks the information about what plugins and versions have been used so far.
Further resources
Check the code implementation of the go/v2 plugin.
[Deprecated] go/v3 (go.kubebuilder.io/v3)
! Deprecated
The go/v3 plugin cannot fully support Kubernetes 1.25+ or work with Kustomize versions
> v3.
Kubebuilder tool will scaffold the go/v3 plugin by default. This plugin is a composition of
the plugins kustomize.common.kubebuilder.io/v1 and base.go.kubebuilder.io/v3 .
By using it, you can scaffold the default project, which is a helper to construct sets of
controllers.
It basically scaffolds all the boilerplate code required to create and design controllers.
Note that by following the quickstart you will be using this plugin.
Examples
Samples are provided under the testdata directory of the Kubebuilder project. You
can check samples using this plugin by looking at the project-v3-<options>
projects under the testdata directory on the root directory of the Kubebuilder
project.
When to use it ?
If you are looking to scaffold Golang projects to develop projects using controllers
How to use it ?
As go/v3 is the default plugin there is no need to explicitly mention to Kubebuilder to
use this plugin.
To create a new project with the go/v3 plugin the following command can be used:
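For example (the domain and repo values below are illustrative):

kubebuilder init --domain example.org --repo example.org/guestbook-operator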
Note
Also, if needed, you can explicitly specify the plugin via the option --plugins=go/v3 .
Further resources
To check how the plugins are composed, look at this definition in the main.go.
Check the code implementation of the base Golang plugin
base.go.kubebuilder.io/v3 .
Check the code implementation of the Kustomize/v1 plugin.
Check controller-runtime to know more about controllers.
Kubebuilder will scaffold using the go/v4 plugin only if specified when initializing the
project. This plugin is a composition of the plugins
kustomize.common.kubebuilder.io/v2 and base.go.kubebuilder.io/v4 . It scaffolds a
project template that helps in constructing sets of controllers.
It scaffolds boilerplate code to create and design controllers. Note that by following the
quickstart you will be using this plugin.
Examples
You can check samples using this plugin by looking at the project-v4-<options>
projects under the testdata directory on the root directory of the Kubebuilder
project.
When to use it ?
If you are looking to scaffold Golang projects to develop projects using controllers
If you have a project created with go/v3 (the default layout since 28 Apr 2021 and
Kubebuilder release version 3.0.0 ) and want to migrate it to go/v4, see the migration
guide Migration from go/v3 to go/v4
How to use it ?
To create a new project with the go/v4 plugin the following command can be used:
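For example (the domain and repo values below are illustrative):

kubebuilder init --domain example.org --repo example.org/guestbook-operator --plugins=go/v4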
Further resources
To see the composition of plugins, you can check the source code for the
Kubebuilder main.go.
Check the code implementation of the base Golang plugin
base.go.kubebuilder.io/v4 .
Check the code implementation of the Kustomize/v2 plugin.
Check controller-runtime to know more about controllers.
Declarative Plugin
The declarative plugin allows you to create controllers using the kubebuilder-declarative-
pattern. By using the declarative plugin, you can make the required changes on top of
what is scaffolded by default when you create a Go project with Kubebuilder and the
Golang plugins (i.e. go/v2, go/v3).
Examples
You can check samples using this plugin by looking at the “addon” samples inside the
testdata directory of the Kubebuilder project.
When to use it ?
If you are looking to scaffold one or more controllers following this pattern (see an
example of the reconcile method implemented here)
If you want to have manifests shipped inside your Manager container. The
declarative plugin works with channels, which allow you to push manifests. More
info
How to use it ?
The declarative plugin must be used together with one of the available Golang plugins.
If you want every API and its respective controller generated in your project to follow
this pattern, then:
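For example (a rough sketch; the domain and repo values are illustrative, and the plugin keys assume the go/v3 and declarative/v1 plugins described in this section):

kubebuilder init --plugins=go/v3,declarative/v1 --domain example.org --repo example.org/guestbook-operator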
If you want to adopt this pattern only for specific API(s) and their respective controller(s)
(rather than for every API/controller scaffolded with the Kubebuilder CLI), then:
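For example (the group, version, and kind values are illustrative and mirror the PROJECT layout example shown earlier):

kubebuilder create api --plugins=go/v3,declarative/v1 --group crew --version v1 --kind FirstMate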
Subcommands
The declarative plugin implements the following subcommands:
Affected files
The following scaffolds will be created or updated by this plugin:
controllers/*_controller.go
api/*_types.go
channels/packages/<packagename>/<version>/manifest.yaml
channels/stable
Dockerfile
Further resources
Read more about the declarative pattern
Watch the KubeCon 2018 Video Managing Addons with Operators
Check the plugin implementation
Grafana Plugin (grafana/v1-alpha)
The Grafana plugin is an optional plugin that can be used to scaffold Grafana Dashboards
to allow you to check out the default metrics which are exported by projects using
controller-runtime.
Examples
When to use it ?
If you are looking to observe the metrics exported by controller metrics and
collected by Prometheus via Grafana.
How to use it ?
Prerequisites:
Your project must be using controller-runtime to expose the metrics via the
controller default metrics and they need to be collected by Prometheus.
Access to Prometheus.
Prometheus should have an endpoint exposed. (For prometheus-operator ,
this is similar to: http://prometheus-k8s.monitoring.svc:9090 )
The endpoint should be ready to become (or already be) the data source of your
Grafana. See Add a data source
Access to Grafana. Make sure you have:
Dashboard edit permission
Prometheus Data source
Check the metrics to know how to enable the metrics for your projects scaffolded with
Kubebuilder.
Note that in config/prometheus you will find the ServiceMonitor that enables the
metrics on the default endpoint /metrics .
Basic Usage
The Grafana plugin is attached to the init subcommand and the edit subcommand:
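For example (assuming the plugin key grafana.kubebuilder.io/v1-alpha, which may also be referenced by its short form grafana/v1-alpha):

kubebuilder init --plugins grafana.kubebuilder.io/v1-alpha
kubebuilder edit --plugins grafana.kubebuilder.io/v1-alpha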
The plugin will create a new directory and scaffold the JSON files under it (i.e.
grafana/controller-runtime-metrics.json ).
Show case:
Grafana Dashboard
Metrics:
controller_runtime_reconcile_total
controller_runtime_reconcile_errors_total
Query:
sum(rate(controller_runtime_reconcile_total{job="$job"}[5m])) by (instance, pod)
sum(rate(controller_runtime_reconcile_errors_total{job="$job"}[5m])) by (instance, pod)
Description:
Per-second rate of total reconciliation as measured over the last 5 minutes
Per-second rate of reconciliation errors as measured over the last 5 minutes
Sample:
Metrics:
process_cpu_seconds_total
process_resident_memory_bytes
Query:
rate(process_cpu_seconds_total{job="$job", namespace="$namespace", pod="$pod"}[5m]) * 100
process_resident_memory_bytes{job="$job", namespace="$namespace", pod="$pod"}
Description:
Per-second rate of CPU usage as measured over the last 5 minutes
Allocated Memory for the running controller
Sample:
Metrics
workqueue_queue_duration_seconds_bucket
Query:
histogram_quantile(0.50, sum(rate(workqueue_queue_duration_seconds_bucket{job="$job", namespace="$namespace"}[5m])) by (instance, name, le))
Description
Seconds an item stays in workqueue before being requested.
Sample:
Metrics
workqueue_work_duration_seconds_bucket
Query:
histogram_quantile(0.50, sum(rate(workqueue_work_duration_seconds_bucket{job="$job", namespace="$namespace"}[5m])) by (instance, name, le))
Description
Seconds of processing an item from workqueue takes.
Sample:
Metrics
workqueue_adds_total
Query:
sum(rate(workqueue_adds_total{job="$job", namespace="$namespace"}[5m])) by (instance, name)
Description
Per-second rate of items added to work queue
Sample:
Metrics
workqueue_retries_total
Query:
sum(rate(workqueue_retries_total{job="$job", namespace="$namespace"}[5m])) by (instance, name)
Description
Per-second rate of retries handled by workqueue
Sample:
---
customMetrics:
#  - metric: # Raw custom metric (required)
#    type:   # Metric type: counter/gauge/histogram (required)
#    expr:   # Prom_ql for the metric (optional)
#    unit:   # Unit of measurement, examples: s,none,bytes,percent,etc. (optional)
Add Custom Metrics to Config
You can enter multiple custom metrics in the file. For each element, you need to specify
the metric and its type . The Grafana plugin can automatically generate expr for
visualization. Alternatively, you can provide expr and the plugin will use the specified
one directly.
---
customMetrics:
  - metric: memcached_operator_reconcile_total # Raw custom metric (required)
    type: counter # Metric type: counter/gauge/histogram (required)
    unit: none
  - metric: memcached_operator_reconcile_time_seconds_bucket
    type: histogram
Scaffold Manifest
Show case:
Affected files
The following scaffolds will be created or updated by this plugin:
grafana/*.json
Further resources
Check out the video showing how it works
Check out the video showing how the custom metrics feature works
Refer to a sample of the ServiceMonitor provided by the kustomize plugin
Check the plugin implementation
Grafana Docs of importing JSON file
The usage of servicemonitor by Prometheus Operator
The deploy-image plugin allows users to create controllers and custom resources which
will deploy and manage an image on the cluster following the guidelines and best
practices. It abstracts the complexities to achieve this goal while allowing users to
improve and customize their projects.
Examples
When to use it ?
This plugin is helpful for those who are getting started.
If you are looking to Deploy and Manage an image (Operand) using the Operator
pattern, the plugin will create an API/controller to be reconciled to achieve this goal
If you are looking to speed up
How to use it ?
After you create a new project with kubebuilder init you can create APIs using this
plugin. Ensure that you have followed the quick start before trying to use it.
Then, by using this plugin you can create APIs informing the image (Operand) that you
would like to deploy on the cluster. Note that you can optionally specify the command
that should be used to initialize this container via the flag --image-container-command
and the port with the --image-container-port flag. You can also specify the RunAsUser
value for the Security Context of the container via the flag --run-as-user , e.g.:
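For example (a rough sketch; the group, version, kind, image, and flag values below are illustrative):

kubebuilder create api --group example.com --version v1alpha1 --kind Memcached --image=memcached:1.4.36-alpine --image-container-command="memcached,-m=64,-o,modern,-v" --image-container-port="11211" --run-as-user="1001" --plugins="deploy-image/v1-alpha"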
The make run will execute the main.go outside of the cluster to let you test the project
running it locally. Note that by using this plugin the Operand image informed will be
stored via an environment variable in the config/manager/manager.yaml manifest.
Therefore, before running make run , you need to export any environment variables that
you might have. Example:
export MEMCACHED_IMAGE="memcached:1.4.36-alpine"
Subcommands
The deploy-image plugin implements the following subcommands:
create api ( $ kubebuilder create api [OPTIONS] )
Affected files
With the create api command of this plugin, in addition to the existing scaffolding, the
following files are affected:
Further Resources:
Check out the video showing how it works
See the design proposal documentation
[Deprecated] Kustomize (kustomize/v1)
! Deprecated
If you are using Golang projects scaffolded with go/v3 which uses this version
please, check the Migration guide to learn how to upgrade your projects.
The kustomize plugin allows you to scaffold all kustomize manifests used to work with
the language plugins such as go/v2 and go/v3 . By using the kustomize plugin, you can
create your own language plugins and ensure that you will have the same configurations
and features provided by it.
Supportability
linux/amd64
linux/arm64
darwin/amd64
You might want to consider using kustomize/v2 if you are looking to scaffold projects
in other architecture environments. (i.e. if you are looking to scaffold projects with
Apple Silicon/M1 ( darwin/arm64 ) this plugin will not work, more info: kubernetes-
sigs/kustomize#4612).
Note that projects such as Operator-sdk consume the Kubebuilder project as a lib and
provide options to work with other languages like Ansible and Helm. The kustomize
plugin allows them to easily keep a maintained configuration and ensure that all
languages have the same configuration. It is also helpful if you are looking to provide nice
plugins which will perform changes on top of what is scaffolded by default. With this
approach, we do not need to manually keep this configuration updated in every possible
language plugin that uses it, and we are also able to create “helper” plugins
which can work with many projects and languages.
Examples
You can check the kustomize content by looking at the config/ directory. Samples
are provided under the testdata directory of the Kubebuilder project.
When to use it ?
If you are looking to scaffold the kustomize configuration manifests for your own
language plugin
How to use it ?
If you want your language plugin to use kustomize, use the Bundle Plugin to define your
language plugin as a composition: your plugin is responsible for scaffolding everything
that is language-specific, and kustomize takes care of the configuration,
see:
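A minimal sketch of such a composition (mylanguagev1 and language.DefaultNameQualifier are placeholders for your own language plugin package and its qualified name; kustomizecommonv1 is the import alias for the kustomize/v1 plugin package):

mylanguagev1Bundle, _ := plugin.NewBundle(plugin.WithName(language.DefaultNameQualifier),
	plugin.WithVersion(plugin.Version{Number: 1}),
	// compose the common kustomize/v1 configuration scaffold with your
	// language-specific scaffold
	plugin.WithPlugins(kustomizecommonv1.Plugin{}, mylanguagev1.Plugin{}),
)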
Its implementation for the subcommand create api will scaffold the kustomize
manifests which are specific for each API, see here. The same applies to its
implementation for create webhook.
Affected files
The following scaffolds will be created or updated by this plugin:
config/*
Further resources
Check the kustomize plugin implementation
Check the kustomize documentation
Check the kustomize repository
The kustomize plugin allows you to scaffold all kustomize manifests used to work with
the language base plugin base.go.kubebuilder.io/v4 . This plugin is used to generate
the manifests under the config/ directory for the projects built with the go/v4 plugin
(default scaffold).
Note that projects such as Operator-sdk consume the Kubebuilder project as a lib and
provide options to work with other languages like Ansible and Helm. The kustomize
plugin allows them to easily keep a maintained configuration and ensure that all
languages have the same configuration. It is also helpful if you are looking to provide nice
plugins which will perform changes on top of what is scaffolded by default. With this
approach, we do not need to manually keep this configuration updated in every possible
language plugin that uses it, and we are also able to create “helper” plugins
which can work with many projects and languages.
Examples
You can check the kustomize content by looking at the config/ directory provided in
the sample project-v4-* projects under the testdata directory of the Kubebuilder project.
When to use it
If you are looking to scaffold the kustomize configuration manifests for your own
language plugin
If you are looking for support on Apple Silicon ( darwin/arm64 ). (Before kustomize
4.x the binary for this platform was not provided)
If you are looking to start trying out the new syntax and features provided by
kustomize v4 (More info) and v5 (More info)
If you are NOT looking to build projects which will be used on Kubernetes cluster
versions < 1.22 (The new features provided by kustomize v4 are not officially supported
and might not work with kubectl < 1.22 )
If you are NOT looking to rely on special URLs in resource fields
If you want to use replacements since vars are deprecated and might be removed
soon
How to use it
If you want your language plugin to use kustomize, use the Bundle Plugin to define your
language plugin as a composition: your plugin is responsible for scaffolding everything
that is language-specific, and kustomize takes care of the configuration,
see:
import (
	...
	kustomizecommonv2alpha "sigs.k8s.io/kubebuilder/v3/pkg/plugins/common/kustomize/v2"
	golangv4 "sigs.k8s.io/kubebuilder/v3/pkg/plugins/golang/v4"
	...
)
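A minimal sketch of this composition (it assumes the plugin and golang packages from sigs.k8s.io/kubebuilder/v3/pkg/plugin and .../pkg/plugins/golang are imported where the "..." appear above; the bundle variable name is illustrative):

gov4Bundle, _ := plugin.NewBundle(plugin.WithName(golang.DefaultNameQualifier),
	plugin.WithVersion(plugin.Version{Number: 4}),
	// compose the kustomize/v2 configuration scaffold with the Golang base scaffold
	plugin.WithPlugins(kustomizecommonv2alpha.Plugin{}, golangv4.Plugin{}),
)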
# Provides the same scaffold as the go/v4 plugin, which is a composition of
# kustomize/v2 and base.go.kubebuilder.io/v4
kubebuilder init --plugins=kustomize/v2,base.go.kubebuilder.io/v4 --domain example.org --repo example.org/guestbook-operator
Subcommands
The kustomize plugin implements the following subcommands:
Its implementation for the subcommand create api will scaffold the kustomize
manifests which are specific for each API, see here. The same applies to its
implementation for create webhook.
Affected files
The following scaffolds will be created or updated by this plugin:
config/*
Further resources
Check the kustomize plugin implementation
Check the kustomize documentation
Check the kustomize repository
Check the release notes for Kustomize v5.0.0
Check the release notes for Kustomize v4.0.0
Also, you can compare the config/ directory between the samples project-v3
and project-v4 to check the difference in the syntax of the manifests provided by
default
Overview
You can extend Kubebuilder to allow your project to have the same CLI features and
provide the plugins scaffolds.
CLI system
Plugins are run using a CLI object, which maps a plugin type to a subcommand and calls
that plugin’s methods. For example, writing a program that injects an Init plugin into a
CLI then calling CLI.Run() will call the plugin’s SubcommandMetadata,
UpdatesMetadata and Run methods with information a user has passed to the program
in kubebuilder init . For example:
package cli

import (
	log "github.com/sirupsen/logrus"
	"github.com/spf13/cobra"

	"sigs.k8s.io/kubebuilder/v3/pkg/cli"
	cfgv3 "sigs.k8s.io/kubebuilder/v3/pkg/config/v3"
	"sigs.k8s.io/kubebuilder/v3/pkg/plugin"
	kustomizecommonv1 "sigs.k8s.io/kubebuilder/v3/pkg/plugins/common/kustomize/v1"
	"sigs.k8s.io/kubebuilder/v3/pkg/plugins/golang"
	declarativev1 "sigs.k8s.io/kubebuilder/v3/pkg/plugins/golang/declarative/v1"
	golangv3 "sigs.k8s.io/kubebuilder/v3/pkg/plugins/golang/v3"
)

var (
	// The following is an example of the commands
	// that you might have in your own binary
	commands = []*cobra.Command{
		myExampleCommand.NewCmd(),
	}
	alphaCommands = []*cobra.Command{
		myExampleAlphaCommand.NewCmd(),
	}
)

// GetPluginsCLI returns the CLI configured with the plugins and commands above
func GetPluginsCLI() *cli.CLI {
	// Bundle with the same composition as go/v3 (kustomize/v1 plus the Golang base scaffold)
	gov3Bundle, _ := plugin.NewBundle(plugin.WithName(golang.DefaultNameQualifier),
		plugin.WithVersion(plugin.Version{Number: 3}),
		plugin.WithPlugins(kustomizecommonv1.Plugin{}, golangv3.Plugin{}),
	)

	c, err := cli.New(
		// Add the name of your CLI binary
		cli.WithCommandName("example-cli"),
		// Register the plugins that can be used to scaffold via your CLI tool
		cli.WithPlugins(
			gov3Bundle,
			&declarativev1.Plugin{},
		),
		// Default plugin and project configuration version used when the user
		// does not pass --plugins / --project-version
		cli.WithDefaultPlugins(cfgv3.Version, gov3Bundle),
		cli.WithDefaultProjectVersion(cfgv3.Version),
		// Add your own commands and alpha commands to the CLI
		cli.WithExtraCommands(commands...),
		cli.WithExtraAlphaCommands(alphaCommands...),
		cli.WithCompletion(),
	)
	if err != nil {
		log.Fatal(err)
	}
	return c
}
This program can then be built and run in the following ways:
Default behavior:
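For example, assuming the binary built from the snippet above is named example-cli (the domain and repo values are illustrative):

# scaffold a project with the default plugin bundle (the go/v3 composition registered above)
example-cli init --domain example.org --repo example.org/guestbook-operator

# run the declarative plugin in the chain as well
example-cli init --plugins=go/v3,declarative/v1 --domain example.org --repo example.org/guestbook-operator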
The CLI is responsible for managing the PROJECT file config, representing the
configuration of the projects that are scaffolded by the CLI tool.
Plugins
Kubebuilder provides scaffolding options via plugins. Plugins are responsible for
implementing the code that will be executed when the sub-commands are called. You can
create a new plugin by implementing the Plugin interface.
Kubebuilder CLI plugins wrap scaffolding and CLI features in conveniently packaged Go
types that are executed by the kubebuilder binary, or any binary which imports them.
More specifically, a plugin configures the execution of one of the following CLI
commands:
Plugins are identified by a key of the form <name>/<version> . There are two ways to
specify a plugin to run:
Plugin naming
Plugin names must be DNS1123 labels and should be fully qualified, i.e. they have a suffix
like .example.com . For example, the base Go scaffold used with kubebuilder
commands has name go.kubebuilder.io . Qualified names prevent conflicts between
plugin names; go.kubebuilder.io and go.example.com can both scaffold Go code
and can both be specified by a user.
Plugin versioning
alpha : should be used for plugins that are frequently changed and may break
between uses.
beta : should be used for plugins that are only changed in minor ways, ex. bug
fixes.
Breaking changes
Any change that will break a project scaffolded by the previous plugin version is a
breaking change.
Plugins Deprecation
Bundle Plugins
Bundle Plugins allow you to create a plugin that is a composition of many plugins:
Note that this means that when a user of your CLI calls this plugin, the sub-
commands will be executed in the order in which the plugins were added to the chain:
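A minimal sketch (pluginA, pluginB, and pluginC are hypothetical plugin packages, and the bundle name is illustrative):

myPluginBundle, _ := plugin.NewBundle(plugin.WithName("myplugin.example.com"),
	plugin.WithVersion(plugin.Version{Number: 1}),
	// for every sub-command, pluginA runs first, then pluginB, then pluginC
	plugin.WithPlugins(pluginA.Plugin{}, pluginB.Plugin{}, pluginC.Plugin{}),
)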
Overview
You can extend the Kubebuilder API to create your own plugins. If extending the CLI, your
plugin will be implemented in your project and registered to the CLI as has been done by
the SDK project. See its CLI code as an example.
When it is useful?
If you are looking to create plugins which support and work with another language.
If you would like to create helpers and integrations on top of the scaffolds done by
the plugins provided by Kubebuiler.
If you would like to have customized layouts according to your needs.
Therefore, if you have a need you might want to propose a solution by adding a new
plugin which would be shipped with Kubebuilder by default.
However, you might also want to have your own tool to address your specific scenarios
and by taking advantage of what is provided by Kubebuilder as a library. That way, you
can focus on addressing your needs and keep your solutions easier to maintain.
Note that by using Kubebuilder as a library, you can import its plugins and then create
your own plugins that do customizations on top. For instance, Operator-SDK does this
with the manifest and scorecard plugins to add its features. Also see here.
Another option, implemented with the Extensible CLI and Scaffolding Plugins - Phase 2, is
to extend Kubebuilder as a library to create only a specific plugin that can be called and
used with Kubebuilder as well.
You can check the proposal documentation for a better understanding of its motivations.
See the Extensible CLI and Scaffolding Plugins: phase 1, the Extensible CLI and
Scaffolding Plugins: phase 1.5 and the Extensible CLI and Scaffolding Plugins - Phase
2 design docs. Also, you can check the Plugins section.
Language-based Plugins
Kubebuilder offers the Golang-based operator plugins, which will help its CLI tool users
create projects following the Operator Pattern.
The SDK project, for example, has language plugins for Ansible and Helm, which are
similar options but for users who would like to work with these respective languages and
stacks instead of Golang.
In this way, currently, you can Extend the CLI and use the Bundle Plugin to create your
language plugins such as:
mylanguagev1Bundle, _ := plugin.NewBundle(plugin.WithName(language.DefaultNameQualifier),
	plugin.WithVersion(plugin.Version{Number: 1}),
	// extend the common base from Kubebuilder with your language plugin,
	// which will do the scaffolds for the specific language on top of the common base
	plugin.WithPlugins(kustomizecommonv1.Plugin{}, mylanguagev1.Plugin{}),
)
If you do not want to develop your plugin using Golang, you can follow its standard by
using the binary as follows:
Then you can, for example, create your implementations for the sub-commands create
api and create webhook using your language of preference.
Why use the Kubebuilder style?
Kubebuilder and SDK are both broadly adopted projects which leverage the
controller-runtime project. They both allow users to build solutions using the
Operator Pattern and follow common standards.
Adopting these standards can bring significant benefits, such as joining forces on
maintaining the common features provided by Kubebuilder and taking advantage of the
contributions made by the community. This allows you to focus on the specific needs and
requirements of your plugin and use-case.
You will also be able to use custom plugins and options provided by these projects, now
or in the future, as well as by any other project that decides to pursue the same
standards.
Custom Plugins
Note that users are also able to use plugins to customize their scaffolds and address
specific needs.
See that Kubebuilder provides the deploy-image plugin that allows the user to create the
controller & CRs which will deploy and manage an image on the cluster:
This plugin will perform a custom scaffold following the Operator Pattern.
Another example is the grafana plugin, which scaffolds a new folder containing manifests
to visualize operator status on the Grafana Web UI:
In this way, by Extending the Kubebuilder CLI, you can also create custom plugins such
as this one.
deploy-image: https://github.com/kubernetes-
sigs/kubebuilder/tree/v3.7.0/pkg/plugins/golang/deploy-image/v1alpha1
grafana: https://github.com/kubernetes-
sigs/kubebuilder/tree/v3.7.0/pkg/plugins/optional/grafana/v1alpha
Plugin Scaffolding
Your plugin may add code on top of what is scaffolded by default with Kubebuilder sub-
commands( init , create , ...). This is common as you may expect your plugin to:
Create API
Update controller manager logic
Generate corresponding manifests
Boilerplates
The Kubebuilder internal plugins use boilerplates to generate the files of code.
For instance, the go/v3 scaffolds the main.go file by defining an object that implements
the machinery interface. In the implementation of Template.SetTemplateDefaults , the
raw template is set to the body. Such an object implementing the machinery interface will
later be passed to the scaffold execution.
Similarly, you may design your own plugin implementation following this reference. You
can also view the other parts of the code file via the links above.
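As a rough sketch of the same idea (the type name, path, and template body below are illustrative; it assumes the machinery package from sigs.k8s.io/kubebuilder/v3/pkg/machinery):

import "sigs.k8s.io/kubebuilder/v3/pkg/machinery"

// MyFile is a hypothetical template that scaffolds a new file in the user's project.
type MyFile struct {
	machinery.TemplateMixin
}

var _ machinery.Template = &MyFile{}

// SetTemplateDefaults sets the default path and the raw template body that the
// scaffold execution will later render to disk.
func (f *MyFile) SetTemplateDefaults() error {
	if f.Path == "" {
		f.Path = "myfile.txt"
	}
	f.TemplateBody = "scaffolded by my plugin\n"
	return nil
}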
If your plugin is expected to modify part of the existing files with its scaffold, you may use
functions provided by sigs.k8s.io/kubebuilder/v3/pkg/plugin/util. See example of deploy-
image. In brief, the util package helps you customize your scaffold at a lower level.
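For instance, a rough sketch using one of those helpers (the file path, marker, and inserted fragment are illustrative, and the exact helper set may vary between Kubebuilder versions):

// insert a code fragment right after a scaffold marker in an existing file
if err := util.InsertCode(
	"main.go",
	"// +kubebuilder:scaffold:imports",
	"\n\tmyhelpers \"example.com/my-operator/internal/helpers\"",
); err != nil {
	log.Fatal(err)
}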
Notice that Kubebuilder also provides machinery pkg where you can:
Overwrite A File
You might want for example to overwrite a scaffold done by using the option:
f.IfExistsAction = machinery.OverwriteFile
Let’s imagine that you would like to have a helper plugin that is called in a chain
with go/v4 to add customizations on top. Therefore, after generating the code by calling
the init subcommand of go/v4, we would like our plugin to overwrite the Makefile to change
this scaffold. In this way, we would implement the Boilerplate for our
Makefile and then use this option to ensure that it is overwritten.
See example of deploy-image.
Since your plugin may frequently be used together with other plugins, the command for
scaffolding may become cumbersome, e.g.:
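For example, something along these lines (the helper plugin key mycompany-helpers.example.com/v1-alpha is hypothetical):

kubebuilder init --plugins=go/v4,grafana.kubebuilder.io/v1-alpha,mycompany-helpers.example.com/v1-alpha --domain example.org --repo example.org/guestbook-operator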
You can define a method in your scaffolder that calls the plugins' scaffolding
methods in order. See the example of deploy-image.
Alternatively, you can create a plugin bundle to include the target plugins. For instance:
mylanguagev1Bundle, _ := plugin.NewBundle(plugin.WithName(language.DefaultNameQualifier),
	plugin.WithVersion(plugin.Version{Number: 1}),
	// extend the common base from Kubebuilder with your language plugin,
	// which will do the scaffolds for the specific language on top of the common base
	plugin.WithPlugins(kustomizecommonv1.Plugin{}, mylanguagev1.Plugin{}),
)
For example, Kubebuilder generates sample projects based on different plugins to validate
the layouts.
You can also use the TestContext to generate folders of scaffolded projects from
your plugin. The commands are very similar to those mentioned in creating-plugins.
To initialize a project:
By("initializing a project")
err = kbc.Init(
"--plugins", "go/v3",
"--project-version", "3",
"--domain", kbc.Domain,
"--fetch-deps=false",
"--component-config=true",
)
ExpectWithOffset(1, err).NotTo(HaveOccurred())
To define an API:
By("creating API definition")
err = kbc.CreateAPI(
"--group", kbc.Group,
"--version", kbc.Version,
"--kind", kbc.Kind,
"--namespaced",
"--resource",
"--controller",
"--make=false",
)
ExpectWithOffset(1, err).NotTo(HaveOccurred())
Plugins Versioning
Incrementing versions
For more information on how Kubebuilder release versions work, see the semver
documentation.
Project versions should only be increased if a breaking change is introduced in the
PROJECT file scheme itself. Changes to the Go scaffolding or the Kubebuilder CLI do not
affect project version.
Similarly, the introduction of a new plugin version might only lead to a new minor version
release of Kubebuilder, since no breaking change is being made to the CLI itself. It’d only
be a breaking change to Kubebuilder if we remove support for an older plugin version.
See the plugins design doc versioning section for more details on plugin versioning.
The scheme for project version "2" was defined before the concept of plugins was
introduced, so plugin go.kubebuilder.io/v2 is implicitly used for those project
types. Schema for project versions "3" and beyond define a layout key that
informs the plugin system of which plugin to use.
You must also add a migration guide to the migrations section of the Kubebuilder book in
your PR. It should detail the steps required for users to upgrade their projects from vX to
v(X+1)-alpha .
Example
You create a feature that adds a new marker to the file main.go scaffolded by init
that create api will use to update that file. The changes introduced in your feature
would cause errors if used with projects built with plugins go.kubebuilder.io/v2
without users manually updating their projects. Thus, your changes introduce a
breaking change to plugin go.kubebuilder.io , and can only be merged into plugin
version v3-alpha . This plugin’s package should exist already.
FAQ
How is the value provided via the domain flag (i.e.
kubebuilder init --domain example.com) used when
we init a project?
After creating a project, usually you will want to extend the Kubernetes APIs and define
new APIs which will be owned by your project. Therefore, the domain value is tracked in
the PROJECT file which defines the config of your project and will be used as a domain to
create the endpoints of your API(s). Please, ensure that you understand the Groups and
Versions and Kinds, oh my!.
The domain is used as the suffix of the API group, to explicitly show the resource group
category. For example, if you set --domain=example.com , an API created with the group
crew will belong to the group crew.example.com .
How can I use klog, or another logger, instead of the default zap setup?
Replace the default zap configuration in main.go:
opts := zap.Options{
Development: true,
}
opts.BindFlags(flag.CommandLine)
flag.Parse()
ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))
with:
flag.Parse()
ctrl.SetLogger(klog.NewKlogr())
After make run, I see errors like “unable to find leader
election namespace: not running in-cluster...”
You can enable leader election. However, if you are testing the project locally using
the make run target, which runs the manager outside of the cluster, then you might
also need to set the namespace where the leader election resource will be created, as follows:
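For example, a minimal sketch using controller-runtime's manager options in main.go (the scheme and enableLeaderElection variables come from the scaffolded main.go, and the ID and namespace values are illustrative):

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
	Scheme:           scheme,
	LeaderElection:   enableLeaderElection,
	LeaderElectionID: "14be1926.my.domain",
	// required when running outside the cluster, e.g. via make run
	LeaderElectionNamespace: "<project-name>-system",
})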
If you are running the project on the cluster with make deploy target then, you might not
want to add this option. So, you might want to customize this behaviour using
environment variables to only add this option for development purposes, such as:
leaderElectionNS := ""
if os.Getenv("ENABLE_LEADER_ELECATION_NAMESPACE") != "false" {
leaderElectionNS = "<project-name>-system"
}
When you are running the project against an older Kubernetes version (<= 1.21), this
might be caused by a known issue: the mounted token file is set to mode 0600 (see the
solution here). The workaround is:
securityContext:
  runAsNonRoot: true
  fsGroup: 65532 # add this fsGroup to make the token file readable
However, note that this problem has been fixed and will not occur if you deploy the
project on newer Kubernetes versions (>= 1.22).
TODO
If you’re seeing this page, it’s probably because something’s not done in the book yet, or
you stumbled upon an old link. Go see if anyone else has found this or bug the
maintainers.