The Kubebuilder Book


Note: Impatient readers may head straight to Quick Start.

Using Kubebuilder v1 or v2? Check the legacy documentation for v1 or v2

Who is this for

Users of Kubernetes

Users of Kubernetes will develop a deeper understanding of Kubernetes through learning the
fundamental concepts behind how APIs are designed and implemented. This book will teach
readers how to develop their own Kubernetes APIs and the principles from which the core
Kubernetes APIs are designed.

Including:

The structure of Kubernetes APIs and Resources
API versioning semantics
Self-healing
Garbage Collection and Finalizers
Declarative vs Imperative APIs
Level-Based vs Edge-Based APIs
Resources vs Subresources

Kubernetes API extension developers

API extension developers will learn the principles and concepts behind implementing
canonical Kubernetes APIs, as well as simple tools and libraries for rapid execution. This book
covers pitfalls and misconceptions that extension developers commonly encounter.

Including:

How to batch multiple events into a single reconciliation call
How to configure periodic reconciliation (forthcoming)
When to use the lister cache vs live lookups
Garbage Collection vs Finalizers
How to use Declarative vs Webhook Validation
How to implement API versioning
Why Kubernetes APIs
Kubernetes APIs provide consistent and well defined endpoints for objects adhering to a
consistent and rich structure.

This approach has fostered a rich ecosystem of tools and libraries for working with
Kubernetes APIs.

Users work with the APIs through declaring objects as yaml or json config, and using common
tooling to manage the objects.

Building services as Kubernetes APIs provides many advantages to plain old REST, including:

Hosted API endpoints, storage, and validation.
Rich tooling and CLIs such as kubectl and kustomize.
Support for Authn and granular Authz.
Support for API evolution through API versioning and conversion.
Facilitation of adaptive / self-healing APIs that continuously respond to changes in the system state without user intervention.
Kubernetes as a hosting environment

Developers may build and publish their own Kubernetes APIs for installation into running
Kubernetes clusters.

Contribution
If you would like to contribute to either this book or the code, please read our
Contribution guidelines first.

Resources
Repository: sigs.k8s.io/kubebuilder

Slack channel: #kubebuilder

Google Group: kubebuilder@googlegroups.com

Quick Start
This Quick Start guide will cover:

Creating a project
Creating an API
Running locally
Running in-cluster

Prerequisites
go version v1.20.0+
docker version 17.03+.
kubectl version v1.11.3+.
Access to a Kubernetes v1.11.3+ cluster.

Versions and Supportability

Projects created by Kubebuilder contain a Makefile that will install tools at versions
defined at creation time. Those tools are:

kustomize
controller-gen

The versions defined in the Makefile and go.mod files are the versions that have been
tested, so it is recommended to use them.

Installation
Install kubebuilder:

# download kubebuilder and install locally.
curl -L -o kubebuilder "https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)"
chmod +x kubebuilder && mv kubebuilder /usr/local/bin/

Using master branch

You can work with a master snapshot by installing from
https://go.kubebuilder.io/dl/master/$(go env GOOS)/$(go env GOARCH).

Enabling shell autocompletion

Kubebuilder provides autocompletion support via the command
kubebuilder completion <bash|fish|powershell|zsh>, which can save you a lot of typing. For further
information see the completion document.
Create a Project
Create a directory, and then run the init command inside of it to initialize a new project.
Here is an example:

mkdir -p ~/projects/guestbook
cd ~/projects/guestbook
kubebuilder init --domain my.domain --repo my.domain/guestbook

Developing in $GOPATH

If your project is initialized within GOPATH , the implicitly called go mod init will
interpolate the module path for you. Otherwise --repo=<module path> must be set.

Read the Go modules blogpost if unfamiliar with the module system.

Create an API
Run the following command to create a new API (group/version) as webapp/v1 and the new
Kind (CRD) Guestbook on it:

kubebuilder create api --group webapp --version v1 --kind Guestbook

Press Options

If you press y for Create Resource [y/n] and for Create Controller [y/n], this will
create the file api/v1/guestbook_types.go, where the API is defined, and the file
internal/controllers/guestbook_controller.go, where the reconciliation business
logic is implemented for this Kind (CRD).

OPTIONAL: Edit the API definition and the reconciliation business logic. For more info see
Designing an API and What’s in a Controller.

If you are editing the API definitions, generate the manifests such as Custom Resources (CRs)
or Custom Resource Definitions (CRDs) using

make manifests

Click here to see an example. (api/v1/guestbook_types.go)
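The example linked above is collapsed in this copy of the book. As a purely illustrative sketch (not the book's exact file), an edited api/v1/guestbook_types.go might look roughly like this, with a hypothetical Size field added to the spec:

package v1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// GuestbookSpec defines the desired state of Guestbook
type GuestbookSpec struct {
	// Size is a hypothetical field: how many guestbook instances to run.
	Size int32 `json:"size,omitempty"`
}

// GuestbookStatus defines the observed state of Guestbook
type GuestbookStatus struct {
}

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

// Guestbook is the Schema for the guestbooks API
type Guestbook struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   GuestbookSpec   `json:"spec,omitempty"`
	Status GuestbookStatus `json:"status,omitempty"`
}

//+kubebuilder:object:root=true

// GuestbookList contains a list of Guestbook
type GuestbookList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []Guestbook `json:"items"`
}

func init() {
	SchemeBuilder.Register(&Guestbook{}, &GuestbookList{})
}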


Test It Out
You’ll need a Kubernetes cluster to run against. You can use KIND to get a local cluster for
testing, or run against a remote cluster.

Context Used

Your controller will automatically use the current context in your kubeconfig file (i.e.
whatever cluster kubectl cluster-info shows).

Install the CRDs into the cluster:

make install

Run your controller (this will run in the foreground, so switch to a new terminal if you want to
leave it running):

make run

Install Instances of Custom Resources


If you pressed y for Create Resource [y/n] then you created a CR for your CRD in your
samples (make sure to edit them first if you’ve changed the API definition):

kubectl apply -f config/samples/

Run It On the Cluster


Build and push your image to the location specified by IMG :

make docker-build docker-push IMG=<some-registry>/<project-name>:tag

Deploy the controller to the cluster with image specified by IMG :

make deploy IMG=<some-registry>/<project-name>:tag

registry permission

This image must be published to the (personal) registry you specified, and your working
environment must be able to pull it from that registry. Make sure you have the
proper permissions for the registry if the above commands don't work.
RBAC errors

If you encounter RBAC errors, you may need to grant yourself cluster-admin privileges or
be logged in as admin. See Prerequisites for using Kubernetes RBAC on GKE cluster
v1.11.x and older, which may apply to your case.

Uninstall CRDs
To delete your CRDs from the cluster:

make uninstall

Undeploy controller
Undeploy the controller from the cluster:

make undeploy

Next Step
Now, see the architecture concept diagram for a better overview, and follow up with the CronJob
tutorial to better understand how it works by developing a demo example project.

Using Deploy Image plugin to generate APIs and controllers code

Ensure that you check out the Deploy Image Plugin. This plugin allows users to scaffold
API/Controllers to deploy and manage an Operand (image) on the cluster following the
guidelines and best practices. It abstracts the complexities of achieving this goal while
allowing users to customize the generated code.

Architecture Concept Diagram


The following diagram will help you get a better idea of the Kubebuilder concepts and
architecture.
Tutorial: Building CronJob
Too many tutorials start out with some really contrived setup, or some toy application that
gets the basics across, and then stalls out on the more complicated stuff. Instead, this tutorial
should take you through (almost) the full gamut of complexity with Kubebuilder, starting off
simple and building up to something pretty full-featured.

Let’s pretend (and sure, this is a teensy bit contrived) that we’ve finally gotten tired of the
maintenance burden of the non-Kubebuilder implementation of the CronJob controller in
Kubernetes, and we’d like to rewrite it using Kubebuilder.

The job (no pun intended) of the CronJob controller is to run one-off tasks on the Kubernetes
cluster at regular intervals. It does this by building on top of the Job controller, whose task is to
run one-off tasks once, seeing them to completion.

Instead of trying to tackle rewriting the Job controller as well, we’ll use this as an opportunity
to see how to interact with external types.

Following Along vs Jumping Ahead

Note that most of this tutorial is generated from literate Go files that live in the book
source directory: docs/book/src/cronjob-tutorial/testdata. The full, runnable project lives
in project, while intermediate files live directly under the testdata directory.

Scaffolding Out Our Project


As covered in the quick start, we’ll need to scaffold out a new project. Make sure you’ve
installed Kubebuilder, then scaffold out a new project:

# create a project directory, and then run the init command.


mkdir project
cd project
# we'll use a domain of tutorial.kubebuilder.io,
# so all API groups will be <group>.tutorial.kubebuilder.io.
kubebuilder init --domain tutorial.kubebuilder.io --repo tutorial.kubebuilder.io/project

Your project's name defaults to that of your current working directory. You can pass
--project-name=<dns1123-label-string> to set a different project name.

Now that we’ve got a project in place, let’s take a look at what Kubebuilder has scaffolded for
us so far...

Developing in $GOPATH

If your project is initialized within GOPATH , the implicitly called go mod init will
interpolate the module path for you. Otherwise --repo=<module path> must be set.

Read the Go modules blogpost if unfamiliar with the module system.

What’s in a basic project?


When scaffolding out a new project, Kubebuilder provides us with a few basic pieces of
boilerplate.

Build Infrastructure
First up, basic infrastructure for building your project:

go.mod: A new Go module matching our project, with basic dependencies
Makefile: Make targets for building and deploying your controller
PROJECT: Kubebuilder metadata for scaffolding new components

Launch Configuration
We also get launch configurations under the config/ directory. Right now, it just contains
Kustomize YAML definitions required to launch our controller on a cluster, but once we get
started writing our controller, it’ll also hold our CustomResourceDefinitions, RBAC
configuration, and WebhookConfigurations.

config/default contains a Kustomize base for launching the controller in a standard
configuration.

Each other directory contains a different piece of configuration, refactored out into its own
base:

config/manager : launch your controllers as pods in the cluster

config/rbac : permissions required to run your controllers under their own service
account

The Entrypoint
Last, but certainly not least, Kubebuilder scaffolds out the basic entrypoint of our project:
main.go . Let’s take a look at that next...

Every journey needs a start, every program needs a main

$ vim emptymain.go

// Apache License (hidden) ◀

Our package starts out with some basic imports. Particularly:

The core controller-runtime library
The default controller-runtime logging, Zap (more on that a bit later)
package main

import (
"flag"
"fmt"
"os"

// Import all Kubernetes client auth plugins (e.g. Azure, GCP, OIDC, etc.)
// to ensure that exec-entrypoint and run can make use of them.
_ "k8s.io/client-go/plugin/pkg/client/auth"

"k8s.io/apimachinery/pkg/runtime"
utilruntime "k8s.io/apimachinery/pkg/util/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/cache"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
// +kubebuilder:scaffold:imports
)

Every set of controllers needs a Scheme, which provides mappings between Kinds and their
corresponding Go types. We’ll talk a bit more about Kinds when we write our API definition, so
just keep this in mind for later.

var (
	scheme   = runtime.NewScheme()
	setupLog = ctrl.Log.WithName("setup")
)

func init() {
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))

	//+kubebuilder:scaffold:scheme
}

At this point, our main function is fairly simple:

We set up some basic flags for metrics.

We instantiate a manager, which keeps track of running all of our controllers, as well as
setting up shared caches and clients to the API server (notice we tell the manager about
our Scheme).

We run our manager, which in turn runs all of our controllers and webhooks. The
manager is set up to run until it receives a graceful shutdown signal. This way, when
we’re running on Kubernetes, we behave nicely with graceful pod termination.

While we don’t have anything to run just yet, remember where that
+kubebuilder:scaffold:builder comment is -- things’ll get interesting there soon.
func main() {
	var metricsAddr string
	var enableLeaderElection bool
	var probeAddr string
	flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address the metric endpoint binds to.")
	flag.StringVar(&probeAddr, "health-probe-bind-address", ":8081", "The address the probe endpoint binds to.")
	flag.BoolVar(&enableLeaderElection, "leader-elect", false,
		"Enable leader election for controller manager. "+
			"Enabling this will ensure there is only one active controller manager.")
	opts := zap.Options{
		Development: true,
	}
	opts.BindFlags(flag.CommandLine)
	flag.Parse()

	ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme:                 scheme,
		MetricsBindAddress:     metricsAddr,
		Port:                   9443,
		HealthProbeBindAddress: probeAddr,
		LeaderElection:         enableLeaderElection,
		LeaderElectionID:       "80807133.tutorial.kubebuilder.io",
	})
	if err != nil {
		setupLog.Error(err, "unable to start manager")
		os.Exit(1)
	}

Note that the Manager can restrict the namespace that all controllers will watch for resources
by:

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme:                 scheme,
		Namespace:              namespace,
		MetricsBindAddress:     metricsAddr,
		Port:                   9443,
		HealthProbeBindAddress: probeAddr,
		LeaderElection:         enableLeaderElection,
		LeaderElectionID:       "80807133.tutorial.kubebuilder.io",
	})

The above example will change the scope of your project to a single Namespace . In this
scenario, it is also suggested to restrict the provided authorization to this namespace by
replacing the default ClusterRole and ClusterRoleBinding to Role and RoleBinding
respectively. For further information see the Kubernetes documentation about Using RBAC
Authorization.

Also, it is possible to use the MultiNamespacedCacheBuilder to watch a specific set of namespaces:

	var namespaces []string // List of Namespaces

	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		Scheme:                 scheme,
		NewCache:               cache.MultiNamespacedCacheBuilder(namespaces),
		MetricsBindAddress:     fmt.Sprintf("%s:%d", metricsHost, metricsPort),
		Port:                   9443,
		HealthProbeBindAddress: probeAddr,
		LeaderElection:         enableLeaderElection,
		LeaderElectionID:       "80807133.tutorial.kubebuilder.io",
	})

For further information see MultiNamespacedCacheBuilder

// +kubebuilder:scaffold:builder

	if err := mgr.AddHealthzCheck("healthz", healthz.Ping); err != nil {
		setupLog.Error(err, "unable to set up health check")
		os.Exit(1)
	}
	if err := mgr.AddReadyzCheck("readyz", healthz.Ping); err != nil {
		setupLog.Error(err, "unable to set up ready check")
		os.Exit(1)
	}

	setupLog.Info("starting manager")
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		setupLog.Error(err, "problem running manager")
		os.Exit(1)
	}
}

With that out of the way, we can get on to scaffolding our API!

Groups and Versions and Kinds, oh my!


Actually, before we get started with our API, we should talk terminology a bit.

When we talk about APIs in Kubernetes, we often use 4 terms: groups, versions, kinds, and
resources.

Groups and Versions


An API Group in Kubernetes is simply a collection of related functionality. Each group has one
or more versions, which, as the name suggests, allow us to change how an API works over
time.
Kinds and Resources
Each API group-version contains one or more API types, which we call Kinds. While a Kind may
change forms between versions, each form must be able to store all the data of the other
forms, somehow (we can store the data in fields, or in annotations). This means that using an
older API version won’t cause newer data to be lost or corrupted. See the Kubernetes API
guidelines for more information.

You’ll also hear mention of resources on occasion. A resource is simply a use of a Kind in the
API. Often, there’s a one-to-one mapping between Kinds and resources. For instance, the
pods resource corresponds to the Pod Kind. However, sometimes, the same Kind may be
returned by multiple resources. For instance, the Scale Kind is returned by all scale
subresources, like deployments/scale or replicasets/scale . This is what allows the
Kubernetes HorizontalPodAutoscaler to interact with different resources. With CRDs, however,
each Kind will correspond to a single resource.

Notice that resources are always lowercase, and by convention are the lowercase form of the
Kind.

So, how does that correspond to Go?


When we refer to a kind in a particular group-version, we’ll call it a GroupVersionKind, or GVK
for short. Same with resources and GVR. As we’ll see shortly, each GVK corresponds to a given
root Go type in a package.
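As a small, self-contained sketch (not part of any scaffolded file), here is how the GVK and GVR for the CronJob API we build later in this tutorial would be spelled out with the apimachinery schema package:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// The GVK names the Kind within a group-version...
	gvk := schema.GroupVersionKind{
		Group:   "batch.tutorial.kubebuilder.io",
		Version: "v1",
		Kind:    "CronJob",
	}
	// ...while the GVR names the (lowercase, plural) resource that serves it.
	gvr := schema.GroupVersionResource{
		Group:    "batch.tutorial.kubebuilder.io",
		Version:  "v1",
		Resource: "cronjobs",
	}
	fmt.Println(gvk.String(), gvr.String())
}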

Now that we have our terminology straight, we can actually create our API!

So, how can we create our API?


In the next section, Adding a new API, we will see how the tool helps us create our own
APIs with the command kubebuilder create api.

The goal of this command is to create a Custom Resource (CR) and a Custom Resource Definition
(CRD) for our Kind(s). To explore this further, see Extend the Kubernetes API with
CustomResourceDefinitions.

But, why create APIs at all?


New APIs are how we teach Kubernetes about our custom objects. The Go structs are used to
generate a CRD which includes the schema for our data as well as tracking data like what our
new type is called. We can then create instances of our custom objects which will be managed
by our controllers.

Our APIs and resources represent our solutions on the clusters. Basically, the CRDs are
definitions of our customized objects, and the CRs are instances of them.

Ah, do you have an example?


Let’s think about the classic scenario where the goal is to have an application and its database
running on the platform with Kubernetes. Then, one CRD could represent the App, and
another one could represent the DB. By having one CRD to describe the App and another one
for the DB, we will not be hurting concepts such as encapsulation, the single responsibility
principle, and cohesion. Damaging these concepts could cause unexpected side effects, such
as difficulty in extending, reuse, or maintenance, just to mention a few.

In this way, we can create the App CRD, which will have its own controller responsible for
things like creating Deployments that contain the App and creating Services to access it,
and so on. Similarly, we could create a CRD to represent the DB, and deploy a controller
that would manage DB instances.

Err, but what’s that Scheme thing?


The Scheme we saw before is simply a way to keep track of what Go type corresponds to a
given GVK (don’t be overwhelmed by its godocs).

For instance, suppose we mark the "tutorial.kubebuilder.io/api/v1".CronJob{} type as
being in the batch.tutorial.kubebuilder.io/v1 API group (implicitly saying it has the Kind
CronJob).

Then, we can later construct a new &CronJob{} given some JSON from the API server that
says

{
"kind": "CronJob",
"apiVersion": "batch.tutorial.kubebuilder.io/v1",
...
}

or properly look up the group-version when we go to submit a &CronJob{} in an update.
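As a sketch of both directions (assuming the api/v1 package this tutorial creates shortly, imported here as batchv1), a Scheme can be asked for the GVK of a Go value, or for a fresh Go value matching a GVK:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"

	batchv1 "tutorial.kubebuilder.io/project/api/v1"
)

func main() {
	scheme := runtime.NewScheme()
	if err := batchv1.AddToScheme(scheme); err != nil {
		panic(err)
	}

	// Go type -> GVK: what we need when we go to submit a &CronJob{}.
	gvks, _, err := scheme.ObjectKinds(&batchv1.CronJob{})
	if err != nil {
		panic(err)
	}
	fmt.Println("GVKs for CronJob:", gvks)

	// GVK -> fresh Go value: what happens when JSON from the API server is decoded.
	obj, err := scheme.New(schema.GroupVersionKind{
		Group:   "batch.tutorial.kubebuilder.io",
		Version: "v1",
		Kind:    "CronJob",
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("decoded objects will have type %T\n", obj)
}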


Adding a new API
To scaffold out a new Kind (you were paying attention to the last chapter, right?) and
corresponding controller, we can use kubebuilder create api :

kubebuilder create api --group batch --version v1 --kind CronJob

Press y for “Create Resource” and “Create Controller”.

The first time we call this command for each group-version, it will create a directory for the
new group-version.

Supporting older cluster versions

The default CustomResourceDefinition manifests created alongside your Go API types
use API version v1. If your project intends to support Kubernetes cluster versions older
than v1.16, you must set --crd-version v1beta1 and remove
preserveUnknownFields=false from the CRD_OPTIONS Makefile variable. See the
CustomResourceDefinition generation reference for details.

In this case, the api/v1/ directory is created, corresponding to the
batch.tutorial.kubebuilder.io/v1 API group (remember our --domain setting from the beginning?).

It has also added a file for our CronJob Kind, api/v1/cronjob_types.go . Each time we call
the command with a different kind, it’ll add a corresponding new file.

Let’s take a look at what we’ve been given out of the box, then we can move on to filling it out.

$ vim emptyapi.go

// Apache License (hidden) ◀

We start out simply enough: we import the meta/v1 API group, which is not normally exposed
by itself, but instead contains metadata common to all Kubernetes Kinds.

package v1

import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

Next, we define types for the Spec and Status of our Kind. Kubernetes functions by reconciling
desired state ( Spec ) with actual cluster state (other objects’ Status ) and external state, and
then recording what it observed ( Status ). Thus, every functional object includes spec and
status. A few types, like ConfigMap don’t follow this pattern, since they don’t encode desired
state, but most types do.
// EDIT THIS FILE! THIS IS SCAFFOLDING FOR YOU TO OWN!
// NOTE: json tags are required. Any new fields you add must have json tags for the fields to be serialized.

// CronJobSpec defines the desired state of CronJob
type CronJobSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "make" to regenerate code after modifying this file
}

// CronJobStatus defines the observed state of CronJob
type CronJobStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file
}

Next, we define the types corresponding to actual Kinds, CronJob and CronJobList .
CronJob is our root type, and describes the CronJob kind. Like all Kubernetes objects, it
contains TypeMeta (which describes API version and Kind), and also contains ObjectMeta ,
which holds things like name, namespace, and labels.

CronJobList is simply a container for multiple CronJob s. It’s the Kind used in bulk
operations, like LIST.

In general, we never modify either of these -- all modifications go in either Spec or Status.

That little +kubebuilder:object:root comment is called a marker. We’ll see more of them in
a bit, but know that they act as extra metadata, telling controller-tools (our code and YAML
generator) extra information. This particular one tells the object generator that this type
represents a Kind. Then, the object generator generates an implementation of the
runtime.Object interface for us, which is the standard interface that all types representing
Kinds must implement.
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

// CronJob is the Schema for the cronjobs API
type CronJob struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   CronJobSpec   `json:"spec,omitempty"`
	Status CronJobStatus `json:"status,omitempty"`
}

//+kubebuilder:object:root=true

// CronJobList contains a list of CronJob
type CronJobList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []CronJob `json:"items"`
}

Finally, we add the Go types to the API group. This allows us to add the types in this API group
to any Scheme.

func init() {
SchemeBuilder.Register(&CronJob{}, &CronJobList{})
}

Now that we’ve seen the basic structure, let’s fill it out!

Designing an API
In Kubernetes, we have a few rules for how we design APIs. Namely, all serialized fields must
be camelCase , so we use JSON struct tags to specify this. We can also use the omitempty
struct tag to mark that a field should be omitted from serialization when empty.

Fields may use most of the primitive types. Numbers are the exception: for API compatibility
purposes, we accept three forms of numbers: int32 and int64 for integers, and
resource.Quantity for decimals.

Hold up, what's a Quantity?

There’s one other special type that we use: metav1.Time . This functions identically to
time.Time , except that it has a fixed, portable serialization format.
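As a purely illustrative sketch (WidgetSpec is a made-up type, not part of this tutorial), these conventions look like this in practice:

package v1

import (
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// WidgetSpec shows the serialization conventions described above:
// camelCase JSON tags, omitempty on optional fields, int32 for integers,
// resource.Quantity for decimals, and metav1.Time for timestamps.
type WidgetSpec struct {
	// Replicas is a plain integer field.
	Replicas int32 `json:"replicas"`

	// MemoryLimit is a decimal quantity, e.g. "500Mi" or "1.5Gi".
	// +optional
	MemoryLimit *resource.Quantity `json:"memoryLimit,omitempty"`

	// LastRotated uses metav1.Time for its fixed, portable serialization.
	// +optional
	LastRotated *metav1.Time `json:"lastRotated,omitempty"`
}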

With that out of the way, let’s take a look at what our CronJob object looks like!

$ vim project/api/v1/cronjob_types.go

// Apache License (hidden) ◀


package v1

// Imports (hidden) ◀

First, let’s take a look at our spec. As we discussed before, spec holds desired state, so any
“inputs” to our controller go here.

Fundamentally a CronJob needs the following pieces:

A schedule (the cron in CronJob)
A template for the Job to run (the job in CronJob)

We’ll also want a few extras, which will make our users’ lives easier:

A deadline for starting jobs (if we miss this deadline, we’ll just wait till the next scheduled
time)
What to do if multiple jobs would run at once (do we wait? stop the old one? run both?)
A way to pause the running of a CronJob, in case something’s wrong with it
Limits on old job history

Remember, since we never read our own status, we need to have some other way to keep
track of whether a job has run. We can use at least one old job to do this.

We’ll use several markers ( // +comment ) to specify additional metadata. These will be used by
controller-tools when generating our CRD manifest. As we’ll see in a bit, controller-tools will
also use GoDoc to form descriptions for the fields.
// CronJobSpec defines the desired state of CronJob
type CronJobSpec struct {
	//+kubebuilder:validation:MinLength=0

	// The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron.
	Schedule string `json:"schedule"`

	//+kubebuilder:validation:Minimum=0

	// Optional deadline in seconds for starting the job if it misses scheduled
	// time for any reason.  Missed jobs executions will be counted as failed ones.
	// +optional
	StartingDeadlineSeconds *int64 `json:"startingDeadlineSeconds,omitempty"`

	// Specifies how to treat concurrent executions of a Job.
	// Valid values are:
	// - "Allow" (default): allows CronJobs to run concurrently;
	// - "Forbid": forbids concurrent runs, skipping next run if previous run hasn't finished yet;
	// - "Replace": cancels currently running job and replaces it with a new one
	// +optional
	ConcurrencyPolicy ConcurrencyPolicy `json:"concurrencyPolicy,omitempty"`

	// This flag tells the controller to suspend subsequent executions, it does
	// not apply to already started executions.  Defaults to false.
	// +optional
	Suspend *bool `json:"suspend,omitempty"`

	// Specifies the job that will be created when executing a CronJob.
	JobTemplate batchv1.JobTemplateSpec `json:"jobTemplate"`

	//+kubebuilder:validation:Minimum=0

	// The number of successful finished jobs to retain.
	// This is a pointer to distinguish between explicit zero and not specified.
	// +optional
	SuccessfulJobsHistoryLimit *int32 `json:"successfulJobsHistoryLimit,omitempty"`

	//+kubebuilder:validation:Minimum=0

	// The number of failed finished jobs to retain.
	// This is a pointer to distinguish between explicit zero and not specified.
	// +optional
	FailedJobsHistoryLimit *int32 `json:"failedJobsHistoryLimit,omitempty"`
}

We define a custom type to hold our concurrency policy. It’s actually just a string under the
hood, but the type gives extra documentation, and allows us to attach validation on the type
instead of the field, making the validation more easily reusable.
// ConcurrencyPolicy describes how the job will be handled.
// Only one of the following concurrent policies may be specified.
// If none of the following policies is specified, the default one
// is AllowConcurrent.
// +kubebuilder:validation:Enum=Allow;Forbid;Replace
type ConcurrencyPolicy string

const (
	// AllowConcurrent allows CronJobs to run concurrently.
	AllowConcurrent ConcurrencyPolicy = "Allow"

	// ForbidConcurrent forbids concurrent runs, skipping next run if previous
	// hasn't finished yet.
	ForbidConcurrent ConcurrencyPolicy = "Forbid"

	// ReplaceConcurrent cancels currently running job and replaces it with a new one.
	ReplaceConcurrent ConcurrencyPolicy = "Replace"
)

Next, let’s design our status, which holds observed state. It contains any information we want
users or other controllers to be able to easily obtain.

We’ll keep a list of actively running jobs, as well as the last time that we successfully ran our
job. Notice that we use metav1.Time instead of time.Time to get the stable serialization, as
mentioned above.

// CronJobStatus defines the observed state of CronJob
type CronJobStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	// A list of pointers to currently running jobs.
	// +optional
	Active []corev1.ObjectReference `json:"active,omitempty"`

	// Information when was the last time the job was successfully scheduled.
	// +optional
	LastScheduleTime *metav1.Time `json:"lastScheduleTime,omitempty"`
}

Finally, we have the rest of the boilerplate that we’ve already discussed. As previously noted,
we don’t need to change this, except to mark that we want a status subresource, so that we
behave like built-in kubernetes types.

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

// CronJob is the Schema for the cronjobs API
type CronJob struct {

// Root Object Definitions (hidden) ◀


Now that we have an API, we’ll need to write a controller to actually implement the
functionality.

A Brief Aside: What’s the rest of this stuff?


If you’ve taken a peek at the rest of the files in the api/v1/ directory, you might have noticed
two additional files beyond cronjob_types.go : groupversion_info.go and
zz_generated.deepcopy.go .

Neither of these files ever needs to be edited (the former stays the same and the latter is
autogenerated), but it’s useful to know what’s in them.

groupversion_info.go
groupversion_info.go contains common metadata about the group-version:

$ vim project/api/v1/groupversion_info.go

// Apache License (hidden) ◀

First, we have some package-level markers that denote that there are Kubernetes objects in
this package, and that this package represents the group batch.tutorial.kubebuilder.io .
The object generator makes use of the former, while the latter is used by the CRD generator
to generate the right metadata for the CRDs it creates from this package.

// Package v1 contains API Schema definitions for the batch v1 API group
// +kubebuilder:object:generate=true
// +groupName=batch.tutorial.kubebuilder.io
package v1

import (
"k8s.io/apimachinery/pkg/runtime/schema"
"sigs.k8s.io/controller-runtime/pkg/scheme"
)

Then, we have the commonly useful variables that help us set up our Scheme. Since we need
to use all the types in this package in our controller, it’s helpful (and the convention) to have a
convenient method to add all the types to some other Scheme . SchemeBuilder makes this
easy for us.
var (
	// GroupVersion is group version used to register these objects
	GroupVersion = schema.GroupVersion{Group: "batch.tutorial.kubebuilder.io", Version: "v1"}

	// SchemeBuilder is used to add go types to the GroupVersionKind scheme
	SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}

	// AddToScheme adds the types in this group-version to the given scheme.
	AddToScheme = SchemeBuilder.AddToScheme
)

zz_generated.deepcopy.go
zz_generated.deepcopy.go contains the autogenerated implementation of the
aforementioned runtime.Object interface, which marks all of our root types as representing
Kinds.

The core of the runtime.Object interface is a deep-copy method, DeepCopyObject .

The object generator in controller-tools also generates two other handy methods for each
root type and all its sub-types: DeepCopy and DeepCopyInto .
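The generated code for CronJob follows a standard pattern; roughly, it looks like this (a sketch -- controller-gen regenerates the real file in package v1, which also needs the "k8s.io/apimachinery/pkg/runtime" import):

func (in *CronJob) DeepCopyInto(out *CronJob) {
	*out = *in
	out.TypeMeta = in.TypeMeta
	in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
	in.Spec.DeepCopyInto(&out.Spec)
	in.Status.DeepCopyInto(&out.Status)
}

func (in *CronJob) DeepCopy() *CronJob {
	if in == nil {
		return nil
	}
	out := new(CronJob)
	in.DeepCopyInto(out)
	return out
}

// DeepCopyObject is what actually satisfies runtime.Object.
func (in *CronJob) DeepCopyObject() runtime.Object {
	if c := in.DeepCopy(); c != nil {
		return c
	}
	return nil
}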

What’s in a controller?
Controllers are the core of Kubernetes, and of any operator.

It’s a controller’s job to ensure that, for any given object, the actual state of the world (both the
cluster state, and potentially external state like running containers for Kubelet or
loadbalancers for a cloud provider) matches the desired state in the object. Each controller
focuses on one root Kind, but may interact with other Kinds.

We call this process reconciling.

In controller-runtime, the logic that implements the reconciling for a specific kind is called a
Reconciler. A reconciler takes the name of an object, and returns whether or not we need to
try again (e.g. in case of errors or periodic controllers, like the HorizontalPodAutoscaler).

$ vim emptycontroller.go

// Apache License (hidden) ◀

First, we start out with some standard imports. As before, we need the core controller-
runtime library, as well as the client package, and the package for our API types.
package controllers

import (
"context"

"k8s.io/apimachinery/pkg/runtime"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"

batchv1 "tutorial.kubebuilder.io/project/api/v1"
)

Next, kubebuilder has scaffolded a basic reconciler struct for us. Pretty much every reconciler
needs to log, and needs to be able to fetch objects, so these are added out of the box.

// CronJobReconciler reconciles a CronJob object
type CronJobReconciler struct {
	client.Client
	Scheme *runtime.Scheme
}

Most controllers eventually end up running on the cluster, so they need RBAC permissions,
which we specify using controller-tools RBAC markers. These are the bare minimum
permissions needed to run. As we add more functionality, we’ll need to revisit these.

//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/status,verbs=get;update;patch

The ClusterRole manifest at config/rbac/role.yaml is generated from the above markers via controller-gen with the following command:

make manifests

NOTE: If you receive an error, please run the specified command in the error and re-run make
manifests .

Reconcile actually performs the reconciling for a single named object. Our Request just has
a name, but we can use the client to fetch that object from the cache.

We return an empty result and no error, which indicates to controller-runtime that we've
successfully reconciled this object and don't need to try again until there are some changes.

Most controllers need a logging handle and a context, so we set them up here.

The context is used to allow cancelation of requests, and potentially things like tracing. It’s the
first argument to all client methods. The Background context is just a basic context without
any extra data or timing restrictions.

The logging handle lets us log. controller-runtime uses structured logging through a library
called logr. As we’ll see shortly, logging works by attaching key-value pairs to a static message.
We can pre-assign some pairs at the top of our reconcile method to have those attached to all
log lines in this reconciler.

func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = log.FromContext(ctx)

	// your logic here

	return ctrl.Result{}, nil
}

Finally, we add this reconciler to the manager, so that it gets started when the manager is
started.

For now, we just note that this reconciler operates on CronJob s. Later, we’ll use this to mark
that we care about related objects as well.

func (r *CronJobReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&batchv1.CronJob{}).
		Complete(r)
}

Now that we’ve seen the basic structure of a reconciler, let’s fill out the logic for CronJob s.

Implementing a controller
The basic logic of our CronJob controller is this:

1. Load the named CronJob

2. List all active jobs, and update the status

3. Clean up old jobs according to the history limits

4. Check if we’re suspended (and don’t do anything else if we are)

5. Get the next scheduled run

6. Run a new job if it’s on schedule, not past the deadline, and not blocked by our
concurrency policy

7. Requeue when we either see a running job (done automatically) or it’s time for the next
scheduled run.
$ vim project/internal/controller/cronjob_controller.go

// Apache License (hidden) ◀

We’ll start out with some imports. You’ll see below that we’ll need a few more imports than
those scaffolded for us. We’ll talk about each one when we use it.

package controller

import (
"context"
"fmt"
"sort"
"time"

"github.com/robfig/cron"
kbatch "k8s.io/api/batch/v1"
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
ref "k8s.io/client-go/tools/reference"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/log"

batchv1 "tutorial.kubebuilder.io/project/api/v1"
)

Next, we’ll need a Clock, which will allow us to fake timing in our tests.

// CronJobReconciler reconciles a CronJob object
type CronJobReconciler struct {
	client.Client
	Scheme *runtime.Scheme
	Clock
}

// Clock (hidden) ◀
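The collapsed Clock definition is roughly a one-method interface plus a trivial real implementation, so tests can substitute a fake clock:

type realClock struct{}

func (realClock) Now() time.Time { return time.Now() }

// Clock knows how to get the current time.
// It can be used to fake out timing for testing.
type Clock interface {
	Now() time.Time
}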

Notice that we need a few more RBAC permissions -- since we’re creating and managing jobs
now, we’ll need permissions for those, which means adding a couple more markers.

//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/finalizers,verbs=update
//+kubebuilder:rbac:groups=batch,resources=jobs,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=batch,resources=jobs/status,verbs=get

Now, we get to the heart of the controller -- the reconciler logic.


var (
	scheduledTimeAnnotation = "batch.tutorial.kubebuilder.io/scheduled-at"
)

// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
// TODO(user): Modify the Reconcile function to compare the state specified by
// the CronJob object against the actual cluster state, and then
// perform operations to make the cluster state reflect the state specified by
// the user.
//
// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.15.0/pkg/reconcile
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	log := log.FromContext(ctx)

1: Load the CronJob by name

We’ll fetch the CronJob using our client. All client methods take a context (to allow for
cancellation) as their first argument, and the object in question as their last. Get is a bit
special, in that it takes a NamespacedName as the middle argument (most don’t have a middle
argument, as we’ll see below).

Many client methods also take variadic options at the end.

	var cronJob batchv1.CronJob
	if err := r.Get(ctx, req.NamespacedName, &cronJob); err != nil {
		log.Error(err, "unable to fetch CronJob")
		// we'll ignore not-found errors, since they can't be fixed by an immediate
		// requeue (we'll need to wait for a new notification), and we can get them
		// on deleted requests.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

2: List all active jobs, and update the status

To fully update our status, we’ll need to list all child jobs in this namespace that belong to this
CronJob. Similarly to Get, we can use the List method to list the child jobs. Notice that we use
variadic options to set the namespace and field match (which is actually an index lookup that
we set up below).
	var childJobs kbatch.JobList
	if err := r.List(ctx, &childJobs, client.InNamespace(req.Namespace), client.MatchingFields{jobOwnerKey: req.Name}); err != nil {
		log.Error(err, "unable to list child Jobs")
		return ctrl.Result{}, err
	}

What is this index about?

The reconciler fetches all jobs owned by the cronjob for the status. As our number of
cronjobs increases, looking these up can become quite slow as we have to filter through
all of them. For a more efficient lookup, these jobs will be indexed locally on the
controller's name. A jobOwnerKey field is added to the cached job objects. This key
references the owning controller and functions as the index. Later in this document we
will configure the manager to actually index this field.

Once we have all the jobs we own, we’ll split them into active, successful, and failed jobs,
keeping track of the most recent run so that we can record it in status. Remember, status
should be able to be reconstituted from the state of the world, so it’s generally not a good
idea to read from the status of the root object. Instead, you should reconstruct it every run.
That’s what we’ll do here.

We can check if a job is “finished” and whether it succeeded or failed using status conditions.
We’ll put that logic in a helper to make our code cleaner.
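That helper, isJobFinished, is collapsed just below; it is roughly the following -- a Job counts as finished once its Complete or Failed condition is true, and we report which one it was (a sketch based on the tutorial source, reusing the kbatch and corev1 imports above):

func isJobFinished(job *kbatch.Job) (bool, kbatch.JobConditionType) {
	for _, c := range job.Status.Conditions {
		if (c.Type == kbatch.JobComplete || c.Type == kbatch.JobFailed) && c.Status == corev1.ConditionTrue {
			return true, c.Type
		}
	}

	return false, ""
}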

	// find the active list of jobs
	var activeJobs []*kbatch.Job
	var successfulJobs []*kbatch.Job
	var failedJobs []*kbatch.Job
	var mostRecentTime *time.Time // find the last run so we can update the status

// isJobFinished (hidden) ◀

// getScheduledTimeForJob (hidden) ◀
	for i, job := range childJobs.Items {
		_, finishedType := isJobFinished(&job)
		switch finishedType {
		case "": // ongoing
			activeJobs = append(activeJobs, &childJobs.Items[i])
		case kbatch.JobFailed:
			failedJobs = append(failedJobs, &childJobs.Items[i])
		case kbatch.JobComplete:
			successfulJobs = append(successfulJobs, &childJobs.Items[i])
		}

		// We'll store the launch time in an annotation, so we'll reconstitute that from
		// the active jobs themselves.
		scheduledTimeForJob, err := getScheduledTimeForJob(&job)
		if err != nil {
			log.Error(err, "unable to parse schedule time for child job", "job", &job)
			continue
		}
		if scheduledTimeForJob != nil {
			if mostRecentTime == nil {
				mostRecentTime = scheduledTimeForJob
			} else if mostRecentTime.Before(*scheduledTimeForJob) {
				mostRecentTime = scheduledTimeForJob
			}
		}
	}

	if mostRecentTime != nil {
		cronJob.Status.LastScheduleTime = &metav1.Time{Time: *mostRecentTime}
	} else {
		cronJob.Status.LastScheduleTime = nil
	}
	cronJob.Status.Active = nil
	for _, activeJob := range activeJobs {
		jobRef, err := ref.GetReference(r.Scheme, activeJob)
		if err != nil {
			log.Error(err, "unable to make reference to active job", "job", activeJob)
			continue
		}
		cronJob.Status.Active = append(cronJob.Status.Active, *jobRef)
	}

Here, we’ll log how many jobs we observed at a slightly higher logging level, for debugging.
Notice how instead of using a format string, we use a fixed message, and attach key-value
pairs with the extra information. This makes it easier to filter and query log lines.

	log.V(1).Info("job count", "active jobs", len(activeJobs), "successful jobs", len(successfulJobs), "failed jobs", len(failedJobs))

Using the data we’ve gathered, we’ll update the status of our CRD. Just like before, we use our
client. To specifically update the status subresource, we’ll use the Status part of the client,
with the Update method.
The status subresource ignores changes to spec, so it’s less likely to conflict with any other
updates, and can have separate permissions.

	if err := r.Status().Update(ctx, &cronJob); err != nil {
		log.Error(err, "unable to update CronJob status")
		return ctrl.Result{}, err
	}

Once we’ve updated our status, we can move on to ensuring that the status of the world
matches what we want in our spec.

3: Clean up old jobs according to the history limit

First, we’ll try to clean up old jobs, so that we don’t leave too many lying around.
	// NB: deleting these are "best effort" -- if we fail on a particular one,
	// we won't requeue just to finish the deleting.
	if cronJob.Spec.FailedJobsHistoryLimit != nil {
		sort.Slice(failedJobs, func(i, j int) bool {
			if failedJobs[i].Status.StartTime == nil {
				return failedJobs[j].Status.StartTime != nil
			}
			return failedJobs[i].Status.StartTime.Before(failedJobs[j].Status.StartTime)
		})
		for i, job := range failedJobs {
			if int32(i) >= int32(len(failedJobs))-*cronJob.Spec.FailedJobsHistoryLimit {
				break
			}
			if err := r.Delete(ctx, job, client.PropagationPolicy(metav1.DeletePropagationBackground)); client.IgnoreNotFound(err) != nil {
				log.Error(err, "unable to delete old failed job", "job", job)
			} else {
				log.V(0).Info("deleted old failed job", "job", job)
			}
		}
	}

	if cronJob.Spec.SuccessfulJobsHistoryLimit != nil {
		sort.Slice(successfulJobs, func(i, j int) bool {
			if successfulJobs[i].Status.StartTime == nil {
				return successfulJobs[j].Status.StartTime != nil
			}
			return successfulJobs[i].Status.StartTime.Before(successfulJobs[j].Status.StartTime)
		})
		for i, job := range successfulJobs {
			if int32(i) >= int32(len(successfulJobs))-*cronJob.Spec.SuccessfulJobsHistoryLimit {
				break
			}
			if err := r.Delete(ctx, job, client.PropagationPolicy(metav1.DeletePropagationBackground)); err != nil {
				log.Error(err, "unable to delete old successful job", "job", job)
			} else {
				log.V(0).Info("deleted old successful job", "job", job)
			}
		}
	}

4: Check if we’re suspended

If this object is suspended, we don’t want to run any jobs, so we’ll stop now. This is useful if
something’s broken with the job we’re running and we want to pause runs to investigate or
putz with the cluster, without deleting the object.
	if cronJob.Spec.Suspend != nil && *cronJob.Spec.Suspend {
		log.V(1).Info("cronjob suspended, skipping")
		return ctrl.Result{}, nil
	}

5: Get the next scheduled run

If we’re not paused, we’ll need to calculate the next scheduled run, and whether or not we’ve
got a run that we haven’t processed yet.
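The getNextSchedule helper is collapsed just below. A simplified sketch of its core logic (the tutorial's full version also honours StartingDeadlineSeconds and guards against more than 100 missed starts), reusing the cron, fmt, time, and batchv1 imports above:

func getNextSchedule(cronJob *batchv1.CronJob, now time.Time) (lastMissed time.Time, next time.Time, err error) {
	sched, err := cron.ParseStandard(cronJob.Spec.Schedule)
	if err != nil {
		return time.Time{}, time.Time{}, fmt.Errorf("unparseable schedule %q: %v", cronJob.Spec.Schedule, err)
	}

	// start counting from the last recorded run, falling back to creation time
	earliestTime := cronJob.ObjectMeta.CreationTimestamp.Time
	if cronJob.Status.LastScheduleTime != nil {
		earliestTime = cronJob.Status.LastScheduleTime.Time
	}
	if earliestTime.After(now) {
		return time.Time{}, sched.Next(now), nil
	}

	// walk the schedule forward, remembering the most recent missed run
	for t := sched.Next(earliestTime); !t.After(now); t = sched.Next(t) {
		lastMissed = t
	}
	return lastMissed, sched.Next(now), nil
}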

// getNextSchedule (hidden) ◀

	// figure out the next times that we need to create
	// jobs at (or anything we missed).
	missedRun, nextRun, err := getNextSchedule(&cronJob, r.Now())
	if err != nil {
		log.Error(err, "unable to figure out CronJob schedule")
		// we don't really care about requeuing until we get an update that
		// fixes the schedule, so don't return an error
		return ctrl.Result{}, nil
	}

We’ll prep our eventual request to requeue until the next job, and then figure out if we
actually need to run.

	scheduledResult := ctrl.Result{RequeueAfter: nextRun.Sub(r.Now())} // save this so we can re-use it elsewhere
	log = log.WithValues("now", r.Now(), "next run", nextRun)

6: Run a new job if it’s on schedule, not past the deadline, and not blocked
by our concurrency policy

If we’ve missed a run, and we’re still within the deadline to start it, we’ll need to run a job.
	if missedRun.IsZero() {
		log.V(1).Info("no upcoming scheduled times, sleeping until next")
		return scheduledResult, nil
	}

	// make sure we're not too late to start the run
	log = log.WithValues("current run", missedRun)
	tooLate := false
	if cronJob.Spec.StartingDeadlineSeconds != nil {
		tooLate = missedRun.Add(time.Duration(*cronJob.Spec.StartingDeadlineSeconds) * time.Second).Before(r.Now())
	}
	if tooLate {
		log.V(1).Info("missed starting deadline for last run, sleeping till next")
		// TODO(directxman12): events
		return scheduledResult, nil
	}

If we actually have to run a job, we’ll need to either wait till existing ones finish, replace the
existing ones, or just add new ones. If our information is out of date due to cache delay, we’ll
get a requeue when we get up-to-date information.

	// figure out how to run this job -- concurrency policy might forbid us from running
	// multiple at the same time...
	if cronJob.Spec.ConcurrencyPolicy == batchv1.ForbidConcurrent && len(activeJobs) > 0 {
		log.V(1).Info("concurrency policy blocks concurrent runs, skipping", "num active", len(activeJobs))
		return scheduledResult, nil
	}

	// ...or instruct us to replace existing ones...
	if cronJob.Spec.ConcurrencyPolicy == batchv1.ReplaceConcurrent {
		for _, activeJob := range activeJobs {
			// we don't care if the job was already deleted
			if err := r.Delete(ctx, activeJob, client.PropagationPolicy(metav1.DeletePropagationBackground)); client.IgnoreNotFound(err) != nil {
				log.Error(err, "unable to delete active job", "job", activeJob)
				return ctrl.Result{}, err
			}
		}
	}

Once we’ve figured out what to do with existing jobs, we’ll actually create our desired job

// constructJobForCronJob (hidden) ◀
	// actually make the job...
	job, err := constructJobForCronJob(&cronJob, missedRun)
	if err != nil {
		log.Error(err, "unable to construct job from template")
		// don't bother requeuing until we get a change to the spec
		return scheduledResult, nil
	}

	// ...and create it on the cluster
	if err := r.Create(ctx, job); err != nil {
		log.Error(err, "unable to create Job for CronJob", "job", job)
		return ctrl.Result{}, err
	}

	log.V(1).Info("created Job for CronJob run", "job", job)
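For reference, the constructJobForCronJob helper collapsed above is roughly the following closure (a sketch based on the tutorial source): it builds a Job from the CronJob's template, records the scheduled time in our annotation, and sets the owner reference so that the index and the Owns() watch set up below pick the Job up:

	constructJobForCronJob := func(cronJob *batchv1.CronJob, scheduledTime time.Time) (*kbatch.Job, error) {
		// job names need to be unique per scheduled start, so include the timestamp
		name := fmt.Sprintf("%s-%d", cronJob.Name, scheduledTime.Unix())

		job := &kbatch.Job{
			ObjectMeta: metav1.ObjectMeta{
				Labels:      make(map[string]string),
				Annotations: make(map[string]string),
				Name:        name,
				Namespace:   cronJob.Namespace,
			},
			Spec: *cronJob.Spec.JobTemplate.Spec.DeepCopy(),
		}
		for k, v := range cronJob.Spec.JobTemplate.Annotations {
			job.Annotations[k] = v
		}
		job.Annotations[scheduledTimeAnnotation] = scheduledTime.Format(time.RFC3339)
		for k, v := range cronJob.Spec.JobTemplate.Labels {
			job.Labels[k] = v
		}
		if err := ctrl.SetControllerReference(cronJob, job, r.Scheme); err != nil {
			return nil, err
		}

		return job, nil
	}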

7: Requeue when we either see a running job or it’s time for the next
scheduled run

Finally, we’ll return the result that we prepped above, that says we want to requeue when our
next run would need to occur. This is taken as a maximum deadline -- if something else
changes in between, like our job starts or finishes, we get modified, etc, we might reconcile
again sooner.

	// we'll requeue once we see the running job, and update our status
	return scheduledResult, nil
}

Setup

Finally, we’ll update our setup. In order to allow our reconciler to quickly look up Jobs by their
owner, we’ll need an index. We declare an index key that we can later use with the client as a
pseudo-field name, and then describe how to extract the indexed value from the Job object.
The indexer will automatically take care of namespaces for us, so we just have to extract the
owner name if the Job has a CronJob owner.

Additionally, we’ll inform the manager that this controller owns some Jobs, so that it will
automatically call Reconcile on the underlying CronJob when a Job changes, is deleted, etc.
var (
	jobOwnerKey = ".metadata.controller"
	apiGVStr    = batchv1.GroupVersion.String()
)

// SetupWithManager sets up the controller with the Manager.
func (r *CronJobReconciler) SetupWithManager(mgr ctrl.Manager) error {
	// set up a real clock, since we're not in a test
	if r.Clock == nil {
		r.Clock = realClock{}
	}

	if err := mgr.GetFieldIndexer().IndexField(context.Background(), &kbatch.Job{}, jobOwnerKey, func(rawObj client.Object) []string {
		// grab the job object, extract the owner...
		job := rawObj.(*kbatch.Job)
		owner := metav1.GetControllerOf(job)
		if owner == nil {
			return nil
		}
		// ...make sure it's a CronJob...
		if owner.APIVersion != apiGVStr || owner.Kind != "CronJob" {
			return nil
		}

		// ...and if so, return it
		return []string{owner.Name}
	}); err != nil {
		return err
	}

	return ctrl.NewControllerManagedBy(mgr).
		For(&batchv1.CronJob{}).
		Owns(&kbatch.Job{}).
		Complete(r)
}

That was a doozy, but now we’ve got a working controller. Let’s test against the cluster, then, if
we don’t have any issues, deploy it!

You said something about main?


But first, remember how we said we’d come back to main.go again? Let’s take a look and see
what’s changed, and what we need to add.

$ vim project/cmd/main.go

// Apache License (hidden) ◀

// Imports (hidden) ◀

The first difference to notice is that kubebuilder has added the new API group’s package
( batchv1 ) to our scheme. This means that we can use those objects in our controller.
If we were using any other CRD, we would have to add its scheme in the same way. Built-in
types such as Job have their scheme added by clientgoscheme.

var (
	scheme   = runtime.NewScheme()
	setupLog = ctrl.Log.WithName("setup")
)

func init() {
	utilruntime.Must(clientgoscheme.AddToScheme(scheme))

	utilruntime.Must(batchv1.AddToScheme(scheme))
	//+kubebuilder:scaffold:scheme
}

The other thing that’s changed is that kubebuilder has added a block calling our CronJob
controller’s SetupWithManager method.

func main() {

	// old stuff (hidden) ◀

	if err = (&controller.CronJobReconciler{
		Client: mgr.GetClient(),
		Scheme: mgr.GetScheme(),
	}).SetupWithManager(mgr); err != nil {
		setupLog.Error(err, "unable to create controller", "controller", "CronJob")
		os.Exit(1)
	}

	// old stuff (hidden) ◀

Now we can implement our controller.

Implementing defaulting/validating
webhooks
If you want to implement admission webhooks for your CRD, the only thing you need to do is
to implement the Defaulter and (or) the Validator interface.

Kubebuilder takes care of the rest for you, such as

1. Creating the webhook server.
2. Ensuring the server has been added in the manager.
3. Creating handlers for your webhooks.
4. Registering each handler with a path in your server.

First, let's scaffold the webhooks for our CRD (CronJob). We'll need to run the following
command with the --defaulting and --programmatic-validation flags (since our test
project will use defaulting and validating webhooks):

kubebuilder create webhook --group batch --version v1 --kind CronJob --defaulting --programmatic-validation

This will scaffold the webhook functions and register your webhook with the manager in your
main.go for you.

Supporting older cluster versions

The default WebhookConfiguration manifests created alongside your Go webhook
implementation use API version v1. If your project intends to support Kubernetes cluster
versions older than v1.16, set --webhook-version v1beta1. See the webhook reference
for more information.

$ vim project/api/v1/cronjob_webhook.go

// Apache License (hidden) ◀

// Go imports (hidden) ◀

Next, we’ll setup a logger for the webhooks.

var cronjoblog = logf.Log.WithName("cronjob-resource")

Then, we set up the webhook with the manager.

func (r *CronJob) SetupWebhookWithManager(mgr ctrl.Manager) error {
	return ctrl.NewWebhookManagedBy(mgr).
		For(r).
		Complete()
}

Notice that we use kubebuilder markers to generate webhook manifests. This marker is
responsible for generating a mutating webhook manifest.

The meaning of each marker can be found here.

//+kubebuilder:webhook:path=/mutate-batch-tutorial-kubebuilder-io-v1-cronjob,mutating=true,failurePolicy=fail,sideEffects=None,groups=batch.tutorial.kubebuilder.io,resources=cronjobs,verbs=create;update,versions=v1,name=mcronjob.kb.io,admissionReviewVersions=v1

We use the webhook.Defaulter interface to set defaults to our CRD. A webhook will
automatically be served that calls this defaulting.

The Default method is expected to mutate the receiver, setting the defaults.
var _ webhook.Defaulter = &CronJob{}

// Default implements webhook.Defaulter so a webhook will be registered for the type
func (r *CronJob) Default() {
	cronjoblog.Info("default", "name", r.Name)

	if r.Spec.ConcurrencyPolicy == "" {
		r.Spec.ConcurrencyPolicy = AllowConcurrent
	}
	if r.Spec.Suspend == nil {
		r.Spec.Suspend = new(bool)
	}
	if r.Spec.SuccessfulJobsHistoryLimit == nil {
		r.Spec.SuccessfulJobsHistoryLimit = new(int32)
		*r.Spec.SuccessfulJobsHistoryLimit = 3
	}
	if r.Spec.FailedJobsHistoryLimit == nil {
		r.Spec.FailedJobsHistoryLimit = new(int32)
		*r.Spec.FailedJobsHistoryLimit = 1
	}
}

This marker is responsible for generating a validating webhook manifest.

//+kubebuilder:webhook:verbs=create;update;delete,path=/validate-batch-tutorial-kubebuilder-io-v1-cronjob,mutating=false,failurePolicy=fail,sideEffects=None,groups=batch.tutorial.kubebuilder.io,resources=cronjobs,versions=v1,name=vcronjob.kb.io,admissionReviewVersions=v1

We can validate our CRD beyond what’s possible with declarative validation. Generally,
declarative validation should be sufficient, but sometimes more advanced use cases call for
complex validation.

For instance, we’ll see below that we use this to validate a well-formed cron schedule without
making up a long regular expression.

If the webhook.Validator interface is implemented, a webhook will automatically be served that
calls the validation.

The ValidateCreate, ValidateUpdate and ValidateDelete methods are expected to
validate their receiver upon creation, update, and deletion respectively. We separate out
ValidateCreate from ValidateUpdate to allow behavior like making certain fields immutable, so
that they can only be set on creation. ValidateDelete is also separated from ValidateUpdate to
allow different validation behavior on deletion. Here, however, we just use the same shared
validation for ValidateCreate and ValidateUpdate. And we do nothing in ValidateDelete,
since we don't need to validate anything on deletion.
var _ webhook.Validator = &CronJob{}

// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
func (r *CronJob) ValidateCreate() (admission.Warnings, error) {
	cronjoblog.Info("validate create", "name", r.Name)

	return nil, r.validateCronJob()
}

// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type
func (r *CronJob) ValidateUpdate(old runtime.Object) (admission.Warnings, error) {
	cronjoblog.Info("validate update", "name", r.Name)

	return nil, r.validateCronJob()
}

// ValidateDelete implements webhook.Validator so a webhook will be registered for the type
func (r *CronJob) ValidateDelete() (admission.Warnings, error) {
	cronjoblog.Info("validate delete", "name", r.Name)

	// TODO(user): fill in your validation logic upon object deletion.
	return nil, nil
}
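Here is the immutability sketch mentioned above. It is illustrative only -- the tutorial’s CronJob keeps its schedule mutable, and this variant is not part of the scaffolding -- but it shows why ValidateUpdate receives the old object. It assumes the fmt package is imported in addition to the imports already used in this file.

// Illustrative variant of ValidateUpdate (not the scaffolded code above):
// reject changes to a field by comparing against the old object.
func (r *CronJob) ValidateUpdate(old runtime.Object) (admission.Warnings, error) {
	oldCronJob, ok := old.(*CronJob)
	if !ok {
		return nil, fmt.Errorf("expected a CronJob but got a %T", old)
	}

	// Hypothetical rule: forbid changing spec.schedule after creation.
	if oldCronJob.Spec.Schedule != r.Spec.Schedule {
		return nil, apierrors.NewInvalid(
			schema.GroupKind{Group: "batch.tutorial.kubebuilder.io", Kind: "CronJob"},
			r.Name,
			field.ErrorList{
				field.Forbidden(field.NewPath("spec").Child("schedule"), "schedule is immutable"),
			})
	}
	return nil, r.validateCronJob()
}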

We validate the name and the spec of the CronJob.

func (r *CronJob) validateCronJob() error {
	var allErrs field.ErrorList
	if err := r.validateCronJobName(); err != nil {
		allErrs = append(allErrs, err)
	}
	if err := r.validateCronJobSpec(); err != nil {
		allErrs = append(allErrs, err)
	}
	if len(allErrs) == 0 {
		return nil
	}

	return apierrors.NewInvalid(
		schema.GroupKind{Group: "batch.tutorial.kubebuilder.io", Kind: "CronJob"},
		r.Name, allErrs)
}

Some fields are declaratively validated by OpenAPI schema. You can find kubebuilder
validation markers (prefixed with // +kubebuilder:validation ) in the Designing an API
section. You can find all of the kubebuilder-supported markers for declaring validation by
running controller-gen crd -w , or here.
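For comparison, declarative validation on a spec field looks roughly like this. The markers below are illustrative (the tutorial’s actual markers live in the Designing an API section); the constraints they express are enforced by the API server through the generated OpenAPI schema, with no webhook involved.

// Illustrative only: declarative validation markers on spec fields.

//+kubebuilder:validation:Minimum=0
// The number of failed finished jobs to retain.
FailedJobsHistoryLimit *int32 `json:"failedJobsHistoryLimit,omitempty"`

//+kubebuilder:validation:MinLength=0
// The schedule in Cron format.
Schedule string `json:"schedule"`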
func (r *CronJob) validateCronJobSpec() *field.Error {
	// The field helpers from the kubernetes API machinery help us return nicely
	// structured validation errors.
	return validateScheduleFormat(
		r.Spec.Schedule,
		field.NewPath("spec").Child("schedule"))
}

We’ll need to validate that the cron schedule is well-formatted.

func validateScheduleFormat(schedule string, fldPath *field.Path) *field.Error {
	if _, err := cron.ParseStandard(schedule); err != nil {
		return field.Invalid(fldPath, schedule, err.Error())
	}
	return nil
}

// Validate object name (hidden) ◀

Running and deploying the controller

Optional

If you made any changes to the API definitions, then before proceeding, regenerate the
manifests (e.g. CRs, CRDs) with

make manifests

To test out the controller, we can run it locally against the cluster. Before we do so, though,
we’ll need to install our CRDs, as per the quick start. This will automatically update the YAML
manifests using controller-tools, if needed:

make install

Now that we’ve installed our CRDs, we can run the controller against our cluster. This will use
whatever credentials that we connect to the cluster with, so we don’t need to worry about
RBAC just yet.

Running webhooks locally

If you want to run the webhooks locally, you’ll have to generate certificates for serving the
webhooks, and place them in the right directory
( /tmp/k8s-webhook-server/serving-certs/tls.{crt,key} , by default).

If you’re not running a local API server, you’ll also need to figure out how to proxy traffic
from the remote cluster to your local webhook server. For this reason, we generally
recommend disabling webhooks when doing your local code-run-test cycle, as we do
below.
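If you do decide to serve webhooks locally, one way to point the webhook server at a certificate directory explicitly is through the manager options. This is only a sketch under the assumption of a recent controller-runtime release where the server is built from webhook.Options; the exact options have moved between controller-runtime versions, so verify against the version in your go.mod.

// Sketch only: configure the webhook server's certificate directory explicitly.
// The path shown matches the documented default and is easy to swap out.
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
	Scheme: scheme,
	WebhookServer: webhook.NewServer(webhook.Options{
		CertDir: "/tmp/k8s-webhook-server/serving-certs",
	}),
})
if err != nil {
	setupLog.Error(err, "unable to start manager")
	os.Exit(1)
}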

In a separate terminal, run

export ENABLE_WEBHOOKS=false
make run

You should see logs from the controller about starting up, but it won’t do anything just yet.

At this point, we need a CronJob to test with. Let’s write a sample to
config/samples/batch_v1_cronjob.yaml , and use that:

apiVersion: batch.tutorial.kubebuilder.io/v1
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/name: cronjob
    app.kubernetes.io/instance: cronjob-sample
    app.kubernetes.io/part-of: project
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: project
  name: cronjob-sample
spec:
  schedule: "*/1 * * * *"
  startingDeadlineSeconds: 60
  concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              args:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

kubectl create -f config/samples/batch_v1_cronjob.yaml

At this point, you should see a flurry of activity. If you watch the changes, you should see your
cronjob running, and updating status:

kubectl get cronjob.batch.tutorial.kubebuilder.io -o yaml


kubectl get job

Now that we know it’s working, we can run it in the cluster. Stop the make run invocation, and
run
make docker-build docker-push IMG=<some-registry>/<project-name>:tag
make deploy IMG=<some-registry>/<project-name>:tag

registry permission

The image must be published to the personal registry you specified, and your working
environment must be able to pull it from there. Make sure you have the proper permissions
to the registry if the above commands don’t work.

If we list cronjobs again like we did before, we should see the controller functioning again!

Deploying cert-manager
We suggest using cert-manager for provisioning the certificates for the webhook server. Other
solutions should also work as long as they put the certificates in the desired location.

You can follow the cert-manager documentation to install it.

cert-manager also has a component called CA Injector, which is responsible for injecting the
CA bundle into the MutatingWebhookConfiguration / ValidatingWebhookConfiguration .

To accomplish that, you need to use an annotation with key cert-manager.io/inject-ca-from
in the MutatingWebhookConfiguration / ValidatingWebhookConfiguration objects. The
value of the annotation should point to an existing certificate request instance in the format
of <certificate-namespace>/<certificate-name> .

This is the kustomize patch we used for annotating the MutatingWebhookConfiguration /
ValidatingWebhookConfiguration objects.

# This patch adds annotations to the admission webhook config and
# CERTIFICATE_NAMESPACE and CERTIFICATE_NAME will be substituted by kustomize
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/name: mutatingwebhookconfiguration
    app.kubernetes.io/instance: mutating-webhook-configuration
    app.kubernetes.io/component: webhook
    app.kubernetes.io/created-by: project
    app.kubernetes.io/part-of: project
    app.kubernetes.io/managed-by: kustomize
  name: mutating-webhook-configuration
  annotations:
    cert-manager.io/inject-ca-from: CERTIFICATE_NAMESPACE/CERTIFICATE_NAME
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/name: validatingwebhookconfiguration
    app.kubernetes.io/instance: validating-webhook-configuration
    app.kubernetes.io/component: webhook
    app.kubernetes.io/created-by: project
    app.kubernetes.io/part-of: project
    app.kubernetes.io/managed-by: kustomize
  name: validating-webhook-configuration
  annotations:
    cert-manager.io/inject-ca-from: CERTIFICATE_NAMESPACE/CERTIFICATE_NAME

Deploying Admission Webhooks

Kind Cluster
It is recommended to develop your webhook with a kind cluster for faster iteration. Why?

You can bring up a multi-node cluster locally within 1 minute.
You can tear it down in seconds.
You don’t need to push your images to a remote registry.

cert-manager
You need to follow this to install the cert-manager bundle.
Build your image
Run the following command to build your image locally.

make docker-build docker-push IMG=<some-registry>/<project-name>:tag

You don’t need to push the image to a remote container registry if you are using a kind
cluster. You can directly load your local image to your specified kind cluster:

kind load docker-image <your-image-name>:tag --name <your-kind-cluster-name>

Deploy Webhooks
You need to enable the webhook and cert manager configuration through kustomize.
config/default/kustomization.yaml should now look like the following:
# Adds namespace to all resources.
namespace: project-system

# Value of this field is prepended to the
# names of all resources, e.g. a deployment named
# "wordpress" becomes "alices-wordpress".
# Note that it should also match with the prefix (text before '-') of the namespace
# field above.
namePrefix: project-

# Labels to add to all resources and selectors.
#labels:
#- includeSelectors: true
#  pairs:
#    someName: someValue

resources:
- ../crd
- ../rbac
- ../manager
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
- ../webhook
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required.
- ../certmanager
# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
- ../prometheus

patchesStrategicMerge:
# Protect the /metrics endpoint by putting it behind auth.
# If you want your controller-manager to expose the /metrics
# endpoint w/o any authn/z, please comment the following line.
- manager_auth_proxy_patch.yaml

# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
- manager_webhook_patch.yaml

# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'.
# Uncomment 'CERTMANAGER' sections in crd/kustomization.yaml to enable the CA injection in the admission webhooks.
# 'CERTMANAGER' needs to be enabled to use ca injection
- webhookcainjection_patch.yaml

# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.
# Uncomment the following replacements to add the cert-manager CA injection annotations
replacements:
  - source: # Add cert-manager annotation to ValidatingWebhookConfiguration, MutatingWebhookConfiguration and CRDs
      kind: Certificate
      group: cert-manager.io
      version: v1
      name: serving-cert # this name should match the one in certificate.yaml
      fieldPath: .metadata.namespace # namespace of the certificate CR
    targets:
      - select:
          kind: ValidatingWebhookConfiguration
        fieldPaths:
          - .metadata.annotations.[cert-manager.io/inject-ca-from]
        options:
          delimiter: '/'
          index: 0
          create: true
      - select:
          kind: MutatingWebhookConfiguration
        fieldPaths:
          - .metadata.annotations.[cert-manager.io/inject-ca-from]
        options:
          delimiter: '/'
          index: 0
          create: true
      - select:
          kind: CustomResourceDefinition
        fieldPaths:
          - .metadata.annotations.[cert-manager.io/inject-ca-from]
        options:
          delimiter: '/'
          index: 0
          create: true
  - source:
      kind: Certificate
      group: cert-manager.io
      version: v1
      name: serving-cert # this name should match the one in certificate.yaml
      fieldPath: .metadata.name
    targets:
      - select:
          kind: ValidatingWebhookConfiguration
        fieldPaths:
          - .metadata.annotations.[cert-manager.io/inject-ca-from]
        options:
          delimiter: '/'
          index: 1
          create: true
      - select:
          kind: MutatingWebhookConfiguration
        fieldPaths:
          - .metadata.annotations.[cert-manager.io/inject-ca-from]
        options:
          delimiter: '/'
          index: 1
          create: true
      - select:
          kind: CustomResourceDefinition
        fieldPaths:
          - .metadata.annotations.[cert-manager.io/inject-ca-from]
        options:
          delimiter: '/'
          index: 1
          create: true
  - source: # Add cert-manager annotation to the webhook Service
      kind: Service
      version: v1
      name: webhook-service
      fieldPath: .metadata.name # name of the service
    targets:
      - select:
          kind: Certificate
          group: cert-manager.io
          version: v1
        fieldPaths:
          - .spec.dnsNames.0
          - .spec.dnsNames.1
        options:
          delimiter: '.'
          index: 0
          create: true
  - source:
      kind: Service
      version: v1
      name: webhook-service
      fieldPath: .metadata.namespace # namespace of the service
    targets:
      - select:
          kind: Certificate
          group: cert-manager.io
          version: v1
        fieldPaths:
          - .spec.dnsNames.0
          - .spec.dnsNames.1
        options:
          delimiter: '.'
          index: 1
          create: true

And config/crd/kustomization.yaml should now look like the following:


# This kustomization.yaml is not intended to be run by itself,
# since it depends on service name and namespace that are out of this kustomize package.
# It should be run by config/default
resources:
- bases/batch.tutorial.kubebuilder.io_cronjobs.yaml
#+kubebuilder:scaffold:crdkustomizeresource

patches:
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix.
# patches here are for enabling the conversion webhook for each CRD
- patches/webhook_in_cronjobs.yaml
#+kubebuilder:scaffold:crdkustomizewebhookpatch

# [CERTMANAGER] To enable cert-manager, uncomment all the sections with [CERTMANAGER] prefix.
# patches here are for enabling the CA injection for each CRD
- patches/cainjection_in_cronjobs.yaml
#+kubebuilder:scaffold:crdkustomizecainjectionpatch

# the following config is for teaching kustomize how to do kustomization for CRDs.
configurations:
- kustomizeconfig.yaml

Now you can deploy it to your cluster by

make deploy IMG=<some-registry>/<project-name>:tag

Wait a while till the webhook pod comes up and the certificates are provisioned. It usually
completes within 1 minute.

Now you can create a valid CronJob to test your webhooks. The creation should successfully
go through.

kubectl create -f config/samples/batch_v1_cronjob.yaml

You can also try to create an invalid CronJob (e.g. use an ill-formatted schedule field). You
should see a creation failure with a validation error.

! The Bootstrapping Problem

If you are deploying a webhook for pods in the same cluster, be careful about the
bootstrapping problem, since the creation request of the webhook pod would be sent to
the webhook pod itself, which hasn’t come up yet.

To make it work, you can either use namespaceSelector if your kubernetes version is 1.9+
or use objectSelector if your kubernetes version is 1.15+ to skip itself.
Writing controller tests
Testing Kubernetes controllers is a big subject, and the boilerplate testing files generated for
you by kubebuilder are fairly minimal.

To walk you through integration testing patterns for Kubebuilder-generated controllers, we
will revisit the CronJob we built in our first tutorial and write a simple test for it.

The basic approach is that, in your generated suite_test.go file, you will use envtest to
create a local Kubernetes API server, instantiate and run your controllers, and then write
additional *_test.go files to test it using Ginkgo.

If you want to tinker with how your envtest cluster is configured, see section Configuring
envtest for integration tests as well as the envtest docs .

Test Environment Setup


$ vim ../../cronjob-tutorial/testdata/project/internal/controller/suite_test.go

// Apache License (hidden) ◀

// Imports (hidden) ◀

Now, let’s go through the code generated.

var (
	cfg       *rest.Config
	k8sClient client.Client // You'll be using this client in your tests.
	testEnv   *envtest.Environment
	ctx       context.Context
	cancel    context.CancelFunc
)

func TestControllers(t *testing.T) {
	RegisterFailHandler(Fail)

	RunSpecs(t, "Controller Suite")
}

var _ = BeforeSuite(func() {
	logf.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))

	ctx, cancel = context.WithCancel(context.TODO())

First, the envtest cluster is configured to read CRDs from the CRD directory Kubebuilder
scaffolds for you.
By("bootstrapping test environment")
testEnv = &envtest.Environment{
CRDDirectoryPaths: []string{filepath.Join("..", "..", "config",
"crd", "bases")},
ErrorIfCRDPathMissing: true,
}

Then, we start the envtest cluster.

	var err error
	// cfg is defined in this file globally.
	cfg, err = testEnv.Start()
	Expect(err).NotTo(HaveOccurred())
	Expect(cfg).NotTo(BeNil())

The autogenerated test code will add the CronJob Kind schema to the default client-go k8s
scheme. This ensures that the CronJob API/Kind will be used in our test controller.

err = batchv1.AddToScheme(scheme.Scheme)
Expect(err).NotTo(HaveOccurred())

After the schemas, you will see the following marker. This marker is what allows new schemas
to be added here automatically when a new API is added to the project.

//+kubebuilder:scaffold:scheme

A client is created for our test CRUD operations.

	k8sClient, err = client.New(cfg, client.Options{Scheme: scheme.Scheme})
	Expect(err).NotTo(HaveOccurred())
	Expect(k8sClient).NotTo(BeNil())

One thing that this autogenerated file is missing, however, is a way to actually start your
controller. The code above will set up a client for interacting with your custom Kind, but will
not be able to test your controller behavior. If you want to test your custom controller logic,
you’ll need to add some familiar-looking manager logic to your BeforeSuite() function, so you
can register your custom controller to run on this test cluster.

You may notice that the code below runs your controller with nearly identical logic to your
CronJob project’s main.go! The only difference is that the manager is started in a separate
goroutine so it does not block the cleanup of envtest when you’re done running your tests.

Note that we set up both a “live” k8s client and a separate client from the manager. This is
because when making assertions in tests, you generally want to assert against the live state of
the API server. If you use the client from the manager ( k8sManager.GetClient ), you’d end up
asserting against the contents of the cache instead, which is slower and can introduce
flakiness into your tests. We could use the manager’s APIReader to accomplish the same
thing, but that would leave us with two clients in our test assertions and setup (one for
reading, one for writing), and it’d be easy to make mistakes.

Note that we keep the reconciler running against the manager’s cache client, though -- we
want our controller to behave as it would in production, and we use features of the cache (like
indices) in our controller which aren’t available when talking directly to the API server.

	k8sManager, err := ctrl.NewManager(cfg, ctrl.Options{
		Scheme: scheme.Scheme,
	})
	Expect(err).ToNot(HaveOccurred())

	err = (&CronJobReconciler{
		Client: k8sManager.GetClient(),
		Scheme: k8sManager.GetScheme(),
	}).SetupWithManager(k8sManager)
	Expect(err).ToNot(HaveOccurred())

	go func() {
		defer GinkgoRecover()
		err = k8sManager.Start(ctx)
		Expect(err).ToNot(HaveOccurred(), "failed to run manager")
	}()

})

Kubebuilder also generates boilerplate functions for cleaning up envtest and actually running
your test files in your controllers/ directory. You won’t need to touch these.

var _ = AfterSuite(func() {
	cancel()
	By("tearing down the test environment")
	err := testEnv.Stop()
	Expect(err).NotTo(HaveOccurred())
})

Now that you have your controller running on a test cluster and a client ready to perform
operations on your CronJob, we can start writing integration tests!

Testing your Controller’s Behavior


$ vim ../../cronjob-tutorial/testdata/project/internal/controller/cronjob_controller_test.go

// Apache License (hidden) ◀

Ideally, we should have one <kind>_controller_test.go for each controller scaffolded and
called in the suite_test.go . So, let’s write our example test for the CronJob controller
( cronjob_controller_test.go ).

// Imports (hidden) ◀
The first step to writing a simple integration test is to actually create an instance of CronJob
you can run tests against. Note that to create a CronJob, you’ll need to create a stub CronJob
struct that contains your CronJob’s specifications.

Note that when we create a stub CronJob, the CronJob also needs stubs of its required
downstream objects. Without the stubbed Job template spec and the Pod template spec
below, the Kubernetes API will not be able to create the CronJob.
var _ = Describe("CronJob controller", func() {

// Define utility constants for object names and testing timeouts/durations


and intervals.
const (
CronjobName = "test-cronjob"
CronjobNamespace = "default"
JobName = "test-job"

timeout = time.Second * 10
duration = time.Second * 10
interval = time.Millisecond * 250
)

Context("When updating CronJob Status", func() {


It("Should increase CronJob Status.Active count when new Jobs are
created", func() {
By("By creating a new CronJob")
ctx := context.Background()
cronJob := &cronjobv1.CronJob{
TypeMeta: metav1.TypeMeta{
APIVersion: "batch.tutorial.kubebuilder.io/v1",
Kind: "CronJob",
},
ObjectMeta: metav1.ObjectMeta{
Name: CronjobName,
Namespace: CronjobNamespace,
},
Spec: cronjobv1.CronJobSpec{
Schedule: "1 * * * *",
JobTemplate: batchv1.JobTemplateSpec{
Spec: batchv1.JobSpec{
// For simplicity, we only fill out the required
fields.
Template: v1.PodTemplateSpec{
Spec: v1.PodSpec{
// For simplicity, we only fill out the
required fields.
Containers: []v1.Container{
{
Name: "test-container",
Image: "test-image",
},
},
RestartPolicy: v1.RestartPolicyOnFailure,
},
},
},
},
},
}
Expect(k8sClient.Create(ctx, cronJob)).Should(Succeed())

After creating this CronJob, let’s check that the CronJob’s Spec fields match what we passed in.
Note that, because the k8s apiserver may not have finished creating a CronJob after our
Create() call from earlier, we will use Gomega’s Eventually() testing function instead of
Expect() to give the apiserver an opportunity to finish creating our CronJob.

Eventually() will repeatedly run the function provided as an argument every interval
until (a) the function’s output matches what’s expected in the subsequent Should()
call, or (b) the number of attempts * interval period exceeds the provided timeout value.

In the examples below, timeout and interval are Go Duration values of our choosing.

			cronjobLookupKey := types.NamespacedName{Name: CronjobName, Namespace: CronjobNamespace}
			createdCronjob := &cronjobv1.CronJob{}

			// We'll need to retry getting this newly created CronJob, given that creation may not immediately happen.
			Eventually(func() bool {
				err := k8sClient.Get(ctx, cronjobLookupKey, createdCronjob)
				if err != nil {
					return false
				}
				return true
			}, timeout, interval).Should(BeTrue())
			// Let's make sure our Schedule string value was properly converted/handled.
			Expect(createdCronjob.Spec.Schedule).Should(Equal("1 * * * *"))

Now that we’ve created a CronJob in our test cluster, the next step is to write a test that
actually tests our CronJob controller’s behavior. Let’s test the CronJob controller’s logic
responsible for updating CronJob.Status.Active with actively running jobs. We’ll verify that
when a CronJob has a single active downstream Job, its CronJob.Status.Active field contains a
reference to this Job.

First, we should get the test CronJob we created earlier, and verify that it currently does not
have any active jobs. We use Gomega’s Consistently() check here to ensure that the active
job count remains 0 over a duration of time.

By("By checking the CronJob has zero active Jobs")


Consistently(func() (int, error) {
err := k8sClient.Get(ctx, cronjobLookupKey, createdCronjob)
if err != nil {
return -1, err
}
return len(createdCronjob.Status.Active), nil
}, duration, interval).Should(Equal(0))

Next, we actually create a stubbed Job that will belong to our CronJob, as well as its
downstream template specs. We set the Job’s status’s “Active” count to 2 to simulate the Job
running two pods, which means the Job is actively running.
We then take the stubbed Job and set its owner reference to point to our test CronJob. This
ensures that the test Job belongs to, and is tracked by, our test CronJob. Once that’s done, we
create our new Job instance.

By("By creating a new Job")


testJob := &batchv1.Job{
ObjectMeta: metav1.ObjectMeta{
Name: JobName,
Namespace: CronjobNamespace,
},
Spec: batchv1.JobSpec{
Template: v1.PodTemplateSpec{
Spec: v1.PodSpec{
// For simplicity, we only fill out the required
fields.
Containers: []v1.Container{
{
Name: "test-container",
Image: "test-image",
},
},
RestartPolicy: v1.RestartPolicyOnFailure,
},
},
},
Status: batchv1.JobStatus{
Active: 2,
},
}

			// Note that your CronJob’s GroupVersionKind is required to set up this owner reference.
			kind := reflect.TypeOf(cronjobv1.CronJob{}).Name()
			gvk := cronjobv1.GroupVersion.WithKind(kind)

			controllerRef := metav1.NewControllerRef(createdCronjob, gvk)
			testJob.SetOwnerReferences([]metav1.OwnerReference{*controllerRef})
			Expect(k8sClient.Create(ctx, testJob)).Should(Succeed())

Adding this Job to our test CronJob should trigger our controller’s reconciler logic. After that,
we can write a test that evaluates whether our controller eventually updates our CronJob’s
Status field as expected!
By("By checking that the CronJob has one active Job")
Eventually(func() ([]string, error) {
err := k8sClient.Get(ctx, cronjobLookupKey, createdCronjob)
if err != nil {
return nil, err
}

names := []string{}
for _, job := range createdCronjob.Status.Active {
names = append(names, job.Name)
}
return names, nil
}, timeout, interval).Should(ConsistOf(JobName), "should list our
active job %s in the active jobs list in status", JobName)
})
})

})

After writing all this code, you can run go test ./... in your controllers/ directory again
to run your new test!

This Status update example above demonstrates a general testing strategy for a custom Kind
with downstream objects. By this point, you hopefully have learned the following methods for
testing your controller behavior:

Setting up your controller to run on an envtest cluster
Writing stubs for creating test objects
Isolating changes to an object to test specific controller behavior

Advanced Examples
There are more involved examples of using envtest to rigorously test controller behavior.
Examples include:

Azure Databricks Operator: see their fully fleshed-out suite_test.go as well as any
*_test.go file in that directory like this one.

Epilogue
By this point, we’ve got a pretty full-featured implementation of the CronJob controller, made
use of most of the features of Kubebuilder, and written tests for the controller using envtest.

If you want more, head over to the Multi-Version Tutorial to learn how to add new API
versions to a project.

Additionally, you can try the following steps on your own -- we’ll have a tutorial section on
them Soon™:

adding additional printer columns to kubectl get
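For instance, printer columns are declared with markers on the root type. The markers below are illustrative only (they are not yet part of the tutorial’s code); after regenerating the CRD, kubectl get cronjob would show the extra columns.

// Illustrative only: extra printer columns for `kubectl get cronjob`.
//+kubebuilder:printcolumn:name="Schedule",type=string,JSONPath=".spec.schedule"
//+kubebuilder:printcolumn:name="Suspend",type=boolean,JSONPath=".spec.suspend"
//+kubebuilder:object:root=true
type CronJob struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   CronJobSpec   `json:"spec,omitempty"`
	Status CronJobStatus `json:"status,omitempty"`
}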

Tutorial: Multi-Version API


Most projects start out with an alpha API that changes release to release. However, eventually,
most projects will need to move to a more stable API. Once your API is stable though, you
can’t make breaking changes to it. That’s where API versions come into play.

Let’s make some changes to the CronJob API spec and make sure all the different versions
are supported by our CronJob project.

If you haven’t already, make sure you’ve gone through the base CronJob Tutorial.

Following Along vs Jumping Ahead

Note that most of this tutorial is generated from literate Go files that form a runnable
project, and live in the book source directory: docs/book/src/multiversion-
tutorial/testdata/project.

! Minimum Kubernetes Versions Incoming!

CRD conversion support was introduced as an alpha feature in Kubernetes 1.13 (which
means it’s not on by default, and needs to be enabled via a feature gate), and became
beta in Kubernetes 1.15 (which means it’s on by default).

If you’re on Kubernetes 1.13-1.14, make sure to enable the feature gate. If you’re on
Kubernetes 1.12 or below, you’ll need a new cluster to use conversion. Check out the kind
instructions for how to set up an all-in-one cluster.

Next, let’s figure out what changes we want to make...

Changing things up
A fairly common change in a Kubernetes API is to take some data that used to be
unstructured or stored in some special string format, and change it to structured data. Our
schedule field fits the bill quite nicely for this -- right now, in v1 , our schedules look like

schedule: "*/1 * * * *"

That’s a pretty textbook example of a special string format (it’s also pretty unreadable unless
you’re a Unix sysadmin).

Let’s make it a bit more structured. According to our CronJob code, we support “standard”
Cron format.
In Kubernetes, all versions must be safely round-trippable through each other. This means
that if we convert from version 1 to version 2, and then back to version 1, we must not lose
information. Thus, any change we make to our API must be compatible with whatever we
supported in v1, and we also need to make sure anything we add in v2 is supported in v1. In
some cases, this means we need to add new fields to v1, but in our case, we won’t have to,
since we’re not adding new functionality.

Keeping all that in mind, let’s convert our example above to be slightly more structured:

schedule:
minute: */1

Now, at least, we’ve got labels for each of our fields, but we can still easily support all the
different syntax for each field.

We’ll need a new API version for this change. Let’s call it v2:

kubebuilder create api --group batch --version v2 --kind CronJob

Press y for “Create Resource” and n for “Create Controller”.

Now, let’s copy over our existing types, and make the change:

$ vim project/api/v2/cronjob_types.go

// Apache License (hidden) ◀

Since we’re in a v2 package, controller-gen will assume this is for the v2 version automatically.
We could override that with the +versionName marker.
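For illustration, that override is a package-level marker, typically placed in the package doc comment in groupversion_info.go. It isn’t needed for our project, since the package is already named v2:

// Illustrative only: pin the API version name explicitly, regardless of the package name.
// +versionName=v2
package v2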

package v2

// Imports (hidden) ◀

We’ll leave our spec largely unchanged, except to change the schedule field to a new type.

// CronJobSpec defines the desired state of CronJob
type CronJobSpec struct {
	// The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron.
	Schedule CronSchedule `json:"schedule"`

// The rest of Spec (hidden) ◀

Next, we’ll need to define a type to hold our schedule. Based on our proposed YAML above,
it’ll have a field for each corresponding Cron “field”.
// describes a Cron schedule.
type CronSchedule struct {
	// specifies the minute during which the job executes.
	// +optional
	Minute *CronField `json:"minute,omitempty"`
	// specifies the hour during which the job executes.
	// +optional
	Hour *CronField `json:"hour,omitempty"`
	// specifies the day of the month during which the job executes.
	// +optional
	DayOfMonth *CronField `json:"dayOfMonth,omitempty"`
	// specifies the month during which the job executes.
	// +optional
	Month *CronField `json:"month,omitempty"`
	// specifies the day of the week during which the job executes.
	// +optional
	DayOfWeek *CronField `json:"dayOfWeek,omitempty"`
}

Finally, we’ll define a wrapper type to represent a field. We could attach additional validation
to this field, but for now we’ll just use it for documentation purposes.

// represents a Cron field specifier.


type CronField string
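If we did later want to attach validation here, a declarative marker on the wrapper type would be enough. The marker below is illustrative only and is not part of the tutorial’s code:

// Illustrative only: cap the length of a Cron field specifier declaratively.
// +kubebuilder:validation:MaxLength=32
type CronField string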

// Other Types (hidden) ◀

Storage Versions
$ vim project/api/v1/cronjob_types.go

// Apache License (hidden) ◀

package v1

// Imports (hidden) ◀

// old stuff (hidden) ◀

Since we’ll have more than one version, we’ll need to mark a storage version. This is the
version that the Kubernetes API server uses to store our data. We’ll choose the v1 version for
our project.

We’ll use the +kubebuilder:storageversion marker to do this.

Note that multiple versions may exist in storage if they were written before the storage
version changes -- changing the storage version only affects how objects are created/updated
after the change.
//+kubebuilder:object:root=true
//+kubebuilder:subresource:status
//+kubebuilder:storageversion

// CronJob is the Schema for the cronjobs API
type CronJob struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   CronJobSpec   `json:"spec,omitempty"`
	Status CronJobStatus `json:"status,omitempty"`
}

// old stuff (hidden) ◀

Now that we’ve got our types in place, we’ll need to set up conversion...

Hubs, spokes, and other wheel metaphors


Since we now have two different versions, and users can request either version, we’ll have to
define a way to convert between our versions. For CRDs, this is done using a webhook, similar
to the defaulting and validating webhooks we defined in the base tutorial. Like before,
controller-runtime will help us wire together the nitty-gritty bits, we just have to implement
the actual conversion.

Before we do that, though, we’ll need to understand how controller-runtime thinks about
versions. Namely:

Complete graphs are insufficiently nautical


A simple approach to defining conversion might be to define conversion functions to convert
between each of our versions. Then, whenever we need to convert, we’d look up the
appropriate function, and call it to run the conversion.

This works fine when we just have two versions, but what if we had 4 types? 8 types? That’d be
a lot of conversion functions.

Instead, controller-runtime models conversion in terms of a “hub and spoke” model -- we
mark one version as the “hub”, and all other versions just define conversion to and from the
hub:

(diagram: direct conversions between every pair of versions become conversions to and from the hub)

Then, if we have to convert between two non-hub versions, we first convert to the hub
version, and then to our desired version:

This cuts down on the number of conversion functions that we have to define, and is modeled
off of what Kubernetes does internally.
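Concretely, controller-runtime expresses this model with two small interfaces in sigs.k8s.io/controller-runtime/pkg/conversion, paraphrased below for illustration (see the package docs for the authoritative definitions):

// Paraphrased for illustration from sigs.k8s.io/controller-runtime/pkg/conversion.

// Hub marks the version that every spoke converts to and from.
type Hub interface {
	runtime.Object
	Hub()
}

// Convertible is implemented by every spoke version.
type Convertible interface {
	runtime.Object
	ConvertTo(dst Hub) error
	ConvertFrom(src Hub) error
}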

What does that have to do with Webhooks?


When API clients, like kubectl or your controller, request a particular version of your resource,
the Kubernetes API server needs to return a result that’s of that version. However, that
version might not match the version stored by the API server.

In that case, the API server needs to know how to convert between the desired version and
the stored version. Since the conversions aren’t built in for CRDs, the Kubernetes API server
calls out to a webhook to do the conversion instead. For Kubebuilder, this webhook is
implemented by controller-runtime, and performs the hub-and-spoke conversions that we
discussed above.

Now that we have the model for conversion down pat, we can actually implement our
conversions.
Implementing conversion
With our model for conversion in place, it’s time to actually implement the conversion
functions. We’ll put them in a file called cronjob_conversion.go next to our
cronjob_types.go file, to avoid cluttering up our main types file with extra functions.

Hub...
First, we’ll implement the hub. We’ll choose the v1 version as the hub:

$ vim project/api/v1/cronjob_conversion.go

// Apache License (hidden) ◀

package v1

Implementing the hub method is pretty easy -- we just have to add an empty method called
Hub() to serve as a marker. We could also just put this inline in our cronjob_types.go file.

// Hub marks this type as a conversion hub.
func (*CronJob) Hub() {}

... and Spokes


Then, we’ll implement our spoke, the v2 version:

$ vim project/api/v2/cronjob_conversion.go

// Apache License (hidden) ◀

package v2

// Imports (hidden) ◀

Our “spoke” versions need to implement the Convertible interface. Namely, they’ll need
ConvertTo and ConvertFrom methods to convert to/from the hub version.

ConvertTo is expected to modify its argument to contain the converted object. Most of the
conversion is straightforward copying, except for converting our changed field.
// ConvertTo converts this CronJob to the Hub version (v1).
func (src *CronJob) ConvertTo(dstRaw conversion.Hub) error {
	dst := dstRaw.(*v1.CronJob)

	sched := src.Spec.Schedule
	scheduleParts := []string{"*", "*", "*", "*", "*"}
	if sched.Minute != nil {
		scheduleParts[0] = string(*sched.Minute)
	}
	if sched.Hour != nil {
		scheduleParts[1] = string(*sched.Hour)
	}
	if sched.DayOfMonth != nil {
		scheduleParts[2] = string(*sched.DayOfMonth)
	}
	if sched.Month != nil {
		scheduleParts[3] = string(*sched.Month)
	}
	if sched.DayOfWeek != nil {
		scheduleParts[4] = string(*sched.DayOfWeek)
	}
	dst.Spec.Schedule = strings.Join(scheduleParts, " ")

	// rote conversion (hidden) ◀

	return nil
}

ConvertFrom is expected to modify its receiver to contain the converted object. Most of the
conversion is straightforward copying, except for converting our changed field.

// ConvertFrom converts from the Hub version (v1) to this version.
func (dst *CronJob) ConvertFrom(srcRaw conversion.Hub) error {
	src := srcRaw.(*v1.CronJob)

	schedParts := strings.Split(src.Spec.Schedule, " ")
	if len(schedParts) != 5 {
		return fmt.Errorf("invalid schedule: not a standard 5-field schedule")
	}
	partIfNeeded := func(raw string) *CronField {
		if raw == "*" {
			return nil
		}
		part := CronField(raw)
		return &part
	}
	dst.Spec.Schedule.Minute = partIfNeeded(schedParts[0])
	dst.Spec.Schedule.Hour = partIfNeeded(schedParts[1])
	dst.Spec.Schedule.DayOfMonth = partIfNeeded(schedParts[2])
	dst.Spec.Schedule.Month = partIfNeeded(schedParts[3])
	dst.Spec.Schedule.DayOfWeek = partIfNeeded(schedParts[4])

	// rote conversion (hidden) ◀

	return nil
}
Now that we’ve got our conversions in place, all that we need to do is wire up our main to
serve the webhook!

Setting up the webhooks


Our conversion is in place, so all that’s left is to tell controller-runtime about our conversion.

Normally, we’d run

kubebuilder create webhook --group batch --version v1 --kind CronJob --conversion

to scaffold out the webhook setup. However, we’ve already got webhook setup, from when we
built our defaulting and validating webhooks!

Webhook setup...
$ vim project/api/v1/cronjob_webhook.go

// Apache License (hidden) ◀

// Go imports (hidden) ◀

var cronjoblog = logf.Log.WithName("cronjob-resource")

This setup doubles as setup for our conversion webhooks: as long as our types implement the
Hub and Convertible interfaces, a conversion webhook will be registered.

func (r *CronJob) SetupWebhookWithManager(mgr ctrl.Manager) error {
	return ctrl.NewWebhookManagedBy(mgr).
		For(r).
		Complete()
}

// Existing Defaulting and Validation (hidden) ◀

...and main.go
Similarly, our existing main file is sufficient:

$ vim project/cmd/main.go

// Apache License (hidden) ◀

// Imports (hidden) ◀

// existing setup (hidden) ◀

func main() {
// existing setup (hidden) ◀

Our existing call to SetupWebhookWithManager registers our conversion webhooks with the
manager, too.

if os.Getenv("ENABLE_WEBHOOKS") != "false" {
if err = (&batchv1.CronJob{}).SetupWebhookWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create webhook", "webhook", "CronJob")
os.Exit(1)
}
if err = (&batchv2.CronJob{}).SetupWebhookWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create webhook", "webhook", "CronJob")
os.Exit(1)
}
}
//+kubebuilder:scaffold:builder

// existing setup (hidden) ◀

Everything’s set up and ready to go! All that’s left now is to test out our webhooks.

Deployment and Testing


Before we can test out our conversion, we’ll need to enable conversion in our CRD:

Kubebuilder generates Kubernetes manifests under the config directory with webhook bits
disabled. To enable them, we need to:

Enable patches/webhook_in_<kind>.yaml and patches/cainjection_in_<kind>.yaml
in the config/crd/kustomization.yaml file.

Enable ../certmanager and ../webhook directories under the bases section in the
config/default/kustomization.yaml file.

Enable manager_webhook_patch.yaml and webhookcainjection_patch.yaml under the
patches section in the config/default/kustomization.yaml file.

Enable all the vars under the CERTMANAGER section in the
config/default/kustomization.yaml file.

Additionally, if present in our Makefile, we’ll need to set the CRD_OPTIONS variable to just
"crd" , removing the trivialVersions option (this ensures that we actually generate
validation for each version, instead of telling Kubernetes that they’re the same):

CRD_OPTIONS ?= "crd"

Now we have all our code changes and manifests in place, so let’s deploy it to the cluster and
test it out.
You’ll need cert-manager installed (version 0.9.0+ ) unless you’ve got some other certificate
management solution. The Kubebuilder team has tested the instructions in this tutorial with
the 0.9.0-alpha.0 release.

Once all our ducks are in a row with certificates, we can run make install deploy (as normal)
to deploy all the bits (CRD, controller-manager deployment) onto the cluster.

Testing
Once all of the bits are up and running on the cluster with conversion enabled, we can test out
our conversion by requesting different versions.

We’ll make a v2 version based on our v1 version (put it under config/samples )

apiVersion: batch.tutorial.kubebuilder.io/v2
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/name: cronjob
    app.kubernetes.io/instance: cronjob-sample
    app.kubernetes.io/part-of: project
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: project
  name: cronjob-sample
spec:
  schedule:
    minute: "*/1"
  startingDeadlineSeconds: 60
  concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              args:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

Then, we can create it on the cluster:

kubectl apply -f config/samples/batch_v2_cronjob.yaml

If we’ve done everything correctly, it should create successfully, and we should be able to
fetch it using both the v2 resource

kubectl get cronjobs.v2.batch.tutorial.kubebuilder.io -o yaml


apiVersion: batch.tutorial.kubebuilder.io/v2
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/name: cronjob
    app.kubernetes.io/instance: cronjob-sample
    app.kubernetes.io/part-of: project
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: project
  name: cronjob-sample
spec:
  schedule:
    minute: "*/1"
  startingDeadlineSeconds: 60
  concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              args:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

and the v1 resource

kubectl get cronjobs.v1.batch.tutorial.kubebuilder.io -o yaml


apiVersion: batch.tutorial.kubebuilder.io/v1
kind: CronJob
metadata:
  labels:
    app.kubernetes.io/name: cronjob
    app.kubernetes.io/instance: cronjob-sample
    app.kubernetes.io/part-of: project
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: project
  name: cronjob-sample
spec:
  schedule: "*/1 * * * *"
  startingDeadlineSeconds: 60
  concurrencyPolicy: Allow # explicitly specify, but Allow is also default.
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: hello
              image: busybox
              args:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

Both should be filled out, and look equivalent to our v2 and v1 samples, respectively. Notice
that each has a different API version.

Finally, if we wait a bit, we should notice that our CronJob continues to reconcile, even though
our controller is written against our v1 API version.

kubectl and Preferred Versions

When we access our API types from Go code, we ask for a specific version by using that
version’s Go type (e.g. batchv2.CronJob ).

You might’ve noticed that the above invocations of kubectl looked a little different from
what we usually do -- namely, they specify a group-version-resource, instead of just a
resource.

When we write kubectl get cronjob , kubectl needs to figure out which group-version-
resource that maps to. To do this, it uses the discovery API to figure out the preferred
version of the cronjob resource. For CRDs, this is more-or-less the latest stable version
(see the CRD docs for specific details).

With our updates to CronJob, this means that kubectl get cronjob fetches the
batch/v2 group-version.

If we want to specify an exact version, we can use kubectl get
resource.version.group , as we do above.
You should always use fully-qualified group-version-resource syntax in scripts. kubectl
get resource is for humans, self-aware robots, and other sentient beings that can figure
out new versions. kubectl get resource.version.group is for everything else.

Troubleshooting
steps for troubleshooting

Tutorial: ComponentConfig
! Component Config is deprecated

ComponentConfig has been deprecated in Controller-Runtime since version 0.15.0 (more
info). Moreover, it has undergone breaking changes and is no longer functioning as
intended. As a result, Kubebuilder, which relies heavily on Controller Runtime, has also
deprecated this feature and no longer guarantees its functionality from version 3.11.0
onwards. You can find additional details on this issue here.

Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.

Nearly every project that is built for Kubernetes will eventually need to support passing
additional configuration into the controller. This could be to enable better logging, turn
on/off specific feature gates, set the sync period, or a myriad of other controls. Previously this
was commonly done using CLI flags that your main.go would parse to make them accessible
within your program. While this works, it’s not a forward-looking design, and the Kubernetes
community has been migrating the core components away from this and toward using
versioned config files, referred to as “component configs”.

The rest of this tutorial will show you how to configure your kubebuilder project with the
component config type, and then moves on to implementing a custom type so that you can
extend this capability.

Following Along vs Jumping Ahead

Note that most of this tutorial is generated from literate Go files that form a runnable
project, and live in the book source directory: docs/book/src/component-config-
tutorial/testdata/project.
Resources
Versioned Component Configuration File Design

Config v1alpha1 Go Docs

Changing things up
! Component Config is deprecated

ComponentConfig has been deprecated in Controller-Runtime since version 0.15.0 (more
info). Moreover, it has undergone breaking changes and is no longer functioning as
intended. As a result, Kubebuilder, which relies heavily on Controller Runtime, has also
deprecated this feature and no longer guarantees its functionality from version 3.11.0
onwards. You can find additional details on this issue here.

Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.

This tutorial will show you how to create a custom configuration file for your project by
modifying a project generated with the --component-config flag passed to the init
command. The full tutorial’s source can be found here. Make sure you’ve gone through the
installation steps before continuing.

New project:

# we'll use a domain of tutorial.kubebuilder.io,
# so all API groups will be <group>.tutorial.kubebuilder.io.
kubebuilder init --domain tutorial.kubebuilder.io --component-config

Setting up an existing project


If you’ve previously generated a project, you can add support for parsing the config file by
making the following changes to main.go .

First, add a new flag to specify the path that the component config file should be loaded
from.
var configFile string
flag.StringVar(&configFile, "config", "",
	"The controller will load its initial configuration from this file. "+
		"Omit this flag to use the default configuration values. "+
		"Command-line flags override configuration from this file.")

Now, we can set up the Options struct and check whether configFile is set; this allows
backwards compatibility. If it’s set, we’ll use the AndFrom function on Options to parse
and populate the Options from the config.

var err error
options := ctrl.Options{Scheme: scheme}
if configFile != "" {
	options, err = options.AndFrom(ctrl.ConfigFile().AtPath(configFile))
	if err != nil {
		setupLog.Error(err, "unable to load the config file")
		os.Exit(1)
	}
}

! Your Options may have defaults from flags.

If you have previously allowed other flags like --metrics-bind-addr or
--enable-leader-election , you’ll want to set those on the Options before loading the
config from the file.

Lastly, we’ll change the NewManager call to use the options variable we defined above.

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), options)

With that out of the way, we can get on to defining our new config!

Create the file /config/manager/controller_manager_config.yaml with the following
content:
apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
kind: ControllerManagerConfig
health:
  healthProbeBindAddress: :8081
metrics:
  bindAddress: 127.0.0.1:8080
webhook:
  port: 9443
leaderElection:
  leaderElect: true
  resourceName: ecaf1259.tutorial.kubebuilder.io
# leaderElectionReleaseOnCancel defines if the leader should step down voluntarily
# when the Manager ends. This requires the binary to immediately end when the
# Manager is stopped, otherwise, this setting is unsafe. Setting this significantly
# speeds up voluntary leader transitions as the new leader doesn't have to wait
# LeaseDuration time first.
# In the default scaffold provided, the program ends immediately after
# the manager stops, so it would be fine to enable this option. However,
# if you are doing or intend to do any operation such as performing cleanups
# after the manager stops then its usage might be unsafe.
# leaderElectionReleaseOnCancel: true

Update the file /config/manager/kustomization.yaml by adding at the bottom the following
content:

generatorOptions:
  disableNameSuffixHash: true

configMapGenerator:
- name: manager-config
  files:
  - controller_manager_config.yaml

Update the file default/kustomization.yaml by adding under the patchesStrategicMerge:
key the following patch:

patchesStrategicMerge:
# Mount the controller config file for loading manager configurations
# through a ComponentConfig type
- manager_config_patch.yaml

Update the file default/manager_config_patch.yaml by adding under the spec: key the
following patch:
spec:
  template:
    spec:
      containers:
      - name: manager
        args:
        - "--config=controller_manager_config.yaml"
        volumeMounts:
        - name: manager-config
          mountPath: /controller_manager_config.yaml
          subPath: controller_manager_config.yaml
      volumes:
      - name: manager-config
        configMap:
          name: manager-config

Defining your Config


! Component Config is deprecated

ComponentConfig has been deprecated in Controller-Runtime since version 0.15.0 (more
info). Moreover, it has undergone breaking changes and is no longer functioning as
intended. As a result, Kubebuilder, which relies heavily on Controller Runtime, has also
deprecated this feature and no longer guarantees its functionality from version 3.11.0
onwards. You can find additional details on this issue here.

Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.

Now that you have a component-config-based project, we need to customize the values that
are passed into the controller. To do this, we can take a look at
config/manager/controller_manager_config.yaml .

$ vim controller_manager_config.yaml

apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
kind: ControllerManagerConfig
metrics:
  bindAddress: 127.0.0.1:8080
webhook:
  port: 9443
leaderElection:
  leaderElect: true
  resourceName: 80807133.tutorial.kubebuilder.io

To see all the available fields, you can look at the v1alpha1 Controller Runtime config
ControllerManagerConfiguration .
Using a Custom Type
! Component Config is deprecated

ComponentConfig has been deprecated in Controller-Runtime since version 0.15.0 (more
info). Moreover, it has undergone breaking changes and is no longer functioning as
intended. As a result, Kubebuilder, which relies heavily on Controller Runtime, has also
deprecated this feature and no longer guarantees its functionality from version 3.11.0
onwards. You can find additional details on this issue here.

Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.

! Built-in vs Custom Type

If you don’t need to add custom fields to configure your project you can stop now and
move on, if you’d like to be able to pass additional information keep reading.

If your project needs to accept additional non-controller-runtime-specific configuration, e.g.
ClusterName , Region , or anything else serializable into yaml, you can do this by using
kubebuilder to create a new type and then updating your main.go to set up the new type for
parsing.

The rest of this tutorial will walk through implementing a custom component config type.

Adding a new Config Type


! Component Config is deprecated

ComponentConfig has been deprecated in Controller-Runtime since version 0.15.0 (more
info). Moreover, it has undergone breaking changes and is no longer functioning as
intended. As a result, Kubebuilder, which relies heavily on Controller Runtime, has also
deprecated this feature and no longer guarantees its functionality from version 3.11.0
onwards. You can find additional details on this issue here.

Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.

To scaffold out a new config Kind, we can use kubebuilder create api .

kubebuilder create api --group config --version v2 --kind ProjectConfig --resource --controller=false --make=false

Then, run make build to implement the interface for your API type, which would generate the
file zz_generated.deepcopy.go .
Use --controller=false

You may notice this command from the CronJob tutorial, although here we explicitly set
--controller=false because ProjectConfig is not intended to be an API extension and
cannot be reconciled.

This will create a new type file in api/config/v2/ for the ProjectConfig kind. We’ll need to
change this file to embed the v1alpha1.ControllerManagerConfigurationSpec .

$ vim projectconfig_types.go

// Apache License (hidden) ◀

We start out simply enough: we import the config/v1alpha1 API group, which is exposed
through ControllerRuntime.

package v2

import (
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
cfg "sigs.k8s.io/controller-runtime/pkg/config/v1alpha1"
)

// +kubebuilder:object:root=true

Next, we’ll remove the default ProjectConfigSpec and ProjectConfigList then we’ll embed
cfg.ControllerManagerConfigurationSpec in ProjectConfig .

// ProjectConfig is the Schema for the projectconfigs API
type ProjectConfig struct {
	metav1.TypeMeta `json:",inline"`

	// ControllerManagerConfigurationSpec returns the configurations for controllers
	cfg.ControllerManagerConfigurationSpec `json:",inline"`

	ClusterName string `json:"clusterName,omitempty"`
}

If you haven’t, you’ll also need to remove the ProjectConfigList from the
SchemeBuilder.Register .

func init() {
SchemeBuilder.Register(&ProjectConfig{})
}

Lastly, we’ll change the main.go to reference this type for parsing the file.
Updating main
! Component Config is deprecated

ComponentConfig has been deprecated in Controller-Runtime since version 0.15.0 (more
info). Moreover, it has undergone breaking changes and is no longer functioning as
intended. As a result, Kubebuilder, which relies heavily on Controller Runtime, has also
deprecated this feature and no longer guarantees its functionality from version 3.11.0
onwards. You can find additional details on this issue here.

Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.

Once you have defined your new custom component config type, we need to make sure our
new config type has been imported and the types are registered with the scheme. If you used
kubebuilder create api , this should have been automated.

import (
// ... other imports
configv2 "tutorial.kubebuilder.io/project/apis/config/v2"
// +kubebuilder:scaffold:imports
)

With the package imported we can confirm the types have been added.

func init() {
// ... other scheme registrations
utilruntime.Must(configv2.AddToScheme(scheme))
// +kubebuilder:scaffold:scheme
}

Lastly, we need to change the options parsing in main.go to use this new type. To do this we’ll
chain OfKind onto ctrl.ConfigFile() and pass in a pointer to the config kind.

var err error
ctrlConfig := configv2.ProjectConfig{}
options := ctrl.Options{Scheme: scheme}
if configFile != "" {
	options, err = options.AndFrom(ctrl.ConfigFile().AtPath(configFile).OfKind(&ctrlConfig))
	if err != nil {
		setupLog.Error(err, "unable to load the config file")
		os.Exit(1)
	}
}

Now, if you need to use the .clusterName field we defined in our custom kind, you can call
ctrlConfig.ClusterName , which will be populated from the config file supplied.
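For example, a minimal sketch (assuming the setup above and the scaffolded setupLog):

// Sketch only: once parsed, custom fields on ctrlConfig are ordinary Go values.
setupLog.Info("loaded project config", "clusterName", ctrlConfig.ClusterName)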
Defining your Custom Config
! Component Config is deprecated

ComponentConfig has been deprecated in Controller-Runtime since version 0.15.0 (more
info). Moreover, it has undergone breaking changes and is no longer functioning as
intended. As a result, Kubebuilder, which relies heavily on Controller Runtime, has also
deprecated this feature and no longer guarantees its functionality from version 3.11.0
onwards. You can find additional details on this issue here.

Please be aware that Kubebuilder will be forced to remove this option in an upcoming release.

Now that you have a custom component config, we change the
config/manager/controller_manager_config.yaml to use the new GVK you defined.

$ vim project/config/manager/controller_manager_config.yaml

apiVersion: controller-runtime.sigs.k8s.io/v1alpha1
kind: ControllerManagerConfig
metadata:
  labels:
    app.kubernetes.io/name: controllermanagerconfig
    app.kubernetes.io/instance: controller-manager-configuration
    app.kubernetes.io/component: manager
    app.kubernetes.io/created-by: project
    app.kubernetes.io/part-of: project
    app.kubernetes.io/managed-by: kustomize
health:
  healthProbeBindAddress: :8081
metrics:
  bindAddress: 127.0.0.1:8080
webhook:
  port: 9443
leaderElection:
  leaderElect: true
  resourceName: 80807133.tutorial.kubebuilder.io
clusterName: example-test

This type uses the new ProjectConfig kind under the GVK
config.tutorial.kubebuilder.io/v2 , with these custom configs we can add any yaml
serializable fields that your controller needs and begin to reduce the reliance on flags to
configure your project.

Migrations
Migrating between project structures in Kubebuilder generally involves a bit of manual work.

This section details what’s required to migrate, between different versions of Kubebuilder
scaffolding, as well as to more complex project layout structures.
Migration guides from Legacy versions <
3.0.0
Follow the migration guides from the legacy Kubebuilder versions up the required latest v3x
version. Note that from v3, a new ecosystem using plugins is introduced for better
maintainability, reusability and user experience .

For more info, see the design docs of:

Extensible CLI and Scaffolding Plugins: phase 1


Extensible CLI and Scaffolding Plugins: phase 1.5
Extensible CLI and Scaffolding Plugins - Phase 2

Also, you can check the Plugins section.

Kubebuilder v1 vs v2 (Legacy v1.0.0+ to


v2.0.0 Kubebuilder CLI versions)
This document cover all breaking changes when migrating from v1 to v2.

The details of all changes (breaking or otherwise) can be found in controller-runtime,


controller-tools and kubebuilder release notes.

Common changes
V2 project uses go modules. But kubebuilder will continue to support dep until go 1.13 is out.

controller-runtime
Client.List now uses functional options ( List(ctx, list, ...option) ) instead of
List(ctx, ListOptions, list) .

Client.DeleteAllOf was added to the Client interface.

Metrics are on by default now.

A number of packages under pkg/runtime have been moved, with their old locations
deprecated. The old locations will be removed before controller-runtime v1.0.0. See the
godocs for more information.
Webhook-related

Automatic certificate generation for webhooks has been removed, and webhooks will no
longer self-register. Use controller-tools to generate a webhook configuration. If you
need certificate generation, we recommend using cert-manager. Kubebuilder v2 will
scaffold out cert manager configs for you to use -- see the Webhook Tutorial for more
details.

The builder package now has separate builders for controllers and webhooks, which
facilitates choosing which to run.

controller-tools
The generator framework has been rewritten in v2. It still works the same as before in many
cases, but be aware that there are some breaking changes. Please check marker
documentation for more details.

Kubebuilder
Kubebuilder v2 introduces a simplified project layout. You can find the design doc here.

In v1, the manager is deployed as a StatefulSet , while it’s deployed as a Deployment in


v2.

The kubebuilder create webhook command was added to scaffold


mutating/validating/conversion webhooks. It replaces the kubebuilder alpha webhook
command.

v2 uses distroless/static instead of Ubuntu as base image. This reduces image size
and attack surface.

v2 requires kustomize v3.1.0+.

Migration from v1 to v2
Make sure you understand the differences between Kubebuilder v1 and v2 before continuing

Please ensure you have followed the installation guide to install the required components.

The recommended way to migrate a v1 project is to create a new v2 project and copy over the
API and the reconciliation code. The conversion will end up with a project that looks like a
native v2 project. However, in some cases, it’s possible to do an in-place upgrade (i.e. reuse
the v1 project layout, upgrading controller-runtime and controller-tools.
Let’s take as example an V1 project and migrate it to Kubebuilder v2. At the end, we should
have something that looks like the example v2 project.

Preparation
We’ll need to figure out what the group, version, kind and domain are.

Let’s take a look at our current v1 project structure:

pkg/
├── apis
│ ├── addtoscheme_batch_v1.go
│ ├── apis.go
│ └── batch
│ ├── group.go
│ └── v1
│ ├── cronjob_types.go
│ ├── cronjob_types_test.go
│ ├── doc.go
│ ├── register.go
│ ├── v1_suite_test.go
│ └── zz_generated.deepcopy.go
├── controller
└── webhook

All of our API information is stored in pkg/apis/batch , so we can look there to find what we
need to know.

In cronjob_types.go , we can find

type CronJob struct {...}

In register.go , we can find

SchemeGroupVersion = schema.GroupVersion{Group: "batch.tutorial.kubebuilder.io",


Version: "v1"}

Putting that together, we get CronJob as the kind, and batch.tutorial.kubebuilder.io/v1


as the group-version

Initialize a v2 Project
Now, we need to initialize a v2 project. Before we do that, though, we’ll need to initialize a new
go module if we’re not on the gopath :
go mod init tutorial.kubebuilder.io/project

Then, we can finish initializing the project with kubebuilder:

kubebuilder init --domain tutorial.kubebuilder.io

Migrate APIs and Controllers


Next, we’ll re-scaffold out the API types and controllers. Since we want both, we’ll say yes to
both the API and controller prompts when asked what parts we want to scaffold:

kubebuilder create api --group batch --version v1 --kind CronJob

If you’re using multiple groups, some manual work is required to migrate. Please follow this
for more details.

Migrate the APIs

Now, let’s copy the API definition from pkg/apis/batch/v1/cronjob_types.go to


api/v1/cronjob_types.go . We only need to copy the implementation of the Spec and
Status fields.

We can replace the +k8s:deepcopy-gen:interfaces=... marker (which is deprecated in


kubebuilder) with +kubebuilder:object:root=true .

We don’t need the following markers any more (they’re not used anymore, and are relics from
much older versions of Kubebuilder):

// +genclient
// +k8s:openapi-gen=true

Our API types should look like the following:

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// CronJob is the Schema for the cronjobs API
type CronJob struct {...}

// +kubebuilder:object:root=true

// CronJobList contains a list of CronJob


type CronJobList struct {...}
Migrate the Controllers

Now, let’s migrate the controller reconciler code from


pkg/controller/cronjob/cronjob_controller.go to controllers/cronjob_controller.go .

We’ll need to copy

the fields from the ReconcileCronJob struct to CronJobReconciler


the contents of the Reconcile function
the rbac related markers to the new file.
the code under func add(mgr manager.Manager, r reconcile.Reconciler) error to
func SetupWithManager

Migrate the Webhooks


If you don’t have a webhook, you can skip this section.

Webhooks for Core Types and External CRDs

If you are using webhooks for Kubernetes core types (e.g. Pods), or for an external CRD that is
not owned by you, you can refer the controller-runtime example for builtin types and do
something similar. Kubebuilder doesn’t scaffold much for these cases, but you can use the
library in controller-runtime.

Scaffold Webhooks for our CRDs

Now let’s scaffold the webhooks for our CRD (CronJob). We’ll need to run the following
command with the --defaulting and --programmatic-validation flags (since our test
project uses defaulting and validating webhooks):

kubebuilder create webhook --group batch --version v1 --kind CronJob --defaulting


--programmatic-validation

Depending on how many CRDs need webhooks, we may need to run the above command
multiple times with different Group-Version-Kinds.

Now, we’ll need to copy the logic for each webhook. For validating webhooks, we can copy the
contents from func validatingCronJobFn in
pkg/default_server/cronjob/validating/cronjob_create_handler.go to func
ValidateCreate in api/v1/cronjob_webhook.go and then the same for update .

Similarly, we’ll copy from func mutatingCronJobFn to func Default .


Webhook Markers

When scaffolding webhooks, Kubebuilder v2 adds the following markers:

// These are v2 markers

// This is for the mutating webhook


// +kubebuilder:webhook:path=/mutate-batch-tutorial-kubebuilder-io-v1-
cronjob,mutating=true,failurePolicy=fail,groups=batch.tutorial.kubebuilder.io,resou

...

// This is for the validating webhook


// +kubebuilder:webhook:path=/validate-batch-tutorial-kubebuilder-io-v1-
cronjob,mutating=false,failurePolicy=fail,groups=batch.tutorial.kubebuilder.io,reso

The default verbs are verbs=create;update . We need to ensure verbs matches what we
need. For example, if we only want to validate creation, then we would change it to
verbs=create .

We also need to ensure failure-policy is still the same.

Markers like the following are no longer needed (since they deal with self-deploying certificate
configuration, which was removed in v2):

// v1 markers
// +kubebuilder:webhook:port=9876,cert-dir=/tmp/cert
// +kubebuilder:webhook:service=test-system:webhook-service,selector=app:webhook-
server
// +kubebuilder:webhook:secret=test-system:webhook-server-secret
// +kubebuilder:webhook:mutating-webhook-config-name=test-mutating-webhook-cfg
// +kubebuilder:webhook:validating-webhook-config-name=test-validating-webhook-
cfg

In v1, a single webhook marker may be split into multiple ones in the same paragraph. In v2,
each webhook must be represented by a single marker.

Others
If there are any manual updates in main.go in v1, we need to port the changes to the new
main.go . We’ll also need to ensure all of the needed schemes have been registered.

If there are additional manifests added under config directory, port them as well.

Change the image name in the Makefile if needed.


Verification
Finally, we can run make and make docker-build to ensure things are working fine.

Kubebuilder v2 vs v3 (Legacy Kubebuilder


v2.0.0+ layout to 3.0.0+)
This document covers all breaking changes when migrating from v2 to v3.

The details of all changes (breaking or otherwise) can be found in controller-runtime,


controller-tools and kb-releases release notes.

Common changes
v3 projects use Go modules and request Go 1.18+. Dep is no longer supported for
dependency management.

Kubebuilder
Preliminary support for plugins was added. For more info see the Extensible CLI and
Scaffolding Plugins: phase 1, the Extensible CLI and Scaffolding Plugins: phase 1.5 and
the Extensible CLI and Scaffolding Plugins - Phase 2 design docs. Also, you can check the
Plugins section.

The PROJECT file now has a new layout. It stores more information about what
resources are in use, to better enable plugins to make useful decisions when scaffolding.

Furthermore, the PROJECT file itself is now versioned: the version field corresponds to
the version of the PROJECT file itself, while the layout field indicates the scaffolding &
primary plugin version in use.

The version of the image gcr.io/kubebuilder/kube-rbac-proxy , which is an optional


component enabled by default to secure the request made against the manager, was
updated from 0.5.0 to 0.11.0 to address security concerns. The details of all changes
can be found in kube-rbac-proxy.

TL;DR of the New go/v3 Plugin


More details on this can be found at here, but for the highlights, check below
Default plugin
Projects scaffolded with Kubebuilder v3 will use the `go.kubebuilder.io/v3` plugin by default.

Scaffolded/Generated API version changes:

Use apiextensions/v1 for generated CRDs ( apiextensions/v1beta1 was


deprecated in Kubernetes 1.16 )
Use admissionregistration.k8s.io/v1 for generated webhooks
( admissionregistration.k8s.io/v1beta1 was deprecated in Kubernetes 1.16 )
Use cert-manager.io/v1 for the certificate manager when webhooks are used
( cert-manager.io/v1alpha2 was deprecated in Cert-Manager 0.14 . More info:
CertManager v1.0 docs)

Code changes:

The manager flags --metrics-addr and enable-leader-election now are named


--metrics-bind-address and --leader-elect to be more aligned with core
Kubernetes Components. More info: #1839
Liveness and Readiness probes are now added by default using healthz.Ping .
A new option to create the projects using ComponentConfig is introduced. For
more info see its enhancement proposal and the Component config tutorial
Manager manifests now use SecurityContext to address security concerns. More
info: #1637

Misc:

Support for controller-tools v0.9.0 (for go/v2 it is v0.3.0 and previously it was
v0.2.5 )
Support for controller-runtime v0.12.1 (for go/v2 it is v0.6.4 and previously it
was v0.5.0 )
Support for kustomize v3.8.7 (for go/v2 it is v3.5.4 and previously it was
v3.1.0 )
Required Envtest binaries are automatically downloaded
The minimum Go version is now 1.18 (previously it was 1.13 ).

! Project customizations

After using the CLI to create your project, you are free to customise how you see fit. Bear
in mind, that it is not recommended to deviate from the proposed layout unless you
know what you are doing.

For example, you should refrain from moving the scaffolded files, doing so will make it
difficult in upgrading your project in the future. You may also lose the ability to use some
of the CLI features and helpers. For further information on the project layout, see the doc
What’s in a basic project?
Migrating to Kubebuilder v3
So you want to upgrade your scaffolding to use the latest and greatest features then, follow
up the following guide which will cover the steps in the most straightforward way to allow you
to upgrade your project to get all latest changes and improvements.

! Apple Silicon (M1)

The current scaffold done by the CLI ( go/v3 ) uses kubernetes-sigs/kustomize v3 which
does not provide a valid binary for Apple Silicon ( darwin/arm64 ). Therefore, you can use
the go/v4 plugin instead which provides support for this platform:

kubebuilder init --domain my.domain --repo my.domain/guestbook --


plugins=go/v4

Migration Guide v2 to V3 (Recommended)

By updating the files manually

So you want to use the latest version of Kubebuilder CLI without changing your scaffolding
then, check the following guide which will describe the manually steps required for you to
upgrade only your PROJECT version and starts to use the plugins versions.

This way is more complex, susceptible to errors, and success cannot be assured. Also, by
following these steps you will not get the improvements and bug fixes in the default
generated project files.

You will check that you can still using the previous layout by using the go/v2 plugin which will
not upgrade the controller-runtime and controller-tools to the latest version used with go/v3
becuase of its breaking changes. By checking this guide you know also how to manually
change the files to use the go/v3 plugin and its dependencies versions.

Migrating to Kubebuilder v3 by updating the files manually

Migration from v2 to v3
Make sure you understand the differences between Kubebuilder v2 and v3 before continuing.

Please ensure you have followed the installation guide to install the required components.

The recommended way to migrate a v2 project is to create a new v3 project and copy over the
API and the reconciliation code. The conversion will end up with a project that looks like a
native v3 project. However, in some cases, it’s possible to do an in-place upgrade (i.e. reuse
the v2 project layout, upgrading controller-runtime and controller-tools).
Initialize a v3 Project

Project name

For the rest of this document, we are going to use migration-project as the project
name and tutorial.kubebuilder.io as the domain. Please, select and use appropriate
values for your case.

Create a new directory with the name of your project. Note that this name is used in the
scaffolds to create the name of your manager Pod and of the Namespace where the Manager
is deployed by default.

$ mkdir migration-project-name
$ cd migration-project-name

Now, we need to initialize a v3 project. Before we do that, though, we’ll need to initialize a new
go module if we’re not on the GOPATH . While technically this is not needed inside GOPATH , it is
still recommended.

go mod init tutorial.kubebuilder.io/migration-project

The module of your project can found in the in the `go.mod` file at the root of your
project:

module tutorial.kubebuilder.io/migration-project

Then, we can finish initializing the project with kubebuilder.

kubebuilder init --domain tutorial.kubebuilder.io

The domain of your project can be found in the PROJECT file:

...
domain: tutorial.kubebuilder.io
...

Migrate APIs and Controllers


Next, we’ll re-scaffold out the API types and controllers.
Scaffolding both the API types and controllers

For this example, we are going to consider that we need to scaffold both the API types
and the controllers, but remember that this depends on how you scaffolded them in your
original project.

kubebuilder create api --group batch --version v1 --kind CronJob

How to still keep `apiextensions.k8s.io/v1beta1` for CRDs?

From now on, the CRDs that will be created by controller-gen will be using the Kubernetes
API version apiextensions.k8s.io/v1 by default, instead of
apiextensions.k8s.io/v1beta1 .

The apiextensions.k8s.io/v1beta1 was deprecated in Kubernetes 1.16 and was


removed in Kubernetes 1.22 .

So, if you would like to keep using the previous version use the flag --crd-
version=v1beta1 in the above command which is only needed if you want your operator
to support Kubernetes 1.15 and earlier. However, it is no longer recommended.

Migrate the APIs

If you're using multiple groups

Please run kubebuilder edit --multigroup=true to enable multi-group support before


migrating the APIs and controllers. Please see this for more details.

Now, let’s copy the API definition from api/v1/<kind>_types.go in our old project to the new
one.

These files have not been modified by the new plugin, so you should be able to replace your
freshly scaffolded files by your old one. There may be some cosmetic changes. So you can
choose to only copy the types themselves.

Migrate the Controllers

Now, let’s migrate the controller code from controllers/cronjob_controller.go in our old
project to the new one. There is a breaking change and there may be some cosmetic changes.

The new Reconcile method receives the context as an argument now, instead of having to
create it with context.Background() . You can copy the rest of the code in your old controller
to the scaffolded methods replacing:

func (r *CronJobReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {


ctx := context.Background()
log := r.Log.WithValues("cronjob", req.NamespacedName)

With:

func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request)


(ctrl.Result, error) {
log := r.Log.WithValues("cronjob", req.NamespacedName)

! Controller-runtime version updated has breaking changes

Check sigs.k8s.io/controller-runtime release docs from 0.8.0+ version for breaking


changes.

Migrate the Webhooks

Skip

If you don’t have any webhooks, you can skip this section.

Now let’s scaffold the webhooks for our CRD (CronJob). We’ll need to run the following
command with the --defaulting and --programmatic-validation flags (since our test
project uses defaulting and validating webhooks):

kubebuilder create webhook --group batch --version v1 --kind CronJob --defaulting


--programmatic-validation

How to keep using `apiextensions.k8s.io/v1beta1` for Webhooks?

From now on, the Webhooks that will be created by Kubebuilder using by default the
Kubernetes API version admissionregistration.k8s.io/v1 instead of
admissionregistration.k8s.io/v1beta1 and the cert-manager.io/v1 to replace cert-
manager.io/v1alpha2 .

Note that apiextensions/v1beta1 and admissionregistration.k8s.io/v1beta1 were


deprecated in Kubernetes 1.16 and will be removed in Kubernetes 1.22 . If you use
apiextensions/v1 and admissionregistration.k8s.io/v1 then you need to use cert-
manager.io/v1 which will be the API adopted per Kubebuilder CLI by default in this case.
The API cert-manager.io/v1alpha2 is not compatible with the latest Kubernetes API
versions.

So, if you would like to keep using the previous version use the flag --webhook-
version=v1beta1 in the above command which is only needed if you want your operator
to support Kubernetes 1.15 and earlier.

Now, let’s copy the webhook definition from api/v1/<kind>_webhook.go from our old project
to the new one.

Others
If there are any manual updates in main.go in v2, we need to port the changes to the new
main.go . We’ll also need to ensure all of the needed schemes have been registered.

If there are additional manifests added under config directory, port them as well.

Change the image name in the Makefile if needed.

Verification
Finally, we can run make and make docker-build to ensure things are working fine.

Migration from v2 to v3 by updating the


files manually
Make sure you understand the differences between Kubebuilder v2 and v3 before continuing

Please ensure you have followed the installation guide to install the required components.

The following guide describes the manual steps required to upgrade your config version and
start using the plugin-enabled version.

This way is more complex, susceptible to errors, and success cannot be assured. Also, by
following these steps you will not get the improvements and bug fixes in the default
generated project files.

Usually you will only try to do it manually if you customized your project and deviated too
much from the proposed scaffold. Before continuing, ensure that you understand the note
about project customizations. Note that you might need to spend more effort to do this
process manually than organize your project customizations to follow up the proposed layout
and keep your project maintainable and upgradable with less effort in the future.
The recommended upgrade approach is to follow the Migration Guide v2 to V3 instead.

Migration from project config version “2” to “3”


Migrating between project configuration versions involves additions, removals, and/or
changes to fields in your project’s PROJECT file, which is created by running the init
command.

The PROJECT file now has a new layout. It stores more information about what resources are
in use, to better enable plugins to make useful decisions when scaffolding.

Furthermore, the PROJECT file itself is now versioned. The version field corresponds to the
version of the PROJECT file itself, while the layout field indicates the scaffolding and the
primary plugin version in use.

Steps to migrate

The following steps describe the manual changes required to bring the project configuration
file ( PROJECT ). These change will add the information that Kubebuilder would add when
generating the file. This file can be found in the root directory.

Add the projectName

The project name is the name of the project directory in lowercase:

...
projectName: example
...

Add the layout

The default plugin layout which is equivalent to the previous version is


go.kubebuilder.io/v2 :

...
layout:
- go.kubebuilder.io/v2
...

Update the version

The version field represents the version of project’s layout. Update this to "3" :
...
version: "3"
...

Add the resource data

The attribute resources represents the list of resources scaffolded in your project.

You will need to add the following data for each resource added to the project.

Add the Kubernetes API version by adding resources[entry].api.crdVersion: v1beta1:

...
resources:
- api:
...
crdVersion: v1beta1
domain: my.domain
group: webapp
kind: Guestbook
...

Add the scope used do scaffold the CRDs by adding resources[entry].api.namespaced: true unless
they were cluster-scoped:

...
resources:
- api:
...
namespaced: true
group: webapp
kind: Guestbook
...

If you have a controller scaffolded for the API then, add resources[entry].controller: true:

...
resources:
- api:
...
controller: true
group: webapp
kind: Guestbook
Add the resource domain such as resources[entry].domain: testproject.org which usually will be
the project domain unless the API scaffold is a core type and/or an external type:

...
resources:
- api:
...
domain: testproject.org
group: webapp
kind: Guestbook

Supportability

Kubebuilder only supports core types and the APIs scaffolded in the project by default
unless you manually change the files you will be unable to work with external-types.

For core types, the domain value will be k8s.io or empty.

However, for an external-type you might leave this attribute empty. We cannot suggest
what would be the best approach in this case until it become officially supported by the
tool. For further information check the issue #1999.

Note that you will only need to add the domain if your project has a scaffold for a core type
API which the Domain value is not empty in Kubernetes API group qualified scheme definition.
(For example, see here that for Kinds from the API apps it has not a domain when see here
that for Kinds from the API authentication its domain is k8s.io )

Check the following the list to know the core types supported and its domain:

Core Type Domain


admission “k8s.io”
admissionregistration “k8s.io”
apps empty
auditregistration “k8s.io”
apiextensions “k8s.io”
authentication “k8s.io”
authorization “k8s.io”
autoscaling empty
batch empty
certificates “k8s.io”
coordination “k8s.io”
core empty
events “k8s.io”
extensions empty
Core Type Domain
imagepolicy “k8s.io”
networking “k8s.io”
node “k8s.io”
metrics “k8s.io”
policy empty
rbac.authorization “k8s.io”
scheduling “k8s.io”
setting “k8s.io”
storage “k8s.io”

Following an example where a controller was scaffold for the core type Kind Deployment via
the command create api --group apps --version v1 --kind Deployment --
controller=true --resource=false --make=false :

- controller: true
group: apps
kind: Deployment
path: k8s.io/api/apps/v1
version: v1

Add the resources[entry].path with the import path for the api:

Path

If you did not scaffold an API but only generate a controller for the API(GKV) informed
then, you do not need to add the path. Note, that it usually happens when you add a
controller for an external or core type.

Kubebuilder only supports core types and the APIs scaffolded in the project by default
unless you manually change the files you will be unable to work with external-types.

The path will always be the import path used in your Go files to use the API.

...
resources:
- api:
...
...
group: webapp
kind: Guestbook
path: example/api/v1
If your project is using webhooks then, add resources[entry].webhooks.[type]: true for each type
generated and then, add resources[entry].webhooks.webhookVersion: v1beta1:

Webhooks

The valid types are: defaulting , validation and conversion . Use the webhook type
used to scaffold the project.

The Kubernetes API version used to do the webhooks scaffolds in Kubebuilder v2 is


v1beta1 . Then, you will add the webhookVersion: v1beta1 for all cases.

resources:
- api:
...
...
group: webapp
kind: Guestbook
webhooks:
defaulting: true
validation: true
webhookVersion: v1beta1

Check your PROJECT file

Now ensure that your PROJECT file has the same information when the manifests are
generated via Kubebuilder V3 CLI.

For the QuickStart example, the PROJECT file manually updated to use
go.kubebuilder.io/v2 would look like:

domain: my.domain
layout:
- go.kubebuilder.io/v2
projectName: example
repo: example
resources:
- api:
crdVersion: v1
namespaced: true
controller: true
domain: my.domain
group: webapp
kind: Guestbook
path: example/api/v1
version: v1
version: "3"

You can check the differences between the previous layout( version 2 ) and the current
format( version 3 ) with the go.kubebuilder.io/v2 by comparing an example scenario
which involves more than one API and webhook, see:
Example (Project version 2)

domain: testproject.org
repo: sigs.k8s.io/kubebuilder/example
resources:
- group: crew
kind: Captain
version: v1
- group: crew
kind: FirstMate
version: v1
- group: crew
kind: Admiral
version: v1
version: "2"

Example (Project version 3)


domain: testproject.org
layout:
- go.kubebuilder.io/v2
projectName: example
repo: sigs.k8s.io/kubebuilder/example
resources:
- api:
crdVersion: v1
namespaced: true
controller: true
domain: testproject.org
group: crew
kind: Captain
path: example/api/v1
version: v1
webhooks:
defaulting: true
validation: true
webhookVersion: v1
- api:
crdVersion: v1
namespaced: true
controller: true
domain: testproject.org
group: crew
kind: FirstMate
path: example/api/v1
version: v1
webhooks:
conversion: true
webhookVersion: v1
- api:
crdVersion: v1
controller: true
domain: testproject.org
group: crew
kind: Admiral
path: example/api/v1
plural: admirales
version: v1
webhooks:
defaulting: true
webhookVersion: v1
version: "3"

Verification

In the steps above, you updated only the PROJECT file which represents the project
configuration. This configuration is useful only for the CLI tool. It should not affect how your
project behaves.

There is no option to verify that you properly updated the configuration file. The best way to
ensure the configuration file has the correct V3+ fields is to initialize a project with the same
API(s), controller(s), and webhook(s) in order to compare generated configuration with the
manually changed configuration.

If you made mistakes in the above process, you will likely face issues using the CLI.

Update your project to use go/v3 plugin


Migrating between project plugins involves additions, removals, and/or changes to files
created by any plugin-supported command, e.g. init and create . A plugin supports one or
more project config versions; make sure you upgrade your project’s config version to the
latest supported by your target plugin version before upgrading plugin versions.

The following steps describe the manual changes required to modify the project’s layout
enabling your project to use the go/v3 plugin. These steps will not help you address all the
bug fixes of the already generated scaffolds.

! Deprecated APIs

The following steps will not migrate the API versions which are deprecated
apiextensions.k8s.io/v1beta1 , admissionregistration.k8s.io/v1beta1 , cert-
manager.io/v1alpha2 .

Steps to migrate

Update your plugin version into the PROJECT file

Before updating the layout , please ensure you have followed the above steps to upgrade
your Project version to 3 . Once you have upgraded the project version, update the layout to
the new plugin version go.kubebuilder.io/v3 as follows:

domain: my.domain
layout:
- go.kubebuilder.io/v3
...

Upgrade the Go version and its dependencies:

Ensure that your go.mod is using Go version 1.15 and the following dependency versions:
module example

go 1.18

require (
github.com/onsi/ginkgo/v2 v2.1.4
github.com/onsi/gomega v1.19.0
k8s.io/api v0.24.0
k8s.io/apimachinery v0.24.0
k8s.io/client-go v0.24.0
sigs.k8s.io/controller-runtime v0.12.1
)

Update the golang image

In the Dockerfile, replace:

# Build the manager binary


FROM golang:1.13 as builder

With:

# Build the manager binary


FROM golang:1.16 as builder

Update your Makefile

To allow controller-gen to scaffold the nw Kubernetes APIs

To allow controller-gen and the scaffolding tool to use the new API versions, replace:

CRD_OPTIONS ?= "crd:trivialVersions=true"

With:

CRD_OPTIONS ?= "crd"

To allow automatic downloads

To allow downloading the newer versions of the Kubernetes binaries required by Envtest into
the testbin/ directory of your project instead of the global setup, replace:

# Run tests
test: generate fmt vet manifests
go test ./... -coverprofile cover.out

With:
# Setting SHELL to bash allows bash commands to be executed by recipes.
# Options are set to exit when a recipe line exits non-zero or a piped command
fails.
SHELL = /usr/bin/env bash -o pipefail
.SHELLFLAGS = -ec

ENVTEST_ASSETS_DIR=$(shell pwd)/testbin
test: manifests generate fmt vet ## Run tests.
mkdir -p ${ENVTEST_ASSETS_DIR}
test -f ${ENVTEST_ASSETS_DIR}/setup-envtest.sh || curl -sSLo
${ENVTEST_ASSETS_DIR}/setup-envtest.sh
https://raw.githubusercontent.com/kubernetes-sigs/controller-
runtime/v0.8.3/hack/setup-envtest.sh
source ${ENVTEST_ASSETS_DIR}/setup-envtest.sh; fetch_envtest_tools
$(ENVTEST_ASSETS_DIR); setup_envtest_env $(ENVTEST_ASSETS_DIR); go test ./... -
coverprofile cover.out

Envtest binaries

The Kubernetes binaries that are required for the Envtest were upgraded from 1.16.4 to
1.22.1 . You can still install them globally by following these installation instructions.

To upgrade controller-gen and kustomize dependencies versions used

To upgrade the controller-gen and kustomize version used to generate the manifests
replace:

# find or download controller-gen


# download controller-gen if necessary
controller-gen:
ifeq (, $(shell which controller-gen))
@{ \
set -e ;\
CONTROLLER_GEN_TMP_DIR=$$(mktemp -d) ;\
cd $$CONTROLLER_GEN_TMP_DIR ;\
go mod init tmp ;\
go get sigs.k8s.io/controller-tools/cmd/controller-gen@v0.2.5 ;\
rm -rf $$CONTROLLER_GEN_TMP_DIR ;\
}
CONTROLLER_GEN=$(GOBIN)/controller-gen
else
CONTROLLER_GEN=$(shell which controller-gen)
endif

With:
##@ Build Dependencies

## Location to install dependencies to


LOCALBIN ?= $(shell pwd)/bin
$(LOCALBIN):
mkdir -p $(LOCALBIN)

## Tool Binaries
KUSTOMIZE ?= $(LOCALBIN)/kustomize
CONTROLLER_GEN ?= $(LOCALBIN)/controller-gen
ENVTEST ?= $(LOCALBIN)/setup-envtest

## Tool Versions
KUSTOMIZE_VERSION ?= v3.8.7
CONTROLLER_TOOLS_VERSION ?= v0.9.0

KUSTOMIZE_INSTALL_SCRIPT ?= "https://raw.githubusercontent.com/kubernetes-
sigs/kustomize/master/hack/install_kustomize.sh"
.PHONY: kustomize
kustomize: $(KUSTOMIZE) ## Download kustomize locally if necessary.
$(KUSTOMIZE): $(LOCALBIN)
test -s $(LOCALBIN)/kustomize || { curl -Ss $(KUSTOMIZE_INSTALL_SCRIPT) |
bash -s -- $(subst v,,$(KUSTOMIZE_VERSION)) $(LOCALBIN); }

.PHONY: controller-gen
controller-gen: $(CONTROLLER_GEN) ## Download controller-gen locally if
necessary.
$(CONTROLLER_GEN): $(LOCALBIN)
test -s $(LOCALBIN)/controller-gen || GOBIN=$(LOCALBIN) go install
sigs.k8s.io/controller-tools/cmd/controller-gen@$(CONTROLLER_TOOLS_VERSION)

.PHONY: envtest
envtest: $(ENVTEST) ## Download envtest-setup locally if necessary.
$(ENVTEST): $(LOCALBIN)
test -s $(LOCALBIN)/setup-envtest || GOBIN=$(LOCALBIN) go install
sigs.k8s.io/controller-runtime/tools/setup-envtest@latest

And then, to make your project use the kustomize version defined in the Makefile, replace all
usage of kustomize with $(KUSTOMIZE)

Makefile

You can check all changes applied to the Makefile by looking in the samples projects
generated in the testdata directory of the Kubebuilder repository or by just by creating
a new project with the Kubebuilder CLI.

Update your controllers

! Controller-runtime version updated has breaking changes

Check sigs.k8s.io/controller-runtime release docs from 0.7.0+ version for breaking


changes.

Replace:

func (r *<MyKind>Reconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {


ctx := context.Background()
log := r.Log.WithValues("cronjob", req.NamespacedName)

With:

func (r *<MyKind>Reconciler) Reconcile(ctx context.Context, req ctrl.Request)


(ctrl.Result, error) {
log := r.Log.WithValues("cronjob", req.NamespacedName)

Update your controller and webhook test suite

! Ginkgo V2 version update has breaking changes

Check Ginkgo V2 Migration Guide for breaking changes.

Replace:

. "github.com/onsi/ginkgo"

With:

. "github.com/onsi/ginkgo/v2"

Also, adjust your test suite.

For Controller Suite:

RunSpecsWithDefaultAndCustomReporters(t,
"Controller Suite",
[]Reporter{printer.NewlineReporter{}})

With:

RunSpecs(t, "Controller Suite")

For Webhook Suite:

RunSpecsWithDefaultAndCustomReporters(t,
"Webhook Suite",
[]Reporter{printer.NewlineReporter{}})

With:
RunSpecs(t, "Webhook Suite")

Last but not least, remove the timeout variable from the BeforeSuite blocks:

Replace:

var _ = BeforeSuite(func(done Done) {


....
}, 60)

With

var _ = BeforeSuite(func(done Done) {


....
})

Change Logger to use flag options

In the main.go file replace:

flag.Parse()

ctrl.SetLogger(zap.New(zap.UseDevMode(true)))

With:

opts := zap.Options{
Development: true,
}
opts.BindFlags(flag.CommandLine)
flag.Parse()

ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

Rename the manager flags

The manager flags --metrics-addr and enable-leader-election were renamed to --


metrics-bind-address and --leader-elect to be more aligned with core Kubernetes
Components. More info: #1839.

In your main.go file replace:


func main() {
var metricsAddr string
var enableLeaderElection bool
flag.StringVar(&metricsAddr, "metrics-addr", ":8080", "The address the metric
endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "enable-leader-election", false,
"Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller
manager.")

With:

func main() {
var metricsAddr string
var enableLeaderElection bool
flag.StringVar(&metricsAddr, "metrics-bind-address", ":8080", "The address
the metric endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "leader-elect", false,
"Enable leader election for controller manager. "+
"Enabling this will ensure there is only one active controller
manager.")

And then, rename the flags in the config/default/manager_auth_proxy_patch.yaml and


config/default/manager.yaml :

- name: manager
args:
- "--health-probe-bind-address=:8081"
- "--metrics-bind-address=127.0.0.1:8080"
- "--leader-elect"

Verification

Finally, we can run make and make docker-build to ensure things are working fine.

Change your project to remove the Kubernetes


deprecated API versions usage

Before continuing

Make sure you understand Versions in CustomResourceDefinitions.

The following steps describe a workflow to upgrade your project to remove the deprecated
Kubernetes APIs: apiextensions.k8s.io/v1beta1 , admissionregistration.k8s.io/v1beta1 ,
cert-manager.io/v1alpha2 .
The Kubebuilder CLI tool does not support scaffolded resources for both Kubernetes API
versions such as; an API/CRD with apiextensions.k8s.io/v1beta1 and another one with
apiextensions.k8s.io/v1 .

Cert Manager API

If you scaffold a webhook using the Kubernetes API admissionregistration.k8s.io/v1


then, by default, it will use the API cert-manager.io/v1 in the manifests.

The first step is to update your PROJECT file by replacing the api.crdVersion:v1beta and
webhooks.WebhookVersion:v1beta with api.crdVersion:v1 and
webhooks.WebhookVersion:v1 which would look like:

domain: my.domain
layout: go.kubebuilder.io/v3
projectName: example
repo: example
resources:
- api:
crdVersion: v1
namespaced: true
group: webapp
kind: Guestbook
version: v1
webhooks:
defaulting: true
webhookVersion: v1
version: "3"

You can try to re-create the APIS(CRDs) and Webhooks manifests by using the --force flag.

! Before re-create

Note, however, that the tool will re-scaffold the files which means that you will lose their
content.

Before executing the commands ensure that you have the files content stored in another
place. An easy option is to use git to compare your local change with the previous
version to recover the contents.

Now, re-create the APIS(CRDs) and Webhooks manifests by running the kubebuilder create
api and kubebuilder create webhook for the same group, kind and versions with the flag --
force , respectively.
V3 - Plugins Layout Migration Guides
Following the migration guides from the plugins versions. Note that the plugins ecosystem
was introduced with Kubebuilder v3.0.0 release where the go/v3 version is the default layout
since 28 Apr 2021 .

Therefore, you can check here how to migrate the projects built from Kubebuilder 3.x with the
plugin go/v3 to the latest.

go/v3 vs go/v4
This document covers all breaking changes when migrating from projects built using the
plugin go/v3 (default for any scaffold done since 28 Apr 2021 ) to the next alpha version of
the Golang plugin go/v4 .

The details of all changes (breaking or otherwise) can be found in:

controller-runtime
controller-tools
kustomize
kb-releases release notes.

Common changes
go/v4 projects use Kustomize v5x (instead of v3x)
note that some manifests under config/ directory have been changed in order to no
longer use the deprecated Kustomize features such as env vars.
A kustomization.yaml is scaffolded under config/samples . This helps simply and
flexibly generate sample manifests: kustomize build config/samples .
adds support for Apple Silicon M1 (darwin/arm64)
remove support to CRD/WebHooks Kubernetes API v1beta1 version which are no longer
supported since k8s 1.22
no longer scaffold webhook test files with "k8s.io/api/admission/v1beta1" the k8s API
which is no longer served since k8s 1.25 . By default webhooks test files are scaffolding
using "k8s.io/api/admission/v1" which is support from k8s 1.20
no longer provide backwards compatible support with k8s versions < 1.16
change the layout to accommodate the community request to follow the Standard Go
Project Layout by moving the api(s) under a new directory called api , controller(s) under
a new directory called internal and the main.go under a new directory named cmd
TL;DR of the New `go/v4` Plugin

Further details can be found in the go/v4 plugin section

TL;DR of the New go/v4 Plugin


More details on this can be found at here, but for the highlights, check below

! Project customizations

After using the CLI to create your project, you are free to customize how you see fit. Bear
in mind, that it is not recommended to deviate from the proposed layout unless you
know what you are doing.

For example, you should refrain from moving the scaffolded files, doing so will make it
difficult in upgrading your project in the future. You may also lose the ability to use some
of the CLI features and helpers. For further information on the project layout, see the doc
[What’s in a basic project?][basic-project-doc]

Migrating to Kubebuilder go/v4


If you want to upgrade your scaffolding to use the latest and greatest features then, follow the
guide which will cover the steps in the most straightforward way to allow you to upgrade your
project to get all latest changes and improvements.

Migration Guide go/v3 to go/v4 (Recommended)

By updating the files manually

If you want to use the latest version of Kubebuilder CLI without changing your scaffolding
then, check the following guide which will describe the steps to be performed manually to
upgrade only your PROJECT version and start using the plugins versions.

This way is more complex, susceptible to errors, and success cannot be assured. Also, by
following these steps you will not get the improvements and bug fixes in the default
generated project files.

Migrating to go/v4 by updating the files manually


Migration from go/v3 to go/v4
Make sure you understand the differences between Kubebuilder go/v3 and go/v4 before
continuing.

Please ensure you have followed the installation guide to install the required components.

The recommended way to migrate a go/v3 project is to create a new go/v4 project and copy
over the API and the reconciliation code. The conversion will end up with a project that looks
like a native go/v4 project layout (latest version).

However, in some cases, it’s possible to do an in-place upgrade (i.e. reuse the go/v3 project
layout, upgrading the PROJECT file, and scaffolds manually). For further information see
Migration from go/v3 to go/v4 by updating the files manually

Initialize a go/v4 Project

Project name

For the rest of this document, we are going to use migration-project as the project
name and tutorial.kubebuilder.io as the domain. Please, select and use appropriate
values for your case.

Create a new directory with the name of your project. Note that this name is used in the
scaffolds to create the name of your manager Pod and of the Namespace where the Manager
is deployed by default.

$ mkdir migration-project-name
$ cd migration-project-name

Now, we need to initialize a go/v4 project. Before we do that, we’ll need to initialize a new go
module if we’re not on the GOPATH . While technically this is not needed inside GOPATH , it is
still recommended.

go mod init tutorial.kubebuilder.io/migration-project

The module of your project can found in the `go.mod` file at the root of your project:

module tutorial.kubebuilder.io/migration-project

Now, we can finish initializing the project with kubebuilder.


kubebuilder init --domain tutorial.kubebuilder.io --plugins=go/v4

The domain of your project can be found in the PROJECT file:

...
domain: tutorial.kubebuilder.io
...

Migrate APIs and Controllers


Next, we’ll re-scaffold out the API types and controllers.

Scaffolding both the API types and controllers

For this example, we are going to consider that we need to scaffold both the API types
and the controllers, but remember that this depends on how you scaffolded them in your
original project.

kubebuilder create api --group batch --version v1 --kind CronJob

Migrate the APIs

If you're using multiple groups

Please run kubebuilder edit --multigroup=true to enable multi-group support before


migrating the APIs and controllers. Please see this for more details.

Now, let’s copy the API definition from api/v1/<kind>_types.go in our old project to the new
one.

These files have not been modified by the new plugin, so you should be able to replace your
freshly scaffolded files by your old one. There may be some cosmetic changes. So you can
choose to only copy the types themselves.

Migrate the Controllers

Now, let’s migrate the controller code from controllers/cronjob_controller.go in our old
project to the new one.
Migrate the Webhooks

Skip

If you don’t have any webhooks, you can skip this section.

Now let’s scaffold the webhooks for our CRD (CronJob). We’ll need to run the following
command with the --defaulting and --programmatic-validation flags (since our test
project uses defaulting and validating webhooks):

kubebuilder create webhook --group batch --version v1 --kind CronJob --defaulting


--programmatic-validation

Now, let’s copy the webhook definition from api/v1/<kind>_webhook.go from our old project
to the new one.

Others
If there are any manual updates in main.go in v3, we need to port the changes to the new
main.go . We’ll also need to ensure all of needed controller-runtime schemes have been
registered.

If there are additional manifests added under config directory, port them as well. Please, be
aware that the new version go/v4 uses Kustomize v5x and no longer Kustomize v4. Therefore,
if added customized implementations in the config you need to ensure that them can work
with Kustomize v5 and/if not update/upgrade any breaking change that you might face.

In v4, installation of Kustomize has been changed from bash script to go get . Change the
kustomize dependency in Makefile to

.PHONY: kustomize
kustomize: $(KUSTOMIZE) ## Download kustomize locally if necessary. If wrong
version is installed, it will be removed before downloading.
$(KUSTOMIZE): $(LOCALBIN)
@if test -x $(LOCALBIN)/kustomize && ! $(LOCALBIN)/kustomize version | grep -
q $(KUSTOMIZE_VERSION); then \
echo "$(LOCALBIN)/kustomize version is not expected $(KUSTOMIZE_VERSION).
Removing it before installing."; \
rm -rf $(LOCALBIN)/kustomize; \
fi
test -s $(LOCALBIN)/kustomize || GOBIN=$(LOCALBIN) GO111MODULE=on go install
sigs.k8s.io/kustomize/kustomize/v5@$(KUSTOMIZE_VERSION)

Change the image name in the Makefile if needed.


Verification
Finally, we can run make and make docker-build to ensure things are working fine.

Migration from go/v3 to go/v4 by updating


the files manually
Make sure you understand the differences between Kubebuilder go/v3 and go/v4 before
continuing.

Please ensure you have followed the installation guide to install the required components.

The following guide describes the manual steps required to upgrade your PROJECT config file
to begin using go/v4 .

This way is more complex, susceptible to errors, and success cannot be assured. Also, by
following these steps you will not get the improvements and bug fixes in the default
generated project files.

Usually it is suggested to do it manually if you have customized your project and deviated too
much from the proposed scaffold. Before continuing, ensure that you understand the note
about [project customizations][project-customizations]. Note that you might need to spend
more effort to do this process manually than to organize your project customizations. The
proposed layout will keep your project maintainable and upgradable with less effort in the
future.

The recommended upgrade approach is to follow the Migration Guide go/v3 to go/v4 instead.

Migration from project config version “go/v3” to “go/v4”


Update the PROJECT file layout which stores information about the resources that are used to
enable plugins make useful decisions while scaffolding. The layout field indicates the
scaffolding and the primary plugin version in use.

Steps to migrate

Migrate the layout version into the PROJECT file

The following steps describe the manual changes required to bring the project configuration
file ( PROJECT ). These change will add the information that Kubebuilder would add when
generating the file. This file can be found in the root directory.
Update the PROJECT file by replacing:

layout:
- go.kubebuilder.io/v3

With:

layout:
- go.kubebuilder.io/v4

Changes to the layout

New layout:

The directory apis was renamed to api to follow the standard


The controller(s) directory has been moved under a new directory called internal
and renamed to singular as well controller
The main.go previously scaffolded in the root directory has been moved under a new
directory called cmd

Therefore, you can check the changes in the layout results into:

...
├── cmd
│ └── main.go
├── internal
│ └── controller
└── api

Migrating to the new layout:

Create a new directory cmd and move the main.go under it.
If your project support multi-group the APIs are scaffold under a directory called apis .
Rename this directory to api
Move the controllers directory under the internal and rename it for controller
Now ensure that the imports will be updated accordingly by:
Update the main.go imports to look for the new path of your controllers under the
pkg directory

Then, let’s update the scaffolds paths

Update the Dockerfile to ensure that you will have:

COPY cmd/main.go cmd/main.go


COPY api/ api/
COPY internal/controller/ internal/controller/

Then, replace:
RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o
manager main.go

With:

RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o


manager cmd/main.go

Update the Makefile targets to build and run the manager by replacing:

.PHONY: build
build: manifests generate fmt vet ## Build manager binary.
go build -o bin/manager main.go

.PHONY: run
run: manifests generate fmt vet ## Run a controller from your host.
go run ./main.go

With:

.PHONY: build
build: manifests generate fmt vet ## Build manager binary.
go build -o bin/manager cmd/main.go

.PHONY: run
run: manifests generate fmt vet ## Run a controller from your host.
go run ./cmd/main.go

Update the internal/controller/suite_test.go to set the path for the


CRDDirectoryPaths :

Replace:

CRDDirectoryPaths: []string{filepath.Join("..", "config", "crd", "bases")},

With:

CRDDirectoryPaths: []string{filepath.Join("..", "..", "config", "crd",


"bases")},

Note that if your project has multiple groups ( multigroup:true ) then the above update
should result into "..", "..", "..", instead of "..",".."

Now, let’s update the PATHs in the PROJECT file accordingly

The PROJECT tracks the paths of all APIs used in your project. Ensure that they now point to
api/... as the following example:

Before update:
group: crew
kind: Captain
path: sigs.k8s.io/kubebuilder/testdata/project-v4/apis/crew/v1

After Update:

group: crew
kind: Captain
path: sigs.k8s.io/kubebuilder/testdata/project-v4/api/crew/v1

Update kustomize manifests with the changes made so far

Update the manifest under config/ directory with all changes performed in the default
scaffold done with go/v4 plugin. (see for example testdata/project-v4/config/ ) to
get all changes in the default scaffolds to be applied on your project
Create config/samples/kustomization.yaml with all Custom Resources samples
specified into config/samples . (see for example testdata/project-
v4/config/samples/kustomization.yaml )

`config/` directory with changes into the scaffold files


Note that under the config/ directory you will find scaffolding changes since using go/v4
you will ensure that you are no longer using Kustomize v3x.

You can mainly compare the config/ directory from the samples scaffolded under the
testdata directory by checking the differences between the testdata/project-v3/config/
with testdata/project-v4/config/ which are samples created with the same commands
with the only difference being versions.

However, note that if you create your project with Kubebuilder CLI 3.0.0, its scaffolds might
change to accommodate changes up to the latest releases using go/v3 which are not
considered breaking for users and/or are forced by the changes introduced in the
dependencies used by the project such as controller-runtime and controller-tools.

If you have webhooks:

Replace the import admissionv1beta1 "k8s.io/api/admission/v1beta1" with admissionv1


"k8s.io/api/admission/v1" in the webhook test files

Makefile updates

Update the Makefile with the changes which can be found in the samples under testdata for
the release tag used. (see for example testdata/project-v4/Makefile )
Update the dependencies

Update the go.mod with the changes which can be found in the samples under testdata for
the release tag used. (see for example testdata/project-v4/go.mod ). Then, run go mod
tidy to ensure that you get the latest dependencies and your Golang code has no breaking
changes.

Verification

In the steps above, you updated your project manually with the goal of ensuring that it follows
the changes in the layout introduced with the go/v4 plugin that update the scaffolds.

There is no option to verify that you properly updated the PROJECT file of your project. The
best way to ensure that everything is updated correctly, would be to initialize a project using
the go/v4 plugin, (ie) using kubebuilder init --domain tutorial.kubebuilder.io
plugins=go/v4 and generating the same API(s), controller(s), and webhook(s) in order to
compare the generated configuration with the manually changed configuration.

Also, after all updates you would run the following commands:

make manifests (to re-generate the files using the latest version of the contrller-gen
after you update the Makefile)
make all (to ensure that you are able to build and perform all operations)

Single Group to Multi-Group


! Note

While Kubebuilder will not scaffold out a project structure compatible with multiple API
groups in the same repository by default, it’s possible to modify the default project
structure to support it.

Note that the process mainly is to ensure that your API(s) and controller(s) will be moved
under new directories with their respective group name.

Let’s migrate the CronJob example.

! Instructions vary per project layout

You can verify the version by looking at the PROJECT file. The currently default and
recommended version is go/v4.
The layout go/v3 is deprecated, if you are using go/v3 it is recommended that you
migrate to go/v4, however this documentation is still valid. Migration from go/v3 to go/v4.

To change the layout of your project to support Multi-Group run the command kubebuilder
edit --multigroup=true . Once you switch to a multi-group layout, the new Kinds will be
generated in the new layout but additional manual work is needed to move the old API groups
to the new layout.

Generally, we use the prefix for the API group as the directory name. We can check
api/v1/groupversion_info.go to find that out:

// +groupName=batch.tutorial.kubebuilder.io
package v1

Then, we’ll rename move our existing APIs into a new subdirectory, “batch”:

mkdir api/batch
mv api/* api/batch

After moving the APIs to a new directory, the same needs to be applied to the controllers. For
go/v4:

mkdir internal/controller/batch
mv internal/controller/* internal/controller/batch/

If you are using the deprecated layout go/v3


Then, your layout has not the internal directory. So, you will move the controller(s) under a
directory with the name of the API group which it is responsible for manage. ```bash mkdir
controller/batch mv controller/* controller/batch/ ```

Next, we’ll need to update all the references to the old package name so that they point to the new locations in the project structure. For CronJob, that means updating main.go and controllers/batch/cronjob_controller.go (or cmd/main.go and internal/controller/batch/cronjob_controller.go in the go/v4 layout).

If you’ve added additional files to your project, you’ll need to track down imports there as
well.
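For illustration, assuming the tutorial's module path shown in the PROJECT file below, an import of the batch API package changes from the old single-group path to the new group-scoped path. A sketch of the updated import block:

import (
    // before the move this was: batchv1 "tutorial.kubebuilder.io/project/api/v1"
    // after the move, the group name appears in the path:
    batchv1 "tutorial.kubebuilder.io/project/api/batch/v1"
)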

Finally, fix the PROJECT file manually: the command kubebuilder edit --multigroup=true sets our project to multi-group, but it doesn’t fix the path of the existing APIs. We need to modify the path of each resource.

For instance, for a file:


# Code generated by tool. DO NOT EDIT.
# This file is used to track the info used to scaffold your project
# and allow the plugins properly work.
# More info: https://book.kubebuilder.io/reference/project-config.html
domain: tutorial.kubebuilder.io
layout:
- go.kubebuilder.io/v4
multigroup: true
projectName: test
repo: tutorial.kubebuilder.io/project
resources:
- api:
    crdVersion: v1
    namespaced: true
  controller: true
  domain: tutorial.kubebuilder.io
  group: batch
  kind: CronJob
  path: tutorial.kubebuilder.io/project/api/v1beta1
  version: v1beta1
version: "3"

Replace path: tutorial.kubebuilder.io/project/api/v1beta1 with path: tutorial.kubebuilder.io/project/api/batch/v1beta1 .

In this process, if the project is not new and has previously implemented APIs, they would still need to be modified as needed. Notice that in a multi-group project the Kind API’s files are created under api/<group>/<version> instead of api/<version> . Also, note that the controllers will be created under internal/controller/<group> instead of internal/controller .

That is the reason why we moved the previously generated APIs to their respective
locations in the new structure. Remember to update the references in imports
accordingly.

For envtest to install CRDs correctly into the test environment, the relative path to the
CRD directory needs to be updated accordingly in each
internal/controller/<group>/suite_test.go file. We need to add additional ".." to
our CRD directory relative path as shown below.

By("bootstrapping test environment")


testEnv = &envtest.Environment{
CRDDirectoryPaths: []string{filepath.Join("..", "..", "config",
"crd", "bases")},
}

The CronJob tutorial explains each of these changes in more detail (in the context of how
they’re generated by Kubebuilder for single-group projects).

Reference

Generating CRDs

Using Finalizers: Finalizers are a mechanism to execute any custom logic related to a resource before it gets deleted from the Kubernetes cluster.

Watching Resources: Watch resources in the Kubernetes cluster to be informed of changes and take actions on them.

Resources Managed by the Operator

Externally Managed Resources: Controller Runtime provides the ability to watch additional resources relevant to the controlled ones.

Kind cluster

What’s a webhook? Webhooks are HTTP callbacks; there are 3 types of webhooks in k8s: 1) admission webhook 2) CRD conversion webhook 3) authorization webhook

Admission webhook: Admission webhooks are HTTP callbacks for mutating or validating resources before the API server admits them.

Markers for Config/Code Generation

CRD Generation
CRD Validation
Webhook
Object/DeepCopy
RBAC

controller-gen CLI

completion

Artifacts

Platform Support

Writing controller tests

Metrics

Reference

Makefile Helpers

CLI plugins

Generating CRDs

Kubebuilder uses a tool called controller-gen to generate utility code and Kubernetes
object YAML, like CustomResourceDefinitions.

To do this, it makes use of special “marker comments” (comments that start with // + )
to indicate additional information about fields, types, and packages. In the case of CRDs,
these are generally pulled from your _types.go files. For more information on markers,
see the marker reference docs.

Kubebuilder provides a make target to run controller-gen and generate CRDs: make
manifests .

When you run make manifests , you should see CRDs generated under the
config/crd/bases directory. make manifests can generate a number of other artifacts
as well -- see the marker reference docs for more details.

Validation
CRDs support declarative validation using an OpenAPI v3 schema in the validation
section.

In general, validation markers may be attached to fields or to types. If you’re defining complex validation, if you need to re-use validation, or if you need to validate slice elements, it’s often best to define a new type to describe your validation.

For example:

type ToySpec struct {
    // +kubebuilder:validation:MaxLength=15
    // +kubebuilder:validation:MinLength=1
    Name string `json:"name,omitempty"`

    // +kubebuilder:validation:MaxItems=500
    // +kubebuilder:validation:MinItems=1
    // +kubebuilder:validation:UniqueItems=true
    Knights []string `json:"knights,omitempty"`

    Alias Alias `json:"alias,omitempty"`
    Rank  Rank  `json:"rank"`
}

// +kubebuilder:validation:Enum=Lion;Wolf;Dragon
type Alias string

// +kubebuilder:validation:Minimum=1
// +kubebuilder:validation:Maximum=3
// +kubebuilder:validation:ExclusiveMaximum=false
type Rank int32
Additional Printer Columns
Starting with Kubernetes 1.11, kubectl get can ask the server what columns to display.
For CRDs, this can be used to provide useful, type-specific information with kubectl get ,
similar to the information provided for built-in types.

The information that gets displayed can be controlled with the additionalPrinterColumns
field on your CRD, which is controlled by the +kubebuilder:printcolumn marker on the
Go type for your CRD.

For instance, in the following example, we add fields to display information about the
knights, rank, and alias fields from the validation example:

// +kubebuilder:printcolumn:name="Alias",type=string,JSONPath=`.spec.alias`
// +kubebuilder:printcolumn:name="Rank",type=integer,JSONPath=`.spec.rank`
// +kubebuilder:printcolumn:name="Bravely Run
Away",type=boolean,JSONPath=`.spec.knights[?(@ == "Sir
Robin")]`,description="when danger rears its ugly head, he bravely turned his
tail and fled",priority=10
//
+kubebuilder:printcolumn:name="Age",type="date",JSONPath=".metadata.creationTim
type Toy struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`

Spec ToySpec `json:"spec,omitempty"`


Status ToyStatus `json:"status,omitempty"`
}

Subresources
CRDs can choose to implement the /status and /scale subresources as of Kubernetes
1.13.

It’s generally recommended that you make use of the /status subresource on all
resources that have a status field.

Both subresources have a corresponding marker.

Status

The status subresource is enabled via +kubebuilder:subresource:status . When enabled, updates to the main resource will not change status. Similarly, updates to the status subresource cannot change anything but the status field.
For example:

// +kubebuilder:subresource:status
type Toy struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   ToySpec   `json:"spec,omitempty"`
    Status ToyStatus `json:"status,omitempty"`
}

Scale

The scale subresource is enabled via +kubebuilder:subresource:scale . When enabled, users will be able to use kubectl scale with your resource. If the selectorpath argument points to the string form of a label selector, the HorizontalPodAutoscaler will be able to autoscale your resource.

For example:

type CustomSetSpec struct {
    Replicas *int32 `json:"replicas"`
}

type CustomSetStatus struct {
    Replicas int32  `json:"replicas"`
    Selector string `json:"selector"` // this must be the string form of the selector
}

// +kubebuilder:subresource:status
// +kubebuilder:subresource:scale:specpath=.spec.replicas,statuspath=.status.replicas,selectorpath=.status.selector
type CustomSet struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   CustomSetSpec   `json:"spec,omitempty"`
    Status CustomSetStatus `json:"status,omitempty"`
}

Multiple Versions
As of Kubernetes 1.13, you can have multiple versions of your Kind defined in your CRD,
and use a webhook to convert between them.

For more details on this process, see the multiversion tutorial.


By default, Kubebuilder disables generating different validation for different versions of
the Kind in your CRD, to be compatible with older Kubernetes versions.

You’ll need to enable this by switching the line in your Makefile that says CRD_OPTIONS ?= "crd:trivialVersions=true,preserveUnknownFields=false" to CRD_OPTIONS ?= "crd:preserveUnknownFields=false" if using v1beta CRDs, and CRD_OPTIONS ?= crd if using v1 (recommended).

Then, you can use the +kubebuilder:storageversion marker to indicate the GVK that
should be used to store data by the API server.
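As a minimal sketch (reusing the CronJob type from the tutorial), the marker is placed on the Go type of the version you want the API server to persist:

// +kubebuilder:object:root=true
// +kubebuilder:storageversion
type CronJob struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   CronJobSpec   `json:"spec,omitempty"`
    Status CronJobStatus `json:"status,omitempty"`
}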

Supporting older cluster versions

By default, kubebuilder create api will create CRDs of API version v1 , a version
introduced in Kubernetes v1.16. If your project intends to support Kubernetes cluster
versions older than v1.16, you must use the v1beta1 API version:

kubebuilder create api --crd-version v1beta1 ...

To support Kubernetes clusters of version v1.14 or lower, you’ll also need to remove the controller-gen option preserveUnknownFields=false from your Makefile. This is done by switching the line that says CRD_OPTIONS ?= "crd:trivialVersions=true,preserveUnknownFields=false" to CRD_OPTIONS ?= "crd:trivialVersions=true" .

v1beta1 is deprecated and was removed in Kubernetes v1.22, so upgrading is essential.

Under the hood

Kubebuilder scaffolds out make rules to run controller-gen . The rules will automatically install controller-gen if it’s not on your path, using go install with Go modules.

You can also run controller-gen directly, if you want to see what it’s doing.

Each controller-gen “generator” is controlled by an option to controller-gen, using the same syntax as markers. controller-gen also supports different output “rules” to control how and where output goes. Notice the manifests make rule (condensed slightly to only generate CRDs):

# Generate manifests for CRDs
manifests: controller-gen
    $(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..." output:crd:artifacts:config=config/crd/bases

It uses the output:crd:artifacts output rule to indicate that CRD-related config (non-
code) artifacts should end up in config/crd/bases instead of config/crd .

To see all the options including generators for controller-gen , run

$ controller-gen -h

or, for more details:

$ controller-gen -hhh

Using Finalizers

Finalizers allow controllers to implement asynchronous pre-delete hooks. Let’s say you create an external resource (such as a storage bucket) for each object of your API type, and you want to delete the associated external resource when the object is deleted from Kubernetes; you can use a finalizer to do that.

You can read more about the finalizers in the Kubernetes reference docs. The section
below demonstrates how to register and trigger pre-delete hooks in the Reconcile
method of a controller.

The key point to note is that a finalizer causes “delete” on the object to become an
“update” to set deletion timestamp. Presence of deletion timestamp on the object
indicates that it is being deleted. Otherwise, without finalizers, a delete shows up as a
reconcile where the object is missing from the cache.

Highlights:

If the object is not being deleted and does not have the finalizer registered, then
add the finalizer and update the object in Kubernetes.
If the object is being deleted and the finalizer is still present in the finalizers list, then execute the pre-delete logic, remove the finalizer, and update the object.
Ensure that the pre-delete logic is idempotent.

$ vim ../../cronjob-tutorial/testdata/finalizer_example.go

// Apache License (hidden) ◀

// Imports (hidden) ◀

By default, kubebuilder will include the RBAC rules necessary to update finalizers for
CronJobs.
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs,ver
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/sta
//+kubebuilder:rbac:groups=batch.tutorial.kubebuilder.io,resources=cronjobs/fin

The code snippet below shows skeleton code for implementing a finalizer.
func (r *CronJobReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    log := r.Log.WithValues("cronjob", req.NamespacedName)

    cronJob := &batchv1.CronJob{}
    if err := r.Get(ctx, req.NamespacedName, cronJob); err != nil {
        log.Error(err, "unable to fetch CronJob")
        // we'll ignore not-found errors, since they can't be fixed by an immediate
        // requeue (we'll need to wait for a new notification), and we can get them
        // on deleted requests.
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // name of our custom finalizer
    myFinalizerName := "batch.tutorial.kubebuilder.io/finalizer"

    // examine DeletionTimestamp to determine if object is under deletion
    if cronJob.ObjectMeta.DeletionTimestamp.IsZero() {
        // The object is not being deleted, so if it does not have our finalizer,
        // then lets add the finalizer and update the object. This is equivalent
        // to registering our finalizer.
        if !controllerutil.ContainsFinalizer(cronJob, myFinalizerName) {
            controllerutil.AddFinalizer(cronJob, myFinalizerName)
            if err := r.Update(ctx, cronJob); err != nil {
                return ctrl.Result{}, err
            }
        }
    } else {
        // The object is being deleted
        if controllerutil.ContainsFinalizer(cronJob, myFinalizerName) {
            // our finalizer is present, so lets handle any external dependency
            if err := r.deleteExternalResources(cronJob); err != nil {
                // if fail to delete the external dependency here, return with error
                // so that it can be retried
                return ctrl.Result{}, err
            }

            // remove our finalizer from the list and update it.
            controllerutil.RemoveFinalizer(cronJob, myFinalizerName)
            if err := r.Update(ctx, cronJob); err != nil {
                return ctrl.Result{}, err
            }
        }

        // Stop reconciliation as the item is being deleted
        return ctrl.Result{}, nil
    }

    // Your reconcile logic

    return ctrl.Result{}, nil
}

func (r *Reconciler) deleteExternalResources(cronJob *batch.CronJob) error {
    //
    // delete any external resources associated with the cronJob
    //
    // Ensure that delete implementation is idempotent and safe to invoke
    // multiple times for same object.
    //
    return nil
}

Creating Events

It is often useful to publish Event objects from the controller Reconcile function as they
allow users or any automated processes to see what is going on with a particular object
and respond to them.

Recent Events for an object can be viewed by running $ kubectl describe <resource
kind> <resource name> . Also, they can be checked by running $ kubectl get events .

Events should be raised in certain circumstances only

Be aware that it is not recommended to emit Events for all operations. If authors raise too many events, it results in a poor user experience for those consuming the solutions on the cluster, and they may find it difficult to filter an actionable event from the clutter. For more information, please take a look at the Kubernetes API conventions.

Writing Events
Anatomy of an Event:

Event(object runtime.Object, eventtype, reason, message string)

object is the object this event is about.
eventtype is this event type, and is either Normal or Warning. (More info)
reason is the reason this event is generated. It should be short and unique with UpperCamelCase format. The value could appear in switch statements by automation. (More info)
message is intended to be consumed by humans. (More info)

Example Usage

Following is an example of a code implementation that raises an Event.


// The following implementation will raise an event
r.Recorder.Event(cr, "Warning", "Deleting",
    fmt.Sprintf("Custom Resource %s is being deleted from the namespace %s",
        cr.Name,
        cr.Namespace))
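For completeness, the recorder also provides Eventf for formatted messages; a small sketch raising a Normal event (the reason and message below are illustrative):

// Eventf formats the message in place of an explicit fmt.Sprintf call.
r.Recorder.Eventf(cr, "Normal", "Created",
    "Custom Resource %s created in the namespace %s", cr.Name, cr.Namespace)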

How to raise Events

The following are the steps, with examples, to help you raise events in your controller’s reconciliations. Events are published from a Controller using an EventRecorder ( record.EventRecorder ), which can be created for a Controller by calling GetEventRecorderFor(name string) on a Manager. Note that we will change the implementation scaffolded in cmd/main.go :

if err = (&controller.MyKindReconciler{
Client: mgr.GetClient(),
Scheme: mgr.GetScheme(),
// Note that we added the following line:
Recorder: mgr.GetEventRecorderFor("mykind-controller"),
}).SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller",
"MyKind")
os.Exit(1)
}

Allowing usage of EventRecorder on the Controller

To raise an event, you must have access to a record.EventRecorder in the Controller. Therefore, first let’s update the controller implementation:

import (
    ...
    "k8s.io/client-go/tools/record"
    ...
)

// MyKindReconciler reconciles a MyKind object
type MyKindReconciler struct {
    client.Client
    Scheme *runtime.Scheme
    // See that we added the following field to allow us to pass in the record.EventRecorder
    Recorder record.EventRecorder
}

Granting the required permissions

You must also grant the RBAC permissions required for your project to create Events. Therefore, ensure that you add the RBAC marker to your controller:

...
//+kubebuilder:rbac:groups=core,resources=events,verbs=create;patch
...
func (r *MyKindReconciler) Reconcile(ctx context.Context, req ctrl.Request)
(ctrl.Result, error) {

And then, run $ make manifests to update the rules under config/rbac/role.yaml .

Watching Resources

Inside a Reconcile() control loop, you perform a collection of operations until the cluster reaches the desired state. Therefore, it can be necessary to know when a resource that you care about is changed. When there is an action (create, update, edit, delete, etc.) on a watched resource, Reconcile() should be called for the resources watching it.
Controller Runtime libraries provide many ways for resources to be managed and
watched. This ranges from the easy and obvious use cases, such as watching the
resources which were created and managed by the controller, to more unique and
advanced use cases.

See each subsection for explanations and examples of the different ways in which your
controller can Watch the resources it cares about.

Watching Operator Managed Resources - These resources are created and managed by the same operator as the resource watching them. This section covers cases where they are managed by the same controller as well as by separate controllers.
Watching Externally Managed Resources - These resources could be manually created, or managed by other operators/controllers or the Kubernetes control plane.

Watching Operator Managed Resources

Kubebuilder and the Controller Runtime libraries allow for controllers to implement the
logic of their CRD through easy management of Kubernetes resources.

Controlled & Owned Resources


Managing dependency resources is fundamental to a controller, and it’s not possible to
manage them without watching for changes to their state.

Deployments must know when the ReplicaSets that they manage are changed
ReplicaSets must know when their Pods are deleted, or change from healthy to
unhealthy.

Through the Owns() functionality, Controller Runtime provides an easy way to watch
dependency resources for changes.

As an example, we are going to create a SimpleDeployment resource. The SimpleDeployment ‘s purpose is to manage a Deployment that users can change certain aspects of, through the SimpleDeployment Spec. The SimpleDeployment controller’s purpose is to make sure that its owned Deployment always uses the settings provided by the user.

Provide basic templating in the Spec

$ vim owned-resource/api.go

// Apache License (hidden) ◀

// Imports (hidden) ◀
In this example the controller is doing basic management of a Deployment object.

The Spec here allows the user to customize the deployment created in various ways. For
example, the number of replicas it runs with.

// SimpleDeploymentSpec defines the desired state of SimpleDeployment
type SimpleDeploymentSpec struct {
    // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
    // Important: Run "make" to regenerate code after modifying this file

    // The number of replicas that the deployment should have
    // +optional
    Replicas *int32 `json:"replicas,omitempty"`
}

The rest of the API configuration is covered in the CronJob tutorial.

// Remaining API Code (hidden) ◀

Manage the Owned Resource

$ vim owned-resource/controller.go

// Apache License (hidden) ◀

Along with the standard imports, we need additional controller-runtime and apimachinery libraries. The extra imports are necessary for managing the objects that are “Owned” by the controller.

package owned_resource

import (
"context"

"github.com/go-logr/logr"
kapps "k8s.io/api/apps/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller/controllerutil"

appsv1 "tutorial.kubebuilder.io/project/api/v1"
)

// Reconciler Declaration (hidden) ◀

In addition to the SimpleDeployment permissions, we will also need permissions to manage Deployments . In order to fully manage the workflow of deployments, our app will need to be able to use all verbs on a deployment as well as “get” its status.
//+kubebuilder:rbac:groups=apps.tutorial.kubebuilder.io,resources=simpledeploym
//+kubebuilder:rbac:groups=apps.tutorial.kubebuilder.io,resources=simpledeploym
//+kubebuilder:rbac:groups=apps.tutorial.kubebuilder.io,resources=simpledeploym
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;crea
//+kubebuilder:rbac:groups=apps,resources=deployments/status,verbs=get

Reconcile will be in charge of reconciling the state of SimpleDeployments .

In this basic example, SimpleDeployments are used to create and manage simple
Deployments that can be configured through the SimpleDeployment Spec.

// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
func (r *SimpleDeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {

// Begin the Reconcile (hidden) ◀

Build the deployment that we want to see exist within the cluster:

deployment := &kapps.Deployment{}

// Set the information you care about
deployment.Spec.Replicas = simpleDeployment.Spec.Replicas

Set the controller reference, specifying that this Deployment is controlled by the
SimpleDeployment being reconciled.

This will allow for the SimpleDeployment to be reconciled when changes to the
Deployment are noticed.

if err := controllerutil.SetControllerReference(simpleDeployment, deployment, r.scheme); err != nil {
    return ctrl.Result{}, err
}

Manage your Deployment :

Create it if it doesn’t exist.
Update it if it is configured incorrectly.

foundDeployment := &kapps.Deployment{}
err := r.Get(ctx, types.NamespacedName{Name: deployment.Name, Namespace: deployment.Namespace}, foundDeployment)
if err != nil && errors.IsNotFound(err) {
    log.V(1).Info("Creating Deployment", "deployment", deployment.Name)
    err = r.Create(ctx, deployment)
} else if err == nil {
    if foundDeployment.Spec.Replicas != deployment.Spec.Replicas {
        foundDeployment.Spec.Replicas = deployment.Spec.Replicas
        log.V(1).Info("Updating Deployment", "deployment", deployment.Name)
        err = r.Update(ctx, foundDeployment)
    }
}

return ctrl.Result{}, err
}

Finally, we add this reconciler to the manager, so that it gets started when the manager is
started.

Since we create dependency Deployments during the reconcile, we can specify that the
controller Owns Deployments . This will tell the manager that if a Deployment , or its
status, is updated, then the SimpleDeployment in its ownerRef field should be reconciled.

// SetupWithManager sets up the controller with the Manager.
func (r *SimpleDeploymentReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&appsv1.SimpleDeployment{}).
        Owns(&kapps.Deployment{}).
        Complete(r)
}

Watching Externally Managed Resources

By default, Kubebuilder and the Controller Runtime libraries allow for controllers to easily
watch the resources that they manage as well as dependent resources that are Owned by
the controller. However, those are not always the only resources that need to be watched
in the cluster.

User Specified Resources

There are many examples of Resource Specs that allow users to reference external resources:

Ingresses have references to Service objects
Pods have references to ConfigMaps, Secrets and Volumes
Deployments and Services have references to Pods

This same functionality can be added to CRDs and custom controllers. This allows resources to be reconciled when another resource they reference is changed.

As an example, we are going to create a ConfigDeployment resource. The ConfigDeployment ‘s purpose is to manage a Deployment whose pods always use the latest version of a ConfigMap . While ConfigMaps are auto-updated within Pods, applications may not always be able to auto-refresh their config from the file system; some applications require restarts to apply configuration updates.

The ConfigDeployment CRD will hold a reference to a ConfigMap inside its Spec.
The ConfigDeployment controller will be in charge of creating a Deployment with Pods that use the ConfigMap. These Pods should be updated anytime the referenced ConfigMap changes; therefore the ConfigDeployments will need to be reconciled on changes to the referenced ConfigMap.

Allow for linking of resources in the Spec

$ vim external-indexed-field/api.go

// Apache License (hidden) ◀

// Imports (hidden) ◀

In our type’s Spec, we want to allow the user to pass in a reference to a configMap in the
same namespace. It’s also possible for this to be a namespaced reference, but in this
example we will assume that the referenced object lives in the same namespace.

This field does not need to be optional. If the field is required, the indexing code in the
controller will need to be modified.

// ConfigDeploymentSpec defines the desired state of ConfigDeployment
type ConfigDeploymentSpec struct {
    // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
    // Important: Run "make" to regenerate code after modifying this file

    // Name of an existing ConfigMap in the same namespace, to add to the deployment
    // +optional
    ConfigMap string `json:"configMap,omitempty"`
}

The rest of the API configuration is covered in the CronJob tutorial.

// Remaining API Code (hidden) ◀

Watch linked resources

$ vim external-indexed-field/controller.go
// Apache License (hidden) ◀

Along with the standard imports, we need additional controller-runtime and apimachinery libraries. All additional libraries necessary for Watching have the comment Required For Watching appended.

package external_indexed_field

import (
"context"

"github.com/go-logr/logr"
kapps "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/fields" // Required for Watching
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types" // Required for Watching
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/builder" // Required for Watching
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/handler" // Required for Watching
"sigs.k8s.io/controller-runtime/pkg/predicate" // Required for Watching
"sigs.k8s.io/controller-runtime/pkg/reconcile" // Required for Watching
"sigs.k8s.io/controller-runtime/pkg/source" // Required for Watching

appsv1 "tutorial.kubebuilder.io/project/api/v1"
)

Determine the path of the field in the ConfigDeployment CRD that we wish to use as the
“object reference”. This will be used in both the indexing and watching.

const (
configMapField = ".spec.configMap"
)

// Reconciler Declaration (hidden) ◀

There are two additional resources that the controller needs to have access to, other than ConfigDeployments:

It needs to be able to fully manage Deployments, as well as check their status.
It also needs to be able to get, list and watch ConfigMaps.

All 3 of these are important, and you will see usages of each below.

//+kubebuilder:rbac:groups=apps.tutorial.kubebuilder.io,resources=configdeploym
//+kubebuilder:rbac:groups=apps.tutorial.kubebuilder.io,resources=configdeploym
//+kubebuilder:rbac:groups=apps.tutorial.kubebuilder.io,resources=configdeploym
//+kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;crea
//+kubebuilder:rbac:groups=apps,resources=deployments/status,verbs=get
//+kubebuilder:rbac:groups="",resources=configmaps,verbs=get;list;watch
Reconcile will be in charge of reconciling the state of ConfigDeployments.
ConfigDeployments are used to manage Deployments whose pods are updated
whenever the configMap that they use is updated.

For that reason we need to add an annotation to the PodTemplate within the
Deployment we create. This annotation will keep track of the latest version of the data
within the referenced ConfigMap. Therefore when the version of the configMap is
changed, the PodTemplate in the Deployment will change. This will cause a rolling
upgrade of all Pods managed by the Deployment.

Skip down to the SetupWithManager function to see how we ensure that Reconcile is
called when the referenced ConfigMaps are updated.

// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
func (r *ConfigDeploymentReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {

// Begin the Reconcile (hidden) ◀

// your logic here

var configMapVersion string
if configDeployment.Spec.ConfigMap != "" {
    configMapName := configDeployment.Spec.ConfigMap
    foundConfigMap := &corev1.ConfigMap{}
    err := r.Get(ctx, types.NamespacedName{Name: configMapName, Namespace: configDeployment.Namespace}, foundConfigMap)
    if err != nil {
        // If a configMap name is provided, then it must exist
        // You will likely want to create an Event for the user to understand why their reconcile is failing.
        return ctrl.Result{}, err
    }

    // Hash the data in some way, or just use the version of the Object
    configMapVersion = foundConfigMap.ResourceVersion
}

// Logic here to add the configMapVersion as an annotation on your Deployment Pods.

return ctrl.Result{}, nil
}
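The annotation step above is intentionally left open. As a rough sketch (the helper name and annotation key below are illustrative, not part of the scaffolded code), stamping the ConfigMap's ResourceVersion onto the pod template is enough to trigger a rolling update when the ConfigMap changes:

// setConfigVersionAnnotation is a hypothetical helper: writing the referenced
// ConfigMap's ResourceVersion into the Deployment's pod template annotations
// changes the pod template whenever the ConfigMap changes, which in turn
// triggers a rolling update of the Deployment's Pods.
func setConfigVersionAnnotation(deployment *kapps.Deployment, configMapVersion string) {
    if deployment.Spec.Template.Annotations == nil {
        deployment.Spec.Template.Annotations = map[string]string{}
    }
    // The annotation key is illustrative; any stable, project-specific key works.
    deployment.Spec.Template.Annotations["configmap-resource-version"] = configMapVersion
}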

Finally, we add this reconciler to the manager, so that it gets started when the manager is
started.

Since we create dependency Deployments during the reconcile, we can specify that the
controller Owns Deployments.

However, the ConfigMaps that we want to watch are not owned by the ConfigDeployment object. Therefore, we must specify a custom way of watching those objects. This watch logic is complex, so we have split it out into a separate method.

// SetupWithManager sets up the controller with the Manager.
func (r *ConfigDeploymentReconciler) SetupWithManager(mgr ctrl.Manager) error {
The configMap field must be indexed by the manager, so that we will be able to look up ConfigDeployments by a referenced ConfigMap name. This will allow us to quickly answer the question:

If ConfigMap x is updated, which ConfigDeployments are affected?

if err := mgr.GetFieldIndexer().IndexField(context.Background(), &appsv1.ConfigDeployment{}, configMapField, func(rawObj client.Object) []string {
    // Extract the ConfigMap name from the ConfigDeployment Spec, if one is provided
    configDeployment := rawObj.(*appsv1.ConfigDeployment)
    if configDeployment.Spec.ConfigMap == "" {
        return nil
    }
    return []string{configDeployment.Spec.ConfigMap}
}); err != nil {
    return err
}

As explained in the CronJob tutorial, the controller will first register the Type that it
manages, as well as the types of subresources that it controls. Since we also want to
watch ConfigMaps that are not controlled or managed by the controller, we will need to
use the Watches() functionality as well.

The Watches() function is a controller-runtime API that takes:

A Kind (i.e. ConfigMap )
A mapping function that converts a ConfigMap object to a list of reconcile requests for ConfigDeployments . We have separated this out into a separate function.
A list of options for watching the ConfigMaps . In our case, we only want the watch to be triggered when the ResourceVersion of the ConfigMap is changed.
return ctrl.NewControllerManagedBy(mgr).
    For(&appsv1.ConfigDeployment{}).
    Owns(&kapps.Deployment{}).
    Watches(
        &source.Kind{Type: &corev1.ConfigMap{}},
        handler.EnqueueRequestsFromMapFunc(r.findObjectsForConfigMap),
        builder.WithPredicates(predicate.ResourceVersionChangedPredicate{}),
    ).
    Complete(r)
}

Because we have already created an index on the configMap reference field, this mapping function is quite straightforward. We first need to list out all ConfigDeployments that use the ConfigMap given to the mapping function. This is done by merely submitting a List request using our indexed field as the field selector.

When the list of ConfigDeployments that reference the ConfigMap is found, we just need
to loop through the list and create a reconcile request for each one. If an error occurs
fetching the list, or no ConfigDeployments are found, then no reconcile requests will be
returned.

func (r *ConfigDeploymentReconciler) findObjectsForConfigMap(configMap client.Object) []reconcile.Request {
    attachedConfigDeployments := &appsv1.ConfigDeploymentList{}
    listOps := &client.ListOptions{
        FieldSelector: fields.OneTermEqualSelector(configMapField, configMap.GetName()),
        Namespace:     configMap.GetNamespace(),
    }
    err := r.List(context.TODO(), attachedConfigDeployments, listOps)
    if err != nil {
        return []reconcile.Request{}
    }

    requests := make([]reconcile.Request, len(attachedConfigDeployments.Items))
    for i, item := range attachedConfigDeployments.Items {
        requests[i] = reconcile.Request{
            NamespacedName: types.NamespacedName{
                Name:      item.GetName(),
                Namespace: item.GetNamespace(),
            },
        }
    }
    return requests
}

Kind Cluster

This only covers the basics of using a kind cluster. You can find more details in the kind documentation.

Installation
You can follow this guide to install kind .

Create a Cluster
You can simply create a kind cluster with:

kind create cluster

To customize your cluster, you can provide additional configuration. For example, the
following is a sample kind configuration.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker

Using the configuration above, running the following command will give you a k8s v1.17.2 cluster with 1 control-plane node and 3 worker nodes.

kind create cluster --config hack/kind-config.yaml --image=kindest/node:v1.17.2

You can use the --image flag to specify the cluster version you want, e.g. --image=kindest/node:v1.17.2 ; the supported versions are listed here.

Load Docker Image into the Cluster


When developing with a local kind cluster, loading docker images to the cluster is a very
useful feature. You can avoid using a container registry.

kind load docker-image your-image-name:your-tag

See Load a local image into a kind cluster for more information.
Delete a Cluster
kind delete cluster

Webhook

Webhooks are requests for information sent in a blocking fashion. A web application implementing webhooks will send an HTTP request to another application when a certain event happens.

In the kubernetes world, there are 3 kinds of webhooks: admission webhook, authorization webhook and CRD conversion webhook.

In controller-runtime libraries, we support admission webhooks and CRD conversion webhooks.

Kubernetes supports these dynamic admission webhooks as of version 1.9 (when the
feature entered beta).

Kubernetes supports the conversion webhooks as of version 1.15 (when the feature
entered beta).

Supporting older cluster versions

By default, kubebuilder create webhook will create webhook configs of API version v1 ,
a version introduced in Kubernetes v1.16. If your project intends to support Kubernetes
cluster versions older than v1.16, you must use the v1beta1 API version:

kubebuilder create webhook --webhook-version v1beta1 ...

v1beta1 is deprecated and will be removed in a future Kubernetes release, so upgrading is recommended.

Admission Webhooks

Admission webhooks are HTTP callbacks that receive admission requests, process them
and return admission responses.

Kubernetes provides the following types of admission webhooks:

Mutating Admission Webhook: These can mutate the object while it’s being created or updated, before it gets stored. They can be used to default fields in resource requests, e.g. fields in a Deployment that are not specified by the user, or to inject sidecar containers.
Validating Admission Webhook: These can validate the object while it’s being created or updated, before it gets stored. They allow more complex validation than pure schema-based validation, e.g. cross-field validation and pod image whitelisting.

The apiserver by default doesn’t authenticate itself to the webhooks. However, if you
want to authenticate the clients, you can configure the apiserver to use basic auth, bearer
token, or a cert to authenticate itself to the webhooks. You can find detailed steps here.

Admission Webhook for Core Types

It is very easy to build admission webhooks for CRDs, which has been covered in the
CronJob tutorial. Given that kubebuilder doesn’t support webhook scaffolding for core
types, you have to use the library from controller-runtime to handle it. There is an
example in controller-runtime.

It is suggested to use kubebuilder to initialize a project, and then you can follow the steps
below to add admission webhooks for core types.

Implement Your Handler

You need to have your handler implement the admission.Handler interface.

type podAnnotator struct {
    Client  client.Client
    decoder *admission.Decoder
}

func (a *podAnnotator) Handle(ctx context.Context, req admission.Request) admission.Response {
    pod := &corev1.Pod{}
    err := a.decoder.Decode(req, pod)
    if err != nil {
        return admission.Errored(http.StatusBadRequest, err)
    }

    // mutate the fields in pod

    marshaledPod, err := json.Marshal(pod)
    if err != nil {
        return admission.Errored(http.StatusInternalServerError, err)
    }
    return admission.PatchResponseFromRaw(req.Object.Raw, marshaledPod)
}

If you need a client, just pass in the client at struct construction time.

If you add the InjectDecoder method for your handler, a decoder will be injected for
you.
func (a *podAnnotator) InjectDecoder(d *admission.Decoder) error {
    a.decoder = d
    return nil
}

Note: in order to have controller-gen generate the webhook configuration for you, you
need to add markers. For example, // +kubebuilder:webhook:path=/mutate-v1-
pod,mutating=true,failurePolicy=fail,groups="",resources=pods,verbs=create;update,

Update main.go
Now you need to register your handler in the webhook server.

mgr.GetWebhookServer().Register("/mutate-v1-pod", &webhook.Admission{Handler:
&podAnnotator{Client: mgr.GetClient()}})

You need to ensure the path here matches the path in the marker.

Deploy
Deploying it is just like deploying a webhook server for a CRD. You need to:

1. provision the serving certificate
2. deploy the server

You can follow the tutorial.

Markers for Config/Code Generation

Kubebuilder makes use of a tool called controller-gen for generating utility code and
Kubernetes YAML. This code and config generation is controlled by the presence of
special “marker comments” in Go code.

Markers are single-line comments that start with a plus, followed by a marker name,
optionally followed by some marker specific configuration:

// +kubebuilder:validation:Optional
// +kubebuilder:validation:MaxItems=2
// +kubebuilder:printcolumn:JSONPath=".status.replicas",name=Replicas,type=string

difference between // +optional and // +kubebuilder:validation:Optional

Controller-gen supports both (see the output of controller-gen crd -www ).

Both +kubebuilder:validation:Optional and +optional can be applied to fields, but +kubebuilder:validation:Optional can also be applied at the package level such that it applies to every field in the package.

If you’re using controller-gen only, they’re redundant. But if you’re using other generators, or developers need to build their own clients for your API, you’ll want to also include +optional .

The most reliable way in 1.x to get +optional is omitempty .
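As a small illustrative sketch (the field and type names below are hypothetical), a field carrying both forms of the optional marker plus omitempty would look like:

type ExampleSpec struct {
    // +optional
    // +kubebuilder:validation:Optional
    Replicas *int32 `json:"replicas,omitempty"`
}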

See each subsection for information about different types of code and YAML generation.

Generating Code & Artifacts in Kubebuilder


Kubebuilder projects have two make targets that make use of controller-gen:

make manifests generates Kubernetes object YAML, like CustomResourceDefinitions, WebhookConfigurations, and RBAC roles.
make generate generates code, like runtime.Object/DeepCopy implementations.

See Generating CRDs for a comprehensive overview.

Marker Syntax
Exact syntax is described in the godocs for controller-tools.

In general, markers may either be:

Empty ( +kubebuilder:validation:Optional ): empty markers are like boolean flags on the command line -- just specifying them enables some behavior.

Anonymous ( +kubebuilder:validation:MaxItems=2 ): anonymous markers take a single value as their argument.

Multi-option ( +kubebuilder:printcolumn:JSONPath=".status.replicas",name=Replicas,type=string ): multi-option markers take one or more named arguments. The first argument is separated from the name by a colon, and latter arguments are comma-separated. Order of arguments doesn’t matter. Some arguments may be optional.

Marker arguments may be strings, ints, bools, slices, or maps thereof. Strings, ints, and
bools follow their Go syntax:

// +kubebuilder:validation:ExclusiveMaximum=false
// +kubebuilder:validation:Format="date-time"
// +kubebuilder:validation:Maximum=42

For convenience, in simple cases the quotes may be omitted from strings, although this is
not encouraged for anything other than single-word strings:

// +kubebuilder:validation:Type=string

Slices may be specified either by surrounding them with curly braces and separating with commas:

// +kubebuilder:webhooks:Enum={"crackers, Gromit, we forgot the crackers!","not even wensleydale?"}

or, in simple cases, by separating with semicolons:

// +kubebuilder:validation:Enum=Wallace;Gromit;Chicken

Maps are specified with string keys and values of any type (effectively
map[string]interface{} ). A map is surrounded by curly braces ( {} ), each key and value
is separated by a colon ( : ), and each key-value pair is separated by a comma:

// +kubebuilder:default={magic: {numero: 42, stringified: forty-two}}

CRD Generation

These markers describe how to construct a custom resource definition from a series of
Go types and packages. Generation of the actual validation schema is described by the
validation markers.

See Generating CRDs for examples.


// +kubebuilder:deprecatedversion:warning=‹string› on type
marks this version as deprecated.
// +kubebuilder:metadata:annotations=‹[]string›,labels=‹[]string› on type
configures the additional annotations or labels for this CRD. For example adding
annotation "api-approved.kubernetes.io" for a CRD with Kubernetes groups, or
annotation "cert-manager.io/inject-ca-from-secret" for a CRD that needs CA
injection.
// +kubebuilder:printcolumn
:JSONPath=‹string›,description=‹string›,format=‹string›,name=‹string›,priority=‹int›,type=
‹string›
on type
adds a column to "kubectl get" output for this CRD.
// +kubebuilder:resource
:categories=‹[]string›,path=‹string›,scope=‹string›,shortName=‹[]string›,singular=‹string›
on type
configures naming and scope for a CRD.
// +kubebuilder:skipversion on type
removes the particular version of the CRD from the CRDs spec.
// +kubebuilder:storageversion on type
marks this version as the "storage version" for the CRD for conversion.
// +kubebuilder:subresource:scale
:selectorpath=‹string›,specpath=‹string›,statuspath=‹string› on type
enables the "/scale" subresource on a CRD.
// +kubebuilder:subresource:status on type
enables the "/status" subresource on a CRD.
// +kubebuilder:unservedversion on type
does not serve this version.
// +groupName:=‹string› on package
specifies the API group name for this package.
// +kubebuilder:skip on package
don't consider this package as an API version.
// +versionName:=‹string› on package
overrides the API group version for this package (defaults to the package name).
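As a brief sketch tying a few of these markers together (the Toy type follows the earlier examples; the shortName, category, and deprecation warning values are illustrative):

// +kubebuilder:object:root=true
// +kubebuilder:resource:path=toys,scope=Namespaced,shortName=ty,categories=all
// +kubebuilder:deprecatedversion:warning="example.com/v1alpha1 Toy is deprecated; migrate to a newer version"
type Toy struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   ToySpec   `json:"spec,omitempty"`
    Status ToyStatus `json:"status,omitempty"`
}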

CRD Validation

These markers modify how the CRD validation schema is produced for the types and
fields they modify. Each corresponds roughly to an OpenAPI/JSON schema option.

See Generating CRDs for examples.


// +kubebuilder:default:=‹any› on field
sets the default value for this field.
// +kubebuilder:example:=‹any› on field
sets the example value for this field.
// +kubebuilder:validation:EmbeddedResource on field
EmbeddedResource marks a fields as an embedded resource with apiVersion,
kind and metadata fields.
// +kubebuilder:validation:Enum:=‹[]any› on field
specifies that this (scalar) field is restricted to the *exact* values specified here.
// +kubebuilder:validation:ExclusiveMaximum:=‹bool› on field
indicates that the maximum is "up to" but not including that value.
// +kubebuilder:validation:ExclusiveMinimum:=‹bool› on field
indicates that the minimum is "up to" but not including that value.
// +kubebuilder:validation:Format:=‹string› on field
specifies additional "complex" formatting for this field.
// +kubebuilder:validation:MaxItems:=‹int› on field
specifies the maximum length for this list.
// +kubebuilder:validation:MaxLength:=‹int› on field
specifies the maximum length for this string.
// +kubebuilder:validation:MaxProperties:=‹int› on field
restricts the number of keys in an object
// +kubebuilder:validation:Maximum:=‹› on field
specifies the maximum numeric value that this field can have.
// +kubebuilder:validation:MinItems:=‹int› on field
specifies the minimum length for this list.
// +kubebuilder:validation:MinLength:=‹int› on field
specifies the minimum length for this string.
// +kubebuilder:validation:MinProperties:=‹int› on field
restricts the number of keys in an object
// +kubebuilder:validation:Minimum:=‹› on field
specifies the minimum numeric value that this field can have. Negative numbers are
supported.
// +kubebuilder:validation:MultipleOf:=‹› on field
specifies that this field must have a numeric value that's a multiple of this one.
// +kubebuilder:validation:Optional on field
specifies that this field is optional, if fields are required by default.
// +kubebuilder:validation:Pattern:=‹string› on field
specifies that this string must match the given regular expression.
// +kubebuilder:validation:Required on field
specifies that this field is required, if fields are optional by default.
// +kubebuilder:validation:Schemaless on field
marks a field as being a schemaless object.
// +kubebuilder:validation:Type:=‹string› on field
overrides the type for this field (which defaults to the equivalent of the Go type).
// +kubebuilder:validation:UniqueItems:=‹bool› on field
specifies that all items in this list must be unique.
// +kubebuilder:validation:XEmbeddedResource on field
EmbeddedResource marks a fields as an embedded resource with apiVersion,
kind and metadata fields.
// +kubebuilder:validation:XIntOrString on field
IntOrString marks a fields as an IntOrString.
// +kubebuilder:validation:XValidation:message=‹string›,rule=‹string› on field
marks a field as requiring a value for which a given expression evaluates to true.
// +nullable on field
marks this field as allowing the "null" value.
// +optional on field
specifies that this field is optional, if fields are required by default.
// +kubebuilder:validation:Enum:=‹[]any› on type
specifies that this (scalar) field is restricted to the *exact* values specified here.
// +kubebuilder:validation:ExclusiveMaximum:=‹bool› on type
indicates that the maximum is "up to" but not including that value.
// +kubebuilder:validation:ExclusiveMinimum:=‹bool› on type
indicates that the minimum is "up to" but not including that value.
// +kubebuilder:validation:Format:=‹string› on type
specifies additional "complex" formatting for this field.
// +kubebuilder:validation:MaxItems:=‹int› on type
specifies the maximum length for this list.
// +kubebuilder:validation:MaxLength:=‹int› on type
specifies the maximum length for this string.
// +kubebuilder:validation:MaxProperties:=‹int› on type
restricts the number of keys in an object
// +kubebuilder:validation:Maximum:=‹› on type
specifies the maximum numeric value that this field can have.
// +kubebuilder:validation:MinItems:=‹int› on type
specifies the minimum length for this list.
// +kubebuilder:validation:MinLength:=‹int› on type
specifies the minimum length for this string.
// +kubebuilder:validation:MinProperties:=‹int› on type
restricts the number of keys in an object
// +kubebuilder:validation:Minimum:=‹› on type
specifies the minimum numeric value that this field can have. Negative numbers are
supported.
// +kubebuilder:validation:MultipleOf:=‹› on type
specifies that this field must have a numeric value that's a multiple of this one.
// +kubebuilder:validation:Pattern:=‹string› on type
specifies that this string must match the given regular expression.
// +kubebuilder:validation:Type:=‹string› on type
overrides the type for this field (which defaults to the equivalent of the Go type).
// +kubebuilder:validation:UniqueItems:=‹bool› on type
specifies that all items in this list must be unique.
// +kubebuilder:validation:XEmbeddedResource on type
EmbeddedResource marks a fields as an embedded resource with apiVersion,
kind and metadata fields.
// +kubebuilder:validation:XIntOrString on type
IntOrString marks a fields as an IntOrString.
// +kubebuilder:validation:XValidation:message=‹string›,rule=‹string› on type
marks a field as requiring a value for which a given expression evaluates to true.
// +kubebuilder:validation:Optional on package
specifies that all fields in this package are optional by default.
// +kubebuilder:validation:Required on package
specifies that all fields in this package are required by default.

CRD Processing

These markers help control how the Kubernetes API server processes API requests
involving your custom resources.

See Generating CRDs for examples.


// +kubebuilder:pruning:PreserveUnknownFields on field
PreserveUnknownFields stops the apiserver from pruning fields which are not
specified.
// +kubebuilder:validation:XPreserveUnknownFields on field
PreserveUnknownFields stops the apiserver from pruning fields which are not
specified.
// +listMapKey:=‹string› on field
specifies the keys to map listTypes.
// +listType:=‹string› on field
specifies the type of data-structure that the list represents (map, set, atomic).
// +mapType:=‹string› on field
specifies the level of atomicity of the map; i.e. whether each item in the map is
independent of the others, or all fields are treated as a single unit.
// +structType:=‹string› on field
specifies the level of atomicity of the struct; i.e. whether each field in the struct is
independent of the others, or all fields are treated as a single unit.
// +kubebuilder:pruning:PreserveUnknownFields on type
PreserveUnknownFields stops the apiserver from pruning fields which are not
specified.
// +kubebuilder:validation:XPreserveUnknownFields on type
PreserveUnknownFields stops the apiserver from pruning fields which are not
specified.
// +listMapKey:=‹string› on type
specifies the keys to map listTypes.
// +listType:=‹string› on type
specifies the type of data-structure that the list represents (map, set, atomic).
// +mapType:=‹string› on type
specifies the level of atomicity of the map; i.e. whether each item in the map is
independent of the others, or all fields are treated as a single unit.
// +structType:=‹string› on type
specifies the level of atomicity of the struct; i.e. whether each field in the struct is
independent of the others, or all fields are treated as a single unit.
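For example, a sketch of a list field declared as a map keyed by name (the field and type names are hypothetical, and corev1 is assumed to be k8s.io/api/core/v1); this lets the API server merge items by key instead of replacing the whole list:

type ExampleSpec struct {
    // +listType=map
    // +listMapKey=name
    Containers []corev1.Container `json:"containers,omitempty"`
}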
Webhook

These markers describe how webhook configuration is generated. Use these to keep the
description of your webhooks close to the code that implements them.


// +kubebuilder:webhook
:admissionReviewVersions=‹[]string›,failurePolicy=‹string›,groups=‹[]string›,matchPolicy=
‹string›,mutating=‹bool›,name=‹string›,path=‹string›,reinvocationPolicy=‹string›,resources=
‹[]string›,sideEffects=‹string›,verbs=‹[]string›,versions=‹[]string›,webhookVersions=‹[]string›
on package
specifies how a webhook should be served.

Object/DeepCopy

These markers control when DeepCopy and runtime.Object implementation methods are generated.


// +kubebuilder:object:generate:=‹bool› on type
overrides enabling or disabling deepcopy generation for this type
// +kubebuilder:object:root:=‹bool› on type
enables object interface implementation generation for this type
// +kubebuilder:object:generate:=‹bool› on package
enables or disables object interface & deepcopy implementation generation for this
package
// +k8s:deepcopy-gen:=‹raw› use kubebuilder:object:generate (on package)
enables or disables object interface & deepcopy implementation generation for this
package
// +k8s:deepcopy-gen:=‹raw› use kubebuilder:object:generate (on type)
overrides enabling or disabling deepcopy generation for this type
// +k8s:deepcopy-gen:interfaces:=‹string› use kubebuilder:object:root (on type)
enables object interface implementation generation for this type

RBAC

These markers cause an RBAC ClusterRole to be generated. This allows you to describe
the permissions that your controller requires alongside the code that makes use of those
permissions.


// +kubebuilder:rbac
:groups=‹[]string›,namespace=‹string›,resourceNames=‹[]string›,resources=‹[]string›,urls=
‹[]string›,verbs=‹[]string›
on package
specifies an RBAC rule to all access to some resources or non-resource URLs.

controller-gen CLI

Kubebuilder makes use of a tool called controller-gen for generating utility code and
Kubernetes YAML. This code and config generation is controlled by the presence of
special “marker comments” in Go code.

controller-gen is built out of different “generators” (which specify what to generate) and
“output rules” (which specify how and where to write the results).

Both are configured through command line options specified in marker format.

For instance, the following command:

controller-gen paths=./... crd:trivialVersions=true rbac:roleName=controller-perms output:crd:artifacts:config=config/crd/bases

generates CRDs and RBAC, and specifically stores the generated CRD YAML in config/crd/bases . For the RBAC, it uses the default output rules ( config/rbac ). It considers every package in the current directory tree (as per the normal rules of the go ... wildcard).

Generators
Each different generator is configured through a CLI option. Multiple generators may be
used in a single invocation of controller-gen .


// +webhook:headerFile=‹string›,year=‹string› on package
generates (partial) {Mutating,Validating}WebhookConfiguration objects.
// +schemapatch:generateEmbeddedObjectMeta=‹bool›,manifests=‹string›,maxDescLen=‹int›
on package
patches existing CRDs with new schemata.
// +rbac:headerFile=‹string›,roleName=‹string›,year=‹string› on package
generates ClusterRole objects.
// +object:headerFile=‹string›,year=‹string› on package
generates code containing DeepCopy, DeepCopyInto, and DeepCopyObject method
implementations.
// +crd
:allowDangerousTypes=‹bool›,crdVersions=‹[]string›,generateEmbeddedObjectMeta=‹bool›
,headerFile=‹string›,ignoreUnexportedFields=‹bool›,maxDescLen=‹int›,year=‹string›
on package
generates CustomResourceDefinition objects.
Output Rules
Output rules configure how a given generator outputs its results. There is always one
global “fallback” output rule (specified as output:<rule> ), plus per-generator overrides
(specified as output:<generator>:<rule> ).

Default Rules

When no fallback rule is specified manually, a set of default per-generator rules are
used which result in YAML going to config/<generator> , and code staying where it
belongs.

The default rules are equivalent to output:<generator>:artifacts:config=config/<generator> for each generator.

When a “fallback” rule is specified, that’ll be used instead of the default rules.

For example, if you specify crd rbac:roleName=controller-perms output:crd:stdout , you'll get CRDs on standard out, and RBAC in a file in config/rbac . If you were to add in a global rule instead, like crd rbac:roleName=controller-perms output:crd:stdout output:none , you'd get CRDs to standard out, and everything else to /dev/null, because we've explicitly specified a fallback.

For brevity, the per-generator output rules ( output:<generator>:<rule> ) are omitted below. They are equivalent to the global fallback options listed here.

Detailed argument help:

// +output:artifacts:code=‹string›,config=‹string› on package
outputs artifacts to different locations, depending on whether they're package-
associated or not.
// +output:dir:=‹string› on package
outputs each artifact to the given directory, regardless of if it's package-associated
or not.
// +output:none on package
skips outputting anything.
// +output:stdout on package
outputs everything to standard-out, with no separation.
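
For example, a minimal sketch (the out directory name is arbitrary) that uses output:dir as the global fallback rule so every generated YAML artifact lands in a single directory:

controller-gen crd rbac:roleName=manager-role paths=./... output:dir=out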

Other Options
Detailed argument help:
// +paths:=‹[]string› on package
represents paths and go-style path patterns to use as package roots.

Enabling shell autocompletion

The Kubebuilder completion script can be generated with the command kubebuilder
completion [bash|fish|powershell|zsh] . Note that sourcing the completion script in
your shell enables Kubebuilder autocompletion.

Prerequisites for Bash

The completion Bash script depends on bash-completion, which means that you
have to install this software first (you can test if you have bash-completion already
installed). Also, ensure that your Bash version is 4.1+.

Once installed, add the path /usr/local/bin/bash to /etc/shells :

echo "/usr/local/bin/bash" >> /etc/shells

Make sure the current user uses the installed shell:

chsh -s /usr/local/bin/bash

Add the following content to ~/.bash_profile or ~/.bashrc :

# kubebuilder autocompletion
if [ -f /usr/local/share/bash-completion/bash_completion ]; then
. /usr/local/share/bash-completion/bash_completion
fi
. <(kubebuilder completion bash)

Restart terminal for the changes to be reflected.

Zsh

Follow a similar procedure for zsh completion.
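
For example, a sketch of what this typically looks like for Zsh; add the following to your ~/.zshrc :

# kubebuilder autocompletion
source <(kubebuilder completion zsh)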

Fish

source (kubebuilder completion fish | psub)

Artifacts

Kubebuilder publishes test binaries and container images in addition to the main binary
releases.
Test Binaries
You can find test binary tarballs for all Kubernetes versions and host platforms at https://go.kubebuilder.io/test-tools . You can find a test binary tarball for a particular Kubernetes version and host platform at https://go.kubebuilder.io/test-tools/${version}/${os}/${arch} .
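
For example, using the URL pattern above with illustrative values (Kubernetes 1.25.0 on linux/amd64), you could download a tarball like this:

curl -L -o envtest-bins.tar.gz "https://go.kubebuilder.io/test-tools/1.25.0/linux/amd64"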

Container Images
You can find all container image versions for a particular platform at
https://go.kubebuilder.io/images/${os}/${arch} or at
gcr.io/kubebuilder/thirdparty-${os}-${arch} . You can find the container image for a
particular Kubernetes version and host platform at
https://go.kubebuilder.io/images/${os}/${arch}/${version} or at
gcr.io/kubebuilder/thirdparty-${os}-${arch}:${version} .

Platforms Supported

Kubebuilder produces solutions that by default can work on multiple platforms or specific
ones, depending on how you build and configure your workloads. This guide aims to help
you properly configure your projects according to your needs.

Overview
To provide support on specific or multiple platforms, you must ensure that all images
used in workloads are built to support the desired platforms. Note that may not be the
same as the platform where you develop your solutions and use KubeBuilder, but instead
the platform(s) where your solution should run and be distributed. It is recommended to
build solutions that work on multiple platforms so that your project works on any
Kubernetes cluster regardless of the underlying operating system and architecture.

How to define which platforms are supported


The following covers what you need to do to provide support for one or more platforms or architectures.

1) Build workload images to provide support for other platform(s)

The images used in workloads such as your Pods/Deployments will need to provide support for the other platform(s). You can inspect the platforms supported by an image (its manifest list) using the command docker manifest inspect , i.e.:

$ docker manifest inspect myregistry/example/myimage:v0.0.1


{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
"manifests": [
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 739,
"digest":
"sha256:a274a1a2af811a1daf3fd6b48ff3d08feb757c2c3f3e98c59c7f85e550a99a32",
"platform": {
"architecture": "arm64",
"os": "linux"
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 739,
"digest":
"sha256:d801c41875f12ffd8211fffef2b3a3d1a301d99f149488d31f245676fa8bc5d9",
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 739,
"digest":
"sha256:f4423c8667edb5372fb0eafb6ec599bae8212e75b87f67da3286f0291b4c8732",
"platform": {
"architecture": "s390x",
"os": "linux"
}
},
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 739,
"digest":
"sha256:621288f6573c012d7cf6642f6d9ab20dbaa35de3be6ac2c7a718257ec3aff333",
"platform": {
"architecture": "ppc64le",
"os": "linux"
}
},
]
}
2) (Recommended as a Best Practice) Ensure that node affinity
expressions are set to match the supported platforms

Kubernetes provides a mechanism called nodeAffinity which can be used to limit the
possible node targets where a pod can be scheduled. This is especially important to
ensure correct scheduling behavior in clusters with nodes that span across multiple
platforms (i.e. heterogeneous clusters).

Kubernetes manifest example

affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/arch
operator: In
values:
- amd64
- arm64
- ppc64le
- s390x
- key: kubernetes.io/os
operator: In
values:
- linux

Golang Example
Template: corev1.PodTemplateSpec{
...
Spec: corev1.PodSpec{
Affinity: &corev1.Affinity{
NodeAffinity: &corev1.NodeAffinity{
RequiredDuringSchedulingIgnoredDuringExecution:
&corev1.NodeSelector{
NodeSelectorTerms: []corev1.NodeSelectorTerm{
{
MatchExpressions:
[]corev1.NodeSelectorRequirement{
{
Key: "kubernetes.io/arch",
Operator: "In",
Values: []string{"amd64"},
},
{
Key: "kubernetes.io/os",
Operator: "In",
Values: []string{"linux"},
},
},
},
},
},
},
},
SecurityContext: &corev1.PodSecurityContext{
...
},
Containers: []corev1.Container{{
...
}},
},

Example(s)

You can look for some code examples by checking the code which is generated via
the Deploy Image plugin. (More info)

Producing projects that support multiple platforms


You can use docker buildx to cross-compile via emulation (QEMU) to build the manager image. Note that projects scaffolded with the latest versions of Kubebuilder have the Makefile target docker-buildx .

Example of Usage

$ make docker-buildx IMG=myregistry/myoperator:v0.0.1


Note that you need to ensure that all images and workloads required and used by your
project will provide the same support as recommended above, and that you properly
configure the nodeAffinity for all your workloads. Therefore, ensure that you uncomment
the following code in the config/manager/manager.yaml file

# TODO(user): Uncomment the following code to configure the nodeAffinity expression
# according to the platforms which are supported by your solution.
# It is considered best practice to support multiple architectures. You can
# build your manager image using the makefile target docker-buildx.
# affinity:
# nodeAffinity:
# requiredDuringSchedulingIgnoredDuringExecution:
# nodeSelectorTerms:
# - matchExpressions:
# - key: kubernetes.io/arch
# operator: In
# values:
# - amd64
# - arm64
# - ppc64le
# - s390x
# - key: kubernetes.io/os
# operator: In
# values:
# - linux

Building images for releases

You will probably want to automate the releases of your projects to ensure that the
images are always built for the same platforms. Note that Goreleaser also supports
docker buildx. See its documentation for more detail.

Also, you may want to configure GitHub Actions, Prow jobs, or any other solution
that you use to build images to provide multi-platform support. Note that you can
also use other options like docker manifest create to customize your solutions to
achieve the same goals with other tools.

When using Docker and the target provided by default, you should NOT change the Dockerfile to use any specific GOOS and GOARCH to build the manager binary. However, if you are looking to customize the default scaffold and create your own implementation, you might want to look at the Golang docs to know the available options.
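
As an illustration only, a sketch of a cross-compiling build-stage fragment (not the scaffolded Dockerfile itself) that relies on the TARGETOS / TARGETARCH build arguments provided by docker buildx :

# Build args automatically populated by docker buildx for each target platform.
ARG TARGETOS
ARG TARGETARCH

# Cross-compile the manager binary for the requested platform.
RUN CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -o manager main.go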

Which (workload) images are created by default?


Projects created with the Kubebuilder CLI have two workloads which are:
Manager

The container that runs the manager implementation is configured in the config/manager/manager.yaml file. This image is built with the Dockerfile scaffolded by default and contains the binary of the project, which will be built via the command go build -a -o manager main.go .

Note that when you run make docker-build OR make docker-build IMG=myregistry/myprojectname:<tag> , an image will be built from the client host (local environment) and produce an image for the client os/arch, which is commonly linux/amd64 or linux/arm64.

macOS

If you are running in a macOS environment, Docker will also consider it as linux/$arch. Be aware that when, for example, Kind is running on macOS, the nodes will end up labeled with kubernetes.io/os=linux

Kube RBAC Proxy

A workload will be created to run the image gcr.io/kubebuilder/kube-rbac-proxy , which is configured in the config/default/manager_auth_proxy_patch.yaml manifest. It is a side-car proxy whose purpose is to protect the manager from malicious attacks. You can learn more about its motivations by looking at the README of this project: github.com/brancz/kube-rbac-proxy.

Kubebuilder has been building this image with support for multiple architectures by
default.( Check it here ). If you need to address any edge case scenario where you want to
produce a project that only provides support for a specific architecture platform, you can
customize your configuration manifests to use the specific architecture types built for this
image.

Configuring envtest for integration tests

The controller-runtime/pkg/envtest Go library helps write integration tests for your


controllers by setting up and starting an instance of etcd and the Kubernetes API server,
without kubelet, controller-manager or other components.

Installation
Installing the binaries is as simple as running make envtest . envtest will download the Kubernetes API server binaries to the bin/ folder in your project by default. make test is the one-stop shop for downloading the binaries, setting up the test environment, and running the tests.

The make targets require bash to run.

Installation in Air Gapped/disconnected environments


If you would like to download the tarball containing the binaries for use in a disconnected environment, you can use setup-envtest to download the required binaries locally. There are a lot of ways to configure setup-envtest to avoid talking to the internet; you can read about them here. The examples below show how to install the Kubernetes API binaries using mostly defaults set by setup-envtest .

Download the binaries

make envtest will download the setup-envtest binary to ./bin/ .

make envtest

Installing the binaries using setup-envtest stores the binaries in OS-specific locations; you can read more about them here.

./bin/setup-envtest use 1.21.2

Update the test make target

Once these binaries are installed, change the test make target to include a -i like
below. -i will only check for locally installed binaries and not reach out to remote
resources. You could also set the ENVTEST_INSTALLED_ONLY env variable.

test: manifests generate fmt vet
	KUBEBUILDER_ASSETS="$(shell $(ENVTEST) use $(ENVTEST_K8S_VERSION) -i --bin-dir $(LOCALBIN) -p path)" go test ./... -coverprofile cover.out

NOTE: The ENVTEST_K8S_VERSION needs to match the setup-envtest binaries you downloaded above. Otherwise, you will see an error like the one below:

no such version (1.24.5) exists on disk for this architecture (darwin/amd64) -- try running `list -i` to see what's on disk
Kubernetes 1.20 and 1.21 binary issues
There have been many reports of the kube-apiserver or etcd binary hanging during
cleanup or misbehaving in other ways. We recommend using the 1.19.2 tools version to
circumvent such issues, which do not seem to arise in 1.22+. This is likely NOT the cause
of a fork/exec: permission denied or fork/exec: not found error, which is caused by
improper tools installation.
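
For example, to pin the recommended tools version mentioned above:

./bin/setup-envtest use 1.19.2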

Writing tests
Using envtest in integration tests follows the general flow of:

import sigs.k8s.io/controller-runtime/pkg/envtest

//specify testEnv configuration


testEnv = &envtest.Environment{
CRDDirectoryPaths: []string{filepath.Join("..", "config", "crd",
"bases")},
}

//start testEnv
cfg, err = testEnv.Start()

//write test logic

//stop testEnv
err = testEnv.Stop()

kubebuilder does the boilerplate setup and teardown of testEnv for you, in the ginkgo
test suite that it generates under the /controllers directory.

Logs from the test runs are prefixed with test-env .

Examples

You can use the plugin DeployImage to check examples. This plugin allows users to
scaffold API/Controllers to deploy and manage an Operand (image) on the cluster
following the guidelines and best practices. It abstracts the complexities of achieving
this goal while allowing users to customize the generated code.

Therefore, you can check that a test using envtest will be generated for the controller, whose purpose is to ensure that the Deployment is created successfully. You can see an example of its code implementation under the testdata directory in the DeployImage samples here.
Configuring your test control plane

Controller-runtime’s envtest framework requires kubectl , kube-apiserver , and etcd binaries to be present locally to simulate the API portions of a real cluster.

The make test command will install these binaries to the bin/ directory and use them when running tests that use envtest . For example:

./bin/k8s/
└── 1.25.0-darwin-amd64
├── etcd
├── kube-apiserver
└── kubectl

1 directory, 3 files

You can use environment variables and/or flags to specify the kubectl , api-server and
etcd setup within your integration tests.

Environment Variables

Variable name: USE_EXISTING_CLUSTER
Type: boolean
When to use: Instead of setting up a local control plane, point to the control plane of an existing cluster.

Variable name: KUBEBUILDER_ASSETS
Type: path to directory
When to use: Point integration tests to a directory containing all binaries (api-server, etcd and kubectl).

Variable name: TEST_ASSET_KUBE_APISERVER , TEST_ASSET_ETCD , TEST_ASSET_KUBECTL
Type: paths to, respectively, api-server, etcd and kubectl binaries
When to use: Similar to KUBEBUILDER_ASSETS , but more granular. Point integration tests to use binaries other than the default ones. These environment variables can also be used to ensure specific tests run with expected versions of these binaries.

Variable name: KUBEBUILDER_CONTROLPLANE_START_TIMEOUT and KUBEBUILDER_CONTROLPLANE_STOP_TIMEOUT
Type: durations in a format supported by time.ParseDuration
When to use: Specify timeouts different from the default for the test control plane to (respectively) start and stop; any test run that exceeds them will fail.

Variable name: KUBEBUILDER_ATTACH_CONTROL_PLANE_OUTPUT
Type: boolean
When to use: Set to true to attach the control plane's stdout and stderr to os.Stdout and os.Stderr. This can be useful when debugging test failures, as output will include output from the control plane.
Note that the test Makefile target will ensure that everything is properly set up when you use it. However, if you would like to run the tests without using the Makefile targets, for example via an IDE, then you can set the environment variables directly in the code of your suite_test.go :
var _ = BeforeSuite(func(done Done) {
Expect(os.Setenv("TEST_ASSET_KUBE_APISERVER", "../bin/k8s/1.25.0-darwin-
amd64/kube-apiserver")).To(Succeed())
Expect(os.Setenv("TEST_ASSET_ETCD", "../bin/k8s/1.25.0-darwin-
amd64/etcd")).To(Succeed())
Expect(os.Setenv("TEST_ASSET_KUBECTL", "../bin/k8s/1.25.0-darwin-
amd64/kubectl")).To(Succeed())
// OR
Expect(os.Setenv("KUBEBUILDER_ASSETS", "../bin/k8s/1.25.0-darwin-
amd64")).To(Succeed())

logf.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))
testenv = &envtest.Environment{}

_, err := testenv.Start()
Expect(err).NotTo(HaveOccurred())

close(done)
}, 60)

var _ = AfterSuite(func() {
Expect(testenv.Stop()).To(Succeed())

Expect(os.Unsetenv("TEST_ASSET_KUBE_APISERVER")).To(Succeed())
Expect(os.Unsetenv("TEST_ASSET_ETCD")).To(Succeed())
Expect(os.Unsetenv("TEST_ASSET_KUBECTL")).To(Succeed())

})

ENV TEST Config Options

You can look at the controller-runtime docs to learn more about its configuration options, see here. On top of that, if you are looking to use envtest to test your webhooks, you might want to take a look at its install options.

Flags

Here’s an example of modifying the flags with which to start the API server in your
integration tests, compared to the default values in
envtest.DefaultKubeAPIServerFlags :
customApiServerFlags := []string{
"--secure-port=6884",
"--admission-control=MutatingAdmissionWebhook",
}

apiServerFlags := append([]string(nil), envtest.DefaultKubeAPIServerFlags...)
apiServerFlags = append(apiServerFlags, customApiServerFlags...)

testEnv = &envtest.Environment{
CRDDirectoryPaths: []string{filepath.Join("..", "config", "crd",
"bases")},
KubeAPIServerFlags: apiServerFlags,
}

Testing considerations
Unless you’re using an existing cluster, keep in mind that no built-in controllers are
running in the test context. In some ways, the test control plane will behave differently
from “real” clusters, and that might have an impact on how you write tests. One common
example is garbage collection; because there are no controllers monitoring built-in
resources, objects do not get deleted, even if an OwnerReference is set up.

To test that the deletion lifecycle works, test the ownership instead of asserting on
existence. For example:

expectedOwnerReference := v1.OwnerReference{
Kind: "MyCoolCustomResource",
APIVersion: "my.api.example.com/v1beta1",
UID: "d9607e19-f88f-11e6-a518-42010a800195",
Name: "userSpecifiedResourceName",
}
Expect(deployment.ObjectMeta.OwnerReferences).To(ContainElement(expectedOwnerReference))

Namespace usage limitation


EnvTest does not support namespace deletion. Deleting a namespace will seem to
succeed, but the namespace will just be put in a Terminating state, and never actually be
reclaimed. Trying to recreate the namespace will fail. This will cause your reconciler to
continue reconciling any objects left behind, unless they are deleted.

To overcome this limitation you can create a new namespace for each test. Even so, when
one test completes (e.g. in “namespace-1”) and another test starts (e.g. in “namespace-2”),
the controller will still be reconciling any active objects from “namespace-1”. This can be
avoided by ensuring that all tests clean up after themselves as part of the test teardown.
If teardown of a namespace is difficult, it may be possible to wire the reconciler in such a
way that it ignores reconcile requests that come from namespaces other than the one
being tested:

type MyCoolReconciler struct {
	client.Client
	...
	Namespace string // restrict namespaces to reconcile
}

func (r *MyCoolReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = r.Log.WithValues("myreconciler", req.NamespacedName)
	// Ignore requests for other namespaces, if specified
	if r.Namespace != "" && req.Namespace != r.Namespace {
		return ctrl.Result{}, nil
	}
	// ... the rest of the reconciliation logic ...
}

Whenever your tests create a new namespace, it can modify the value of
reconciler.Namespace. The reconciler will effectively ignore the previous namespace. For
further information see the issue raised in the controller-runtime controller-
runtime/issues/880 to add this support.
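
For example, a minimal sketch of how the test code might wire this up. It is meant to live in the scaffolded test suite, which already imports Ginkgo/Gomega, corev1, metav1, and context; the k8sClient and reconciler variable names are assumptions:

// Create a fresh namespace for each test and point the reconciler at it,
// so requests for objects left behind in earlier namespaces are ignored.
var _ = BeforeEach(func() {
	ns := &corev1.Namespace{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "test-ns-"},
	}
	Expect(k8sClient.Create(context.Background(), ns)).To(Succeed())
	reconciler.Namespace = ns.Name
})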

Cert-Manager and Prometheus options


Projects scaffolded with Kubebuilder can enable the metrics and the cert-manager options. Note that when we use envtest we are looking to test the controllers and their reconciliation. It is considered an integration test because the envtest API runs the tests against a locally provisioned control plane, and because of this the binaries are downloaded and used to configure its prerequisites; however, its purpose is mainly to unit test the controllers.

Therefore, to test a reconciliation in common cases you do not need to care about these options. However, if you would like to run tests with Prometheus and cert-manager installed, you can add the required steps to install them before running the tests. The following is an example.
// Add the operations to install the Prometheus operator and the cert-manager
// before the tests.
BeforeEach(func() {
	By("installing prometheus operator")
	Expect(utils.InstallPrometheusOperator()).To(Succeed())

	By("installing the cert-manager")
	Expect(utils.InstallCertManager()).To(Succeed())
})

// You can also remove them after the tests:
AfterEach(func() {
	By("uninstalling the Prometheus operator bundle")
	utils.UninstallPrometheusOperator()

	By("uninstalling the cert-manager bundle")
	utils.UninstallCertManager()
})

Check the following example of how you can implement the above operations:
const (
	prometheusOperatorVersion = "0.51"
	prometheusOperatorURL     = "https://raw.githubusercontent.com/prometheus-operator/" +
		"prometheus-operator/release-%s/bundle.yaml"
	certmanagerVersion = "v1.5.3"
	certmanagerURLTmpl = "https://github.com/jetstack/cert-manager/releases/download/%s/cert-manager.yaml"
)

func warnError(err error) {
	fmt.Fprintf(GinkgoWriter, "warning: %v\n", err)
}

// InstallPrometheusOperator installs the prometheus Operator to be used to
// export the enabled metrics.
func InstallPrometheusOperator() error {
	url := fmt.Sprintf(prometheusOperatorURL, prometheusOperatorVersion)
	cmd := exec.Command("kubectl", "apply", "-f", url)
	_, err := Run(cmd)
	return err
}

// UninstallPrometheusOperator uninstalls the prometheus operator.
func UninstallPrometheusOperator() {
	url := fmt.Sprintf(prometheusOperatorURL, prometheusOperatorVersion)
	cmd := exec.Command("kubectl", "delete", "-f", url)
	if _, err := Run(cmd); err != nil {
		warnError(err)
	}
}

// UninstallCertManager uninstalls the cert manager.
func UninstallCertManager() {
	url := fmt.Sprintf(certmanagerURLTmpl, certmanagerVersion)
	cmd := exec.Command("kubectl", "delete", "-f", url)
	if _, err := Run(cmd); err != nil {
		warnError(err)
	}
}

// InstallCertManager installs the cert manager bundle.
func InstallCertManager() error {
	url := fmt.Sprintf(certmanagerURLTmpl, certmanagerVersion)
	cmd := exec.Command("kubectl", "apply", "-f", url)
	if _, err := Run(cmd); err != nil {
		return err
	}
	// Wait for cert-manager-webhook to be ready, which can take time if
	// cert-manager was re-installed after uninstalling on a cluster.
	cmd = exec.Command("kubectl", "wait", "deployment.apps/cert-manager-webhook",
		"--for", "condition=Available",
		"--namespace", "cert-manager",
		"--timeout", "5m",
	)
	_, err := Run(cmd)
	return err
}

// LoadImageToKindClusterWithName loads a local docker image to the kind cluster.
func LoadImageToKindClusterWithName(name string) error {
	cluster := "kind"
	if v, ok := os.LookupEnv("KIND_CLUSTER"); ok {
		cluster = v
	}
	kindOptions := []string{"load", "docker-image", name, "--name", cluster}
	cmd := exec.Command("kind", kindOptions...)
	_, err := Run(cmd)
	return err
}

However, note that tests for the metrics and cert-manager might fit better as e2e tests rather than as part of the envtest-based controller tests. You might want to look at the sample e2e tests implemented in the Operator-SDK repository to learn how to write e2e tests that ensure the basic workflows of your project. Also, note that you can run the tests against a cluster where you already have some configuration in place; they can use the option to test using an existing cluster:

testEnv = &envtest.Environment{
UseExistingCluster: true,
}

Metrics

By default, controller-runtime builds a global prometheus registry and publishes a collection of performance metrics for each controller.

Protecting the Metrics


These metrics are protected by kube-rbac-proxy by default if using Kubebuilder. Kubebuilder v2.2.0+ scaffolds a ClusterRole which can be found at config/rbac/auth_proxy_client_clusterrole.yaml .

You will need to grant permissions to your Prometheus server so that it can scrape the
protected metrics. To achieve that, you can create a clusterRoleBinding to bind the
clusterRole to the service account that your Prometheus server uses. If you are using
kube-prometheus, this cluster binding already exists.

You can either run the following command, or apply the example YAML file provided below to create the ClusterRoleBinding . If using Kubebuilder, <project-prefix> is the namePrefix field in config/default/kustomization.yaml .

kubectl create clusterrolebinding metrics --clusterrole=<project-prefix>-metrics-reader --serviceaccount=<namespace>:<service-account-name>

You can also apply the following ClusterRoleBinding :

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: prometheus-k8s-rolebinding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: prometheus-k8s-role
subjects:
- kind: ServiceAccount
name: <prometheus-service-account>
namespace: <prometheus-service-account-namespace>

The prometheus-k8s-role referenced here should provide the necessary permissions to allow Prometheus to scrape metrics from operator pods.

Exporting Metrics for Prometheus


Follow the steps below to export the metrics using the Prometheus Operator:

1. Install Prometheus and Prometheus Operator. We recommend using kube-prometheus in production if you don't have your own monitoring system. If you are just experimenting, you can install only Prometheus and Prometheus Operator.

2. Uncomment the line - ../prometheus in config/default/kustomization.yaml . It creates the ServiceMonitor resource which enables exporting the metrics.

# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
- ../prometheus

Note that, when you install your project in the cluster, it will create the ServiceMonitor
to export the metrics. To check the ServiceMonitor, run kubectl get ServiceMonitor -n
<project>-system . See an example:

$ kubectl get ServiceMonitor -n monitor-system


NAME AGE
monitor-controller-manager-metrics-monitor 2m8s
If you are using Prometheus Operator ensure that you
have the required permissions
If you are using Prometheus Operator, be aware that, by default, its RBAC rules are only
enabled for the default and kube-system namespaces . See its guide to know how to
configure kube-prometheus to monitor other namespaces using the .jsonnet file.

Alternatively, you can give the Prometheus Operator permissions to monitor other
namespaces using RBAC. See the Prometheus Operator Enable RBAC rules for
Prometheus pods documentation to know how to enable the permissions on the
namespace where the ServiceMonitor and manager exist.

Also, notice that the metrics are exported by default through port 8443 . In this way, you are able to check the Prometheus metrics in its dashboard. To verify it, search for the metrics exported from the namespace where the project is running, e.g. {namespace="<project>-system"} .

Publishing Additional Metrics


If you wish to publish additional metrics from your controllers, this can be easily achieved
by using the global registry from controller-runtime/pkg/metrics .

One way to achieve this is to declare your collectors as global variables and then register
them using init() in the controller’s package.

For example:
import (
"github.com/prometheus/client_golang/prometheus"
"sigs.k8s.io/controller-runtime/pkg/metrics"
)

var (
goobers = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "goobers_total",
Help: "Number of goobers processed",
},
)
gooberFailures = prometheus.NewCounter(
prometheus.CounterOpts{
Name: "goober_failures_total",
Help: "Number of failed goobers",
},
)
)

func init() {
// Register custom metrics with the global prometheus registry
metrics.Registry.MustRegister(goobers, gooberFailures)
}

You may then record metrics to those collectors from any part of your reconcile loop.
These metrics can be evaluated from anywhere in the operator code.
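
For example, a sketch of where the counters declared above might be incremented inside a Reconcile method; the GooberReconciler type and the doWork helper are hypothetical:

func (r *GooberReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	goobers.Inc() // one more goober processed

	if err := r.doWork(ctx, req); err != nil { // doWork stands in for your reconcile logic
		gooberFailures.Inc() // count the failure in the custom counter
		return ctrl.Result{}, err
	}
	return ctrl.Result{}, nil
}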

Enabling metrics in Prometheus UI

In order to publish metrics and view them on the Prometheus UI, the Prometheus
instance would have to be configured to select the Service Monitor instance based
on its labels.

Those metrics will be available for Prometheus or other OpenMetrics-compatible systems to scrape.

Default Exported Metrics References

The following metrics are exported and provided by controller-runtime by default:

workqueue_depth (Gauge): Current depth of workqueue.
workqueue_adds_total (Counter): Total number of adds handled by workqueue.
workqueue_queue_duration_seconds (Histogram): How long in seconds an item stays in workqueue before being requested.
workqueue_work_duration_seconds (Histogram): How long in seconds processing an item from workqueue takes.
workqueue_unfinished_work_seconds (Gauge): How many seconds of work has been done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.
workqueue_longest_running_processor_seconds (Gauge): How many seconds has the longest running processor for workqueue been running.
workqueue_retries_total (Counter): Total number of retries handled by workqueue.
rest_client_requests_total (Counter): Number of HTTP requests, partitioned by status code, method, and host.
controller_runtime_reconcile_total (Counter): Total number of reconciliations per controller.
controller_runtime_reconcile_errors_total (Counter): Total number of reconciliation errors per controller.
controller_runtime_reconcile_time_seconds (Histogram): Length of time per reconciliation per controller.
controller_runtime_max_concurrent_reconciles (Gauge): Maximum number of concurrent reconciles per controller.
controller_runtime_active_workers (Gauge): Number of currently used workers per controller.
controller_runtime_webhook_latency_seconds (Histogram): Histogram of the latency of processing admission requests.
controller_runtime_webhook_requests_total (Counter): Total number of admission requests by HTTP status code.
controller_runtime_webhook_requests_in_flight (Gauge): Current number of admission requests being served.
Makefile Helpers

By default, the projects are scaffolded with a Makefile . You can customize and update this file as you please. Here, you will find some helpers that can be useful.

To debug with go-delve


The projects are built with Go and you have a lot of ways to do that. One of the options
would be use go-delve for it:

# Run with Delve for development purposes against the configured Kubernetes cluster in ~/.kube/config
# Delve is a debugger for the Go programming language. More info: https://github.com/go-delve/delve
run-delve: generate fmt vet manifests
	go build -gcflags "all=-trimpath=$(shell go env GOPATH)" -o bin/manager main.go
	dlv --listen=:2345 --headless=true --api-version=2 --accept-multiclient exec ./bin/manager
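
With a target like the above, a typical (illustrative) workflow is to start the headless debugger and then attach to it from another terminal or from your IDE:

make run-delve
# in another terminal, attach to the headless Delve server
dlv connect 127.0.0.1:2345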

To change the version of CRDs


The controller-gen program (from controller-tools) generates CRDs for kubebuilder
projects, wrapped in the following make rule:

manifests: controller-gen
$(CONTROLLER_GEN) rbac:roleName=manager-role crd webhook paths="./..."
output:crd:artifacts:config=config/crd/bases

controller-gen lets you specify what CRD API version to generate (either “v1”, the
default, or “v1beta1”). You can direct it to generate a specific version by adding
crd:crdVersions={<version>} to your CRD_OPTIONS , found at the top of your Makefile:

CRD_OPTIONS ?= "crd:crdVersions={v1beta1},preserveUnknownFields=false"

manifests: controller-gen
$(CONTROLLER_GEN) rbac:roleName=manager-role $(CRD_OPTIONS) webhook
paths="./..." output:crd:artifacts:config=config/crd/bases

To get all the manifests without deploying


By adding a make dry-run target, you can get the patched manifests in the dry-run folder, unlike make deploy , which runs kustomize and kubectl apply .
To accomplish this, add the following lines to the Makefile:

dry-run: manifests
cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
mkdir -p dry-run
$(KUSTOMIZE) build config/default > dry-run/manifests.yaml
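
For example, an illustrative invocation (the image name is arbitrary) that renders the manifests without applying them:

make dry-run IMG=myregistry/myoperator:v0.0.1
# inspect the rendered output
less dry-run/manifests.yaml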

Project Config

Overview
The Project Config represents the configuration of a KubeBuilder project. All projects that
are scaffolded with the CLI (KB version 3.0 and higher) will generate the PROJECT file in
the projects’ root directory. Therefore, it will store all plugins and input data used to
generate the project and APIs to better enable plugins to make useful decisions when
scaffolding.

Example
Following is an example of a PROJECT config file which is the result of a project generated
with two APIs using the Deploy Image Plugin.
# Code generated by tool. DO NOT EDIT.
# This file is used to track the info used to scaffold your project
# and allow the plugins properly work.
# More info: https://book.kubebuilder.io/reference/project-config.html
domain: testproject.org
layout:
- go.kubebuilder.io/v4
plugins:
deploy-image.go.kubebuilder.io/v1-alpha:
resources:
- domain: testproject.org
group: example.com
kind: Memcached
options:
containerCommand: memcached,-m=64,-o,modern,-v
containerPort: "11211"
image: memcached:1.4.36-alpine
runAsUser: "1001"
version: v1alpha1
- domain: testproject.org
group: example.com
kind: Busybox
options:
image: busybox:1.28
version: v1alpha1
projectName: project-v4-with-deploy-image
repo: sigs.k8s.io/kubebuilder/testdata/project-v4-with-deploy-image
resources:
- api:
crdVersion: v1
namespaced: true
controller: true
domain: testproject.org
group: example.com
kind: Memcached
path: sigs.k8s.io/kubebuilder/testdata/project-v4-with-deploy-
image/api/v1alpha1
version: v1alpha1
webhooks:
validation: true
webhookVersion: v1
- api:
crdVersion: v1
namespaced: true
controller: true
domain: testproject.org
group: example.com
kind: Busybox
path: sigs.k8s.io/kubebuilder/testdata/project-v4-with-deploy-
image/api/v1alpha1
version: v1alpha1
version: "3"
Why do we need to store the plugins and data used?
The following are some examples of motivations to track the input used:

check whether a plugin can or cannot be scaffolded on top of an existing plugin (i.e. plugin compatibility) while chaining multiple of them together.
determine which operations can or cannot be done, such as verifying whether the layout allows API(s) for different groups to be scaffolded for the current configuration.
verify which data can or cannot be used in the CLI operations, such as ensuring that webhooks can only be created for pre-existing API(s).

Note that KubeBuilder is not only a CLI tool but can also be used as a library to allow
users to create their plugins/tools, provide helpers and customizations on top of their
existing projects - an example of which is Operator-SDK. SDK leverages KubeBuilder to
create plugins to allow users to work with other languages and provide helpers for their
users to integrate their projects with, for example, the Operator Framework
solutions/OLM. You can check the plugin’s documentation to know more about creating
custom plugins.

Additionally, another motivation for the PROJECT file is to help us create a feature that allows users to easily upgrade their projects by providing helpers that automatically re-scaffold the project. By having all the required metadata regarding the APIs, their configurations, and versions in the PROJECT file, it can, for example, be used to automate the process of re-scaffolding while migrating between plugin versions. (More info)

Versioning
The Project config is versioned according to its layout. For further information see
Versioning.

Layout Definition
The PROJECT version 3 layout looks like:
domain: testproject.org
layout:
- go.kubebuilder.io/v3
plugins:
declarative.go.kubebuilder.io/v1:
resources:
- domain: testproject.org
group: crew
kind: FirstMate
version: v1
projectName: example
repo: sigs.k8s.io/kubebuilder/example
resources:
- api:
crdVersion: v1
namespaced: true
controller: true
domain: testproject.org
group: crew
kind: Captain
path: sigs.k8s.io/kubebuilder/example/api/v1
version: v1
webhooks:
defaulting: true
validation: true
webhookVersion: v1

Now let’s check its layout fields definition:

layout: Defines the global plugins, e.g. a project init with --plugins="go/v3,declarative" means that any sub-command used will always call its implementation for both plugins in a chain.
domain: Stores the domain of the project. This information can be provided by the user when the project is generated with the init sub-command and the domain flag.
plugins: Defines the plugins used to do custom scaffolding, e.g. to use the optional declarative plugin to do scaffolding for just a specific API via the command kubebuilder create api [options] --plugins=declarative/v1 .
projectName: The name of the project. This will be used to scaffold the manager data. By default it is the name of the project directory; however, it can be provided by the user in the init sub-command via the --project-name flag.
repo: The project repository, which is the Golang module, e.g. github.com/example/myproject-operator .
resources: An array of all resources which were scaffolded in the project.
resources.api: The API scaffolded in the project via the sub-command create api .
resources.api.crdVersion: The Kubernetes API version ( apiVersion ) used to do the scaffolding for the CRD resource.
resources.api.namespaced: The API RBAC permissions, which can be namespaced or cluster scoped.
resources.controller: Indicates whether a controller was scaffolded for the API.
resources.domain: The domain of the resource, which is provided by the --domain flag when the sub-command create api is used.
resources.group: The GVK group of the resource, which is provided by the --group flag when the sub-command create api is used.
resources.version: The GVK version of the resource, which is provided by the --version flag when the sub-command create api is used.
resources.kind: Stores the GVK kind of the resource, which is provided by the --kind flag when the sub-command create api is used.
resources.path: The import path for the API resource. It will be <repo>/api/<kind> unless the API added to the project is an external or core-type. For the core-types scenarios, the paths used are mapped here.
resources.webhooks: Stores the webhooks data when the sub-command create webhook is used.
resources.webhooks.webhookVersion: The Kubernetes API version ( apiVersion ) used to scaffold the webhook resource.
resources.webhooks.conversion: It is true when the webhook was scaffolded with the --conversion flag, which means that it is a conversion webhook.
resources.webhooks.defaulting: It is true when the webhook was scaffolded with the --defaulting flag, which means that it is a defaulting webhook.
resources.webhooks.validation: It is true when the webhook was scaffolded with the --programmatic-validation flag, which means that it is a validation webhook.

Plugins

Since Kubebuilder version 3.0.0 , preliminary support for plugins has been added. You can Extend the CLI and Scaffolds as well. Note that when users run the CLI commands to perform the scaffolds, the plugins are used:

To initialize a project with a chain of global plugins:

kubebuilder init --plugins=pluginA,pluginB

To perform an optional scaffold using custom plugins:

kubebuilder create api --plugins=pluginA,pluginB

This section details how to extend Kubebuilder and create your plugins following the
same layout structures.

Note

You can check the existing design proposal docs at Extensible CLI and Scaffolding
Plugins: phase 1 and Extensible CLI and Scaffolding Plugins: phase 1.5 to know more
on what is provided by Kubebuilder CLI and API currently.

What is coming next?

To know more about Kubebuilder’s future vision of the Plugins architecture, see the
section Future vision for Kubebuilder Plugins.

Extending the CLI and Scaffolds


Creating your own plugins
Testing your plugins
Available plugins

This section describes the plugins supported and shipped with the Kubebuilder project.

To scaffold the projects


The following plugins are useful to scaffold the whole project with the tool.

Plugin: go.kubebuilder.io/v2 (Deprecated)
Key: go/v2
Description: Golang plugin responsible for scaffolding the legacy layout provided with Kubebuilder CLI >= 2.0.0 and < 3.0.0 .

Plugin: go.kubebuilder.io/v3 (Default scaffold with Kubebuilder init)
Key: go/v3
Description: Default scaffold used for creating a project when no plugin(s) are provided. Responsible for scaffolding Golang projects and their configurations.

Plugin: go.kubebuilder.io/v4-alpha (adds Apple Silicon support)
Key: go/v4
Description: Composite scaffold of base.go.kubebuilder.io/v3 and kustomize.common.kubebuilder.io/v2. Responsible for scaffolding Golang projects and their configurations.

To add optional features


The following plugins are useful to generate code and take advantage of optional
features

Plugin: declarative.go.kubebuilder.io/v1
Key: declarative/v1
Description: Optional plugin used to scaffold APIs/controllers using the kubebuilder-declarative-pattern project.

Plugin: grafana.kubebuilder.io/v1-alpha
Key: grafana/v1-alpha
Description: Optional helper plugin which can be used to scaffold Grafana manifests (dashboards) for the default metrics which are exported by controller-runtime.

Plugin: deploy-image.go.kubebuilder.io/v1-alpha
Key: deploy-image/v1-alpha
Description: Optional helper plugin which can be used to scaffold APIs and controllers with code implementation to deploy and manage an Operand (image).

To help projects using Kubebuilder as Lib to composite


new solutions and plugins

You can also create your own plugins, see:

Creating your own plugins.

Then, note that you can use the kustomize plugin, which is responsible for scaffolding the kustomize files under config/ , as well as the base language plugins, which are responsible for scaffolding the Golang files, to create your own plugins to work with other languages (e.g. as Operator-SDK does to allow users to work with Ansible/Helm) or to add helpers on top (e.g. as Operator-SDK does to add the features that integrate projects with OLM).

Plugin: kustomize.common.kubebuilder.io/v1
Key: kustomize/v1 (Deprecated)
Description: Responsible for scaffolding all manifests to configure projects with kustomize (v3) (create and update the config/ directory). This plugin is used in the composition to create the plugin go/v3 .

Plugin: kustomize.common.kubebuilder.io/v2
Key: kustomize/v2
Description: It has the same purpose as kustomize/v1 . However, it works with kustomize version v4 and addresses the required changes for future kustomize configurations. It will probably be used with the future go/v4-alpha plugin.

Plugin: base.go.kubebuilder.io/v3
Key: base/v3
Description: Responsible for scaffolding all files that specifically require Golang. This plugin is used in the composition to create the plugin go/v3 .

Plugin: base.go.kubebuilder.io/v4
Key: base/v4
Description: Responsible for scaffolding all files that specifically require Golang. This plugin is used in the composition to create the plugin go/v4 .

Plugins Versioning

ALPHA plugins can introduce breaking changes. For further info see Plugins
Versioning.


[Deprecated] go/v2 (go.kubebuilder.io/v2 - “Kubebuilder 2.x” layout)

! Deprecated

The go/v2 plugin cannot scaffold projects in which CRDs and/or Webhooks have a v1 API version. The go/v2 plugin scaffolds with the v1beta1 API version, which was deprecated in Kubernetes 1.16 and removed in 1.22 . This plugin was kept to ensure backwards compatibility with projects that were scaffolded with the old "Kubebuilder 2.x" layout and does not work with the new plugin ecosystem that was introduced with Kubebuilder 3.0.0 . More info

Since 28 Apr 2021 , the default layout produced by Kubebuilder changed and is done via go/v3 . We encourage you to migrate your project to the latest version if your project was built with a Kubebuilder version < 3.0.0 .

The recommended way to migrate a v2 project is to create a new v3 project and


copy over the API and the reconciliation code. The conversion will end up with a
project that looks like a native v3 project. For further information check the
Migration guide

The purpose of the go/v2 plugin is to scaffold Golang projects that help users build projects with controllers, while keeping backwards compatibility with the default scaffold made using Kubebuilder CLI 2.x.z releases.

You can check samples using this plugin by looking at the project-v2-<options>
directories under the testdata projects on the root directory of the Kubebuilder
project.

When should I use this plugin ?


Only if you are looking to scaffold a project with the legacy layout. Otherwise, it is recommended that you use the default Golang version plugin.
! Note

Be aware that this plugin version does not provide a scaffold compatible with the
latest versions of the dependencies used in order to keep its backwards
compatibility.

How to use it ?
To initialize a Golang project using the legacy layout and with this plugin run, e.g.:

kubebuilder init --domain tutorial.kubebuilder.io --repo tutorial.kubebuilder.io/project --plugins=go/v2

Note

By creating a project with this plugin, the PROJECT file scaffold will use the previous schema (project version 2), so that the Kubebuilder CLI knows what plugin version was used and will call its subcommands such as create api and create webhook .

Note that further Golang plugins versions use the new Project file schema, which
tracks the information about what plugins and versions have been used so far.

Subcommands supported by the plugin ?


Init - kubebuilder init [OPTIONS]
Edit - kubebuilder edit [OPTIONS]
Create API - kubebuilder create api [OPTIONS]
Create Webhook - kubebuilder create webhook [OPTIONS]

Further resources
Check the code implementation of the go/v2 plugin.
[Deprecated] go/v3 (go.kubebuilder.io/v3)

! Deprecated

The go/v3 plugin cannot fully support Kubernetes 1.25+ and does not work with Kustomize versions > v3.

The recommended way to migrate a v3 project is to create a new v4 project and


copy over the API and the reconciliation code. The conversion will end up with a
project that looks like a native v4 project. For further information check the
Migration guide

The Kubebuilder tool will scaffold with the go/v3 plugin by default. This plugin is a composition of the plugins kustomize.common.kubebuilder.io/v1 and base.go.kubebuilder.io/v3 . By using it, you can scaffold the default project, which is a helper to construct sets of controllers.

It basically scaffolds all the boilerplate code required to create and design controllers.
Note that by following the quickstart you will be using this plugin.

Examples

Samples are provided under the testdata directory of the Kubebuilder project. You
can check samples using this plugin by looking at the project-v3-<options>
projects under the testdata directory on the root directory of the Kubebuilder
project.

When to use it ?
If you are looking to scaffold Golang projects to develop projects using controllers

How to use it ?
As go/v3 is the default plugin there is no need to explicitly mention to Kubebuilder to
use this plugin.

To create a new project with the go/v3 plugin the following command can be used:

kubebuilder init --plugins=go/v3 --domain tutorial.kubebuilder.io --repo tutorial.kubebuilder.io/project

All the other subcommands supported by the go/v3 plugin can be executed similarly.

Note

Also, if needed, you can explicitly specify the plugin via the --plugins=go/v3 option.

Subcommands supported by the plugin


Init - kubebuilder init [OPTIONS]
Edit - kubebuilder edit [OPTIONS]
Create API - kubebuilder create api [OPTIONS]
Create Webhook - kubebuilder create webhook [OPTIONS]

Further resources
Check how plugins are composed by looking at this definition in the main.go.
Check the code implementation of the base Golang plugin
base.go.kubebuilder.io/v3 .
Check the code implementation of the Kustomize/v1 plugin.
Check controller-runtime to know more about controllers.

[Default Scaffold] go/v4 (go.kubebuilder.io/v4)

Kubebuilder will scaffold using the go/v4 plugin only if specified when initializing the
project. This plugin is a composition of the plugins
kustomize.common.kubebuilder.io/v2 and base.go.kubebuilder.io/v4 . It scaffolds a
project template that helps in constructing sets of controllers.

It scaffolds boilerplate code to create and design controllers. Note that by following the
quickstart you will be using this plugin.

Examples

You can check samples using this plugin by looking at the project-v4-<options>
projects under the testdata directory on the root directory of the Kubebuilder
project.
When to use it ?
If you are looking to scaffold Golang projects to develop projects using controllers

Migration from `go/v3`

If you have a project created with go/v3 (the default layout since 28 Apr 2021 and Kubebuilder release version 3.0.0 ) and want to migrate to go/v4 , see the migration guide Migration from go/v3 to go/v4

How to use it ?
To create a new project with the go/v4 plugin the following command can be used:

kubebuilder init --domain tutorial.kubebuilder.io --repo tutorial.kubebuilder.io/project --plugins=go/v4

Subcommands supported by the plugin


Init - kubebuilder init [OPTIONS]
Edit - kubebuilder edit [OPTIONS]
Create API - kubebuilder create api [OPTIONS]
Create Webhook - kubebuilder create webhook [OPTIONS]

Further resources
To see the composition of plugins, you can check the source code for the
Kubebuilder main.go.
Check the code implementation of the base Golang plugin
base.go.kubebuilder.io/v4 .
Check the code implementation of the Kustomize/v2 plugin.
Check controller-runtime to know more about controllers.


Declarative Plugin

The declarative plugin allows you to create controllers using the kubebuilder-declarative-
pattern. By using the declarative plugin, you can make the required changes on top of
what is scaffolded by default when you create a Go project with Kubebuilder and the
Golang plugins (i.e. go/v2, go/v3).

Examples

You can check samples using this plugin by looking at the “addon” samples inside the
testdata directory of the Kubebuilder project.

When to use it ?
If you are looking to scaffold one or more controllers following the pattern (see an example of the reconcile method implemented here)
If you want to have manifests shipped inside your Manager container. The declarative plugin works with channels, which allow you to push manifests. More info
How to use it ?
The declarative plugin must be used together with one of the available Golang plugins. If you want every API (and its respective controller) scaffolded in your project to adopt this pattern, then:

kubebuilder init --plugins=go/v3,declarative/v1 --domain example.org --repo example.org/guestbook-operator

If you want to adopt this pattern only for specific API(s) and their respective controller(s) (not for every API/controller scaffolded with the Kubebuilder CLI), then:

kubebuilder create api --plugins=go/v3,declarative/v1 --version v1 --kind Guestbook

Subcommands
The declarative plugin implements the following subcommands:

init ( $ kubebuilder init [OPTIONS] )


create api ( $ kubebuilder create api [OPTIONS] )

Affected files
The following scaffolds will be created or updated by this plugin:

controllers/*_controller.go
api/*_types.go
channels/packages/<packagename>/<version>/manifest.yaml
channels/stable
Dockerfile

Further resources
Read more about the declarative pattern
Watch the KubeCon 2018 Video Managing Addons with Operators
Check the plugin implementation
Grafana Plugin (grafana/v1-alpha)

The Grafana plugin is an optional plugin that can be used to scaffold Grafana Dashboards
to allow you to check out the default metrics which are exported by projects using
controller-runtime.

Examples

You can check its default scaffold by looking at the project-v3-with-metrics projects under the testdata directory on the root directory of the Kubebuilder project.

When to use it ?
If you are looking to observe the metrics exported by controller metrics and
collected by Prometheus via Grafana.

How to use it ?

Prerequisites:

Your project must be using controller-runtime to expose the metrics via the
controller default metrics and they need to be collected by Prometheus.
Access to Prometheus.
Prometheus should have an endpoint exposed. (For prometheus-operator , this is similar to: http://prometheus-k8s.monitoring.svc:9090 )
The endpoint is ready to be (or already is) the data source of your Grafana. See Add a data source
Access to Grafana. Make sure you have:
Dashboard edit permission
Prometheus Data source

Check the metrics section to learn how to enable the metrics for your projects scaffolded with Kubebuilder.

Note that in config/prometheus you will find the ServiceMonitor that enables the metrics on the default /metrics endpoint.

Basic Usage

The Grafana plugin is attached to the init subcommand and the edit subcommand:

# Initialize a new project with grafana plugin
kubebuilder init --plugins grafana.kubebuilder.io/v1-alpha

# Enable grafana plugin to an existing project
kubebuilder edit --plugins grafana.kubebuilder.io/v1-alpha

The plugin will create a new directory and scaffold the JSON files under it (i.e.
grafana/controller-runtime-metrics.json ).

Show case:

See an example of how to use the plugin in your project:


Now, let’s check how to use the Grafana dashboards

1. Copy the JSON file


2. Visit <your-grafana-url>/dashboard/import to import a new dashboard.
3. Paste the JSON content to Import via panel json , then press Load button
4. Select the data source for Prometheus metrics

5. Once the json is imported in Grafana, the dashboard is ready.

Grafana Dashboard

Controller Runtime Reconciliation total & errors

Metrics:
controller_runtime_reconcile_total
controller_runtime_reconcile_errors_total
Query:
sum(rate(controller_runtime_reconcile_total{job="$job"}[5m])) by (instance, pod)
sum(rate(controller_runtime_reconcile_errors_total{job="$job"}[5m])) by (instance, pod)
Description:
Per-second rate of total reconciliation as measured over the last 5 minutes
Per-second rate of reconciliation errors as measured over the last 5 minutes
Sample:

Controller CPU & Memory Usage

Metrics:
process_cpu_seconds_total
process_resident_memory_bytes
Query:
rate(process_cpu_seconds_total{job="$job", namespace="$namespace", pod="$pod"}[5m]) * 100
process_resident_memory_bytes{job="$job", namespace="$namespace", pod="$pod"}
Description:
Per-second rate of CPU usage as measured over the last 5 minutes
Allocated Memory for the running controller
Sample:

Seconds of P50/90/99 Items Stay in Work Queue

Metrics
workqueue_queue_duration_seconds_bucket
Query:
histogram_quantile(0.50, sum(rate(workqueue_queue_duration_seconds_bucket{job="$job", namespace="$namespace"}[5m])) by (instance, name, le))
Description
Seconds an item stays in workqueue before being requested.
Sample:

Seconds of P50/90/99 Items Processed in Work Queue

Metrics
workqueue_work_duration_seconds_bucket
Query:
histogram_quantile(0.50, sum(rate(workqueue_work_duration_seconds_bucket{job="$job", namespace="$namespace"}[5m])) by (instance, name, le))
Description
Seconds of processing an item from workqueue takes.
Sample:

Add Rate in Work Queue

Metrics
workqueue_adds_total
Query:
sum(rate(workqueue_adds_total{job="$job", namespace="$namespace"}[5m])) by (instance, name)
Description
Per-second rate of items added to work queue
Sample:

Retries Rate in Work Queue

Metrics
workqueue_retries_total
Query:
sum(rate(workqueue_retries_total{job="$job", namespace="$namespace"}[5m])) by (instance, name)
Description
Per-second rate of retries handled by workqueue
Sample:

Visualize Custom Metrics

The Grafana plugin supports scaffolding manifests for custom metrics.

Generate Config Template

When the plugin is triggered for the first time, grafana/custom-metrics/config.yaml is generated.

---
customMetrics:
#  - metric: # Raw custom metric (required)
#    type:   # Metric type: counter/gauge/histogram (required)
#    expr:   # Prom_ql for the metric (optional)
#    unit:   # Unit of measurement, examples: s,none,bytes,percent,etc. (optional)
Add Custom Metrics to Config

You can enter multiple custom metrics in the file. For each element, you need to specify
the metric and its type . The Grafana plugin can automatically generate expr for
visualization. Alternatively, you can provide expr and the plugin will use the specified
one directly.

---
customMetrics:
- metric: memcached_operator_reconcile_total # Raw custom metric (required)
type: counter # Metric type: counter/gauge/histogram (required)
unit: none
- metric: memcached_operator_reconcile_time_seconds_bucket
type: histogram

Scaffold Manifest

Once config.yaml is configured, you can run kubebuilder edit --plugins
grafana.kubebuilder.io/v1-alpha again. This time, the plugin will generate
grafana/custom-metrics/custom-metrics-dashboard.json , which can be imported into the
Grafana UI.

Show case:

See an example of how to visualize your custom metrics:


Subcommands
The Grafana plugin implements the following subcommands:

edit ( $ kubebuilder edit [OPTIONS] )

init ( $ kubebuilder init [OPTIONS] )

Affected files
The following scaffolds will be created or updated by this plugin:

grafana/*.json

Further resources
Check out the video showing how it works
Check out the video showing how the custom metrics feature works
Refer to a sample ServiceMonitor provided by the kustomize plugin
Check the plugin implementation
Grafana docs on importing a JSON file
The usage of ServiceMonitor by the Prometheus Operator

Deploy Image Plugin (deploy-image/v1-alpha)

The deploy-image plugin allows users to create controllers and custom resources which
will deploy and manage an image on the cluster, following guidelines and best practices.
It abstracts the complexity of achieving this goal while allowing users to improve and
customize their projects.

By using this plugin you will have:

a controller implementation to deploy and manage an Operand (image) on the cluster
tests to check the reconciliation, implemented using ENVTEST
the custom resource samples updated with the specs used
the Operand (image) wired into the manager via environment variables

Examples

See the project-v3-with-deploy-image directory under the testdata directory of the
Kubebuilder project for an example of a scaffold created using this plugin.

When to use it ?
This plugin is helpful for those who are getting started.
If you are looking to deploy and manage an image (Operand) using the Operator
pattern and this tool, the plugin will create an API/controller to be reconciled to
achieve this goal
If you are looking to speed up getting a working API and controller for an Operand

How to use it ?
After you create a new project with kubebuilder init you can create APIs using this
plugin. Ensure that you have followed the quick start before trying to use it.

Then, by using this plugin you can create APIs specifying the image (Operand) that you
would like to deploy on the cluster. Note that you can optionally specify the command
used to initialize this container via the flag --image-container-command
and the port with the --image-container-port flag. You can also specify the RunAsUser
value for the Security Context of the container via the flag --run-as-user , i.e.:

kubebuilder create api --group example.com --version v1alpha1 --kind Memcached \
  --image=memcached:1.6.15-alpine \
  --image-container-command="memcached,-m=64,modern,-v" \
  --image-container-port="11211" \
  --run-as-user="1001" \
  --plugins="deploy-image/v1-alpha"

Using make run

The make run target will execute main.go outside of the cluster so you can test the project
by running it locally. Note that when using this plugin the Operand image informed will be
stored via an environment variable in the config/manager/manager.yaml manifest.

Therefore, before running make run, you need to export any environment variables that the
scaffold expects. Example:

export MEMCACHED_IMAGE="memcached:1.4.36-alpine"
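For context, the controller scaffolded by this plugin resolves the Operand image from that environment variable at reconcile time. The sketch below only illustrates the idea; the helper name and error message are illustrative rather than the exact generated code.

package controllers

import (
    "fmt"
    "os"
)

// imageForMemcached sketches how the scaffolded controller resolves the
// Operand image: it reads the environment variable set in
// config/manager/manager.yaml (or exported locally before `make run`).
func imageForMemcached() (string, error) {
    image, found := os.LookupEnv("MEMCACHED_IMAGE")
    if !found {
        return "", fmt.Errorf("unable to find MEMCACHED_IMAGE environment variable with the image")
    }
    return image, nil
}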

Subcommands
The deploy-image plugin implements the following subcommands:
create api ( $ kubebuilder create api [OPTIONS] )

Affected files
With the create api command of this plugin, in addition to the existing scaffolding, the
following files are affected:

controllers/*_controller.go (scaffold the controller with the reconciliation implemented)
controllers/*_controller_test.go (scaffold the tests for the controller)
controllers/*_suite_test.go (scaffold/update the suite of tests)
api/<version>/*_types.go (scaffold the specs for the new API)
config/samples/*_.yaml (scaffold default values for its CR)
main.go (update to add the controller setup)
config/manager/manager.yaml (update with the envvar that stores the image)

Further Resources:
Check out the video showing how it works
See the design proposal documentation

To help projects using Kubebuilder as a library to compose new solutions and plugins

You can also create your own plugins, see:

Creating your own plugins.

Then, note that you can use the kustomize plugin, which is responsible for scaffolding the
kustomize files under config/ , as well as the base language plugins, which are responsible for
scaffolding the Golang files, to create your own plugins to work with other languages (i.e.
as Operator-SDK does to allow users to work with Ansible/Helm) or to add helpers on top, such
as Operator-SDK does to add its features to integrate the projects with OLM.

Plugin: kustomize.common.kubebuilder.io/v1 (Deprecated)
Key: kustomize/v1
Description: Responsible for scaffolding all manifests to configure projects with kustomize(v3) (create and update the config/ directory). This plugin is used in the composition to create the plugin go/v3 .

Plugin: kustomize.common.kubebuilder.io/v2
Key: kustomize/v2
Description: It has the same purpose as kustomize/v1 . However, it works with kustomize version v4 and addresses the required changes for future kustomize configurations. It will probably be used with the future go/v4-alpha plugin.

Plugin: base.go.kubebuilder.io/v3
Key: base/v3
Description: Responsible for scaffolding all files that specifically require Golang. This plugin is used in the composition to create the plugin go/v3 .

Plugin: base.go.kubebuilder.io/v4
Key: base/v4
Description: Responsible for scaffolding all files that specifically require Golang. This plugin is used in the composition to create the plugin go/v4 .
[Deprecated] Kustomize (kustomize/v1)

! Deprecated

The kustomize/v1 plugin is deprecated. If you are using this plugin, it is
recommended to migrate to the kustomize/v2 plugin, which uses Kustomize v5 and
provides support for Apple Silicon (M1).

If you are using Golang projects scaffolded with go/v3 , which uses this version,
please check the Migration guide to learn how to upgrade your projects.

The kustomize plugin allows you to scaffold all kustomize manifests used to work with
the language plugins such as go/v2 and go/v3 . By using the kustomize plugin, you can
create your own language plugins and ensure that you will have the same configurations
and features provided by it.

Supportability

This plugin uses kubernetes-sigs/kustomize v3 and the architectures supported are:

linux/amd64
linux/arm64
darwin/amd64

You might want to consider using kustomize/v2 if you are looking to scaffold projects
for other architectures (i.e. if you are looking to scaffold projects with
Apple Silicon/M1 ( darwin/arm64 ), this plugin will not work; more info: kubernetes-
sigs/kustomize#4612).

Note that projects such as Operator-SDK consume the Kubebuilder project as a library and
provide options to work with other languages like Ansible and Helm. The kustomize
plugin allows them to easily keep a maintained configuration and ensure that all
languages have the same configuration. It is also helpful if you are looking to provide
plugins which will perform changes on top of what is scaffolded by default. With this
approach, we do not need to manually keep this configuration updated in all possible
language plugins which use it, and we are also able to create “helper” plugins
which can work with many projects and languages.

Examples

You can check the kustomize content by looking at the config/ directory. Samples
are provided under the testdata directory of the Kubebuilder project.
When to use it ?
If you are looking to scaffold the kustomize configuration manifests for your own
language plugin

How to use it ?
If you want your language plugin to use kustomize, use the Bundle Plugin to specify that
your language plugin is a composition: your plugin is responsible for scaffolding everything
that is language-specific, and kustomize for its configuration, see:

// Bundle plugin which builds the golang project scaffold provided by Kubebuilder go/v3
// The following code creates a new plugin with its name and version via composition
// You can define that one plugin is a composition of one or many other plugins
gov3Bundle, _ := plugin.NewBundle(plugin.WithName(golang.DefaultNameQualifier),
    plugin.WithVersion(plugin.Version{Number: 3}),
    plugin.WithPlugins(kustomizecommonv1.Plugin{}, golangv3.Plugin{}), // scaffold the config/ directory and all kustomize files
    // Scaffold the Golang files and all that is specific to the language, e.g. go.mod, apis, controllers
)

Also, with Kubebuilder, you can use kustomize alone via:

kubebuilder init --plugins=kustomize/v1


$ ls -la
total 24
drwxr-xr-x 6 camilamacedo86 staff 192 31 Mar 09:56 .
drwxr-xr-x 11 camilamacedo86 staff 352 29 Mar 21:23 ..
-rw------- 1 camilamacedo86 staff 129 26 Mar 12:01 .dockerignore
-rw------- 1 camilamacedo86 staff 367 26 Mar 12:01 .gitignore
-rw------- 1 camilamacedo86 staff 94 31 Mar 09:56 PROJECT
drwx------ 6 camilamacedo86 staff 192 31 Mar 09:56 config

Or combined with the base language plugins:

# Provides the same scaffold as the go/v3 plugin, which is a composition
# (kubebuilder init --plugins=go/v3)
kubebuilder init --plugins=kustomize/v1,base.go.kubebuilder.io/v3 --domain example.org --repo example.org/guestbook-operator
Subcommands
The kustomize plugin implements the following subcommands:

init ( $ kubebuilder init [OPTIONS] )
create api ( $ kubebuilder create api [OPTIONS] )
create webhook ( $ kubebuilder create webhook [OPTIONS] )

Create API and Webhook

Its implementation of the create api subcommand will scaffold the kustomize
manifests which are specific to each API; see here. The same applies to its
implementation of create webhook.

Affected files
The following scaffolds will be created or updated by this plugin:

config/*

Further resources
Check the kustomize plugin implementation
Check the kustomize documentation
Check the kustomize repository

[Default Scaffold] Kustomize v2

The kustomize plugin allows you to scaffold all kustomize manifests used to work with
the language base plugin base.go.kubebuilder.io/v4 . This plugin is used to generate
the manifests under the config/ directory for projects built with the go/v4 plugin
(default scaffold).

Note that projects such as Operator-SDK consume the Kubebuilder project as a library and
provide options to work with other languages like Ansible and Helm. The kustomize
plugin allows them to easily keep a maintained configuration and ensure that all
languages have the same configuration. It is also helpful if you are looking to provide
plugins which will perform changes on top of what is scaffolded by default. With this
approach, we do not need to manually keep this configuration updated in all possible
language plugins which use it, and we are also able to create “helper” plugins
which can work with many projects and languages.
Examples

You can check the kustomize content by looking at the config/ directory provided in
the sample project-v4-* projects under the testdata directory of the Kubebuilder project.

When to use it
If you are looking to scaffold the kustomize configuration manifests for your own
language plugin
If you are looking for support for Apple Silicon ( darwin/arm64 ). (Before kustomize
4.x the binary for this platform was not provided)
If you are looking to try out the new syntax and features provided by
kustomize v4 (more info) and v5 (more info)
If you are NOT looking to build projects which will be used on Kubernetes cluster
versions < 1.22 (the new features provided by kustomize v4 are not officially supported
and might not work with kubectl < 1.22 )
If you are NOT looking to rely on special URLs in resource fields
If you want to use replacements, since vars are deprecated and might be removed
soon

How to use it
If you want your language plugin to use kustomize, use the Bundle Plugin to specify that
your language plugin is a composition: your plugin is responsible for scaffolding everything
that is language-specific, and kustomize for its configuration, see:

import (
    ...
    kustomizecommonv2 "sigs.k8s.io/kubebuilder/v3/pkg/plugins/common/kustomize/v2"
    golangv4 "sigs.k8s.io/kubebuilder/v3/pkg/plugins/golang/v4"
    ...
)

// Bundle plugin which builds the golang project scaffold provided by Kubebuilder go/v4 with kustomize/v2
// The following code creates a new plugin with its name and version via composition
// You can define that one plugin is a composition of one or many other plugins
gov4Bundle, _ := plugin.NewBundle(plugin.WithName(golang.DefaultNameQualifier),
    plugin.WithVersion(plugin.Version{Number: 4}),
    plugin.WithPlugins(kustomizecommonv2.Plugin{}, golangv4.Plugin{}), // scaffold the config/ directory and all kustomize files
    // Scaffold the Golang files and all that is specific to the language, e.g. go.mod, apis, controllers
)

Also, with Kubebuilder, you can use kustomize/v2 alone via:

kubebuilder init --plugins=kustomize/v2


$ ls -la
total 24
drwxr-xr-x 6 camilamacedo86 staff 192 31 Mar 09:56 .
drwxr-xr-x 11 camilamacedo86 staff 352 29 Mar 21:23 ..
-rw------- 1 camilamacedo86 staff 129 26 Mar 12:01 .dockerignore
-rw------- 1 camilamacedo86 staff 367 26 Mar 12:01 .gitignore
-rw------- 1 camilamacedo86 staff 94 31 Mar 09:56 PROJECT
drwx------ 6 camilamacedo86 staff 192 31 Mar 09:56 config

Or combined with the base language plugins:

# Provides the same scaffold as the go/v3 plugin (which is a composition) but with kustomize/v2
kubebuilder init --plugins=kustomize/v2,base.go.kubebuilder.io/v4 --domain example.org --repo example.org/guestbook-operator

Subcommands
The kustomize plugin implements the following subcommands:

init ( $ kubebuilder init [OPTIONS] )
create api ( $ kubebuilder create api [OPTIONS] )
create webhook ( $ kubebuilder create webhook [OPTIONS] )

Create API and Webhook

Its implementation of the create api subcommand will scaffold the kustomize
manifests which are specific to each API; see here. The same applies to its
implementation of create webhook.

Affected files
The following scaffolds will be created or updated by this plugin:

config/*

Further resources
Check the kustomize plugin implementation
Check the kustomize documentation
Check the kustomize repository
Check the release notes for Kustomize v5.0.0
Check the release notes for Kustomize v4.0.0
Also, you can compare the config/ directory between the samples project-v3
and project-v4 to check the difference in the syntax of the manifests provided by
default

Extending the CLI and Scaffolds

Overview
You can extend Kubebuilder so that your project has the same CLI features and
provides the plugin scaffolds.

CLI system
Plugins are run using a CLI object, which maps a plugin type to a subcommand and calls
that plugin's methods. For example, writing a program that injects an Init plugin into a
CLI and then calling CLI.Run() will call the plugin's SubcommandMetadata,
UpdatesMetadata and Run methods with the information a user has passed to the program
in kubebuilder init . An example follows:
package cli

import (
    log "github.com/sirupsen/logrus"
    "github.com/spf13/cobra"

    "sigs.k8s.io/kubebuilder/v3/pkg/cli"
    cfgv3 "sigs.k8s.io/kubebuilder/v3/pkg/config/v3"
    "sigs.k8s.io/kubebuilder/v3/pkg/plugin"
    kustomizecommonv1 "sigs.k8s.io/kubebuilder/v3/pkg/plugins/common/kustomize/v1"
    "sigs.k8s.io/kubebuilder/v3/pkg/plugins/golang"
    declarativev1 "sigs.k8s.io/kubebuilder/v3/pkg/plugins/golang/declarative/v1"
    golangv3 "sigs.k8s.io/kubebuilder/v3/pkg/plugins/golang/v3"
)

var (
    // The following is an example of the commands
    // that you might have in your own binary
    commands = []*cobra.Command{
        myExampleCommand.NewCmd(),
    }
    alphaCommands = []*cobra.Command{
        myExampleAlphaCommand.NewCmd(),
    }
)

// GetPluginsCLI returns the plugins based CLI configured to be used in your CLI binary
func GetPluginsCLI() (*cli.CLI) {
    // Bundle plugin which builds the golang project scaffold provided by Kubebuilder go/v3
    gov3Bundle, _ := plugin.NewBundle(plugin.WithName(golang.DefaultNameQualifier),
        plugin.WithVersion(plugin.Version{Number: 3}),
        plugin.WithPlugins(kustomizecommonv1.Plugin{}, golangv3.Plugin{}),
    )

    c, err := cli.New(
        // Add the name of your CLI binary
        cli.WithCommandName("example-cli"),

        // Add the version of your CLI binary
        cli.WithVersion(versionString()),

        // Register the plugin options which can be used to do the scaffolds via your CLI tool.
        // Here we use as an example the plugins which are implemented and provided by Kubebuilder
        cli.WithPlugins(
            gov3Bundle,
            &declarativev1.Plugin{},
        ),

        // Define the default plugin used by your binary, i.e. the plugin used
        // if no information is provided, such as when the user runs `kubebuilder init`
        cli.WithDefaultPlugins(cfgv3.Version, gov3Bundle),

        // Define the default project configuration version which will be used by the CLI
        // when none is informed via the --project-version flag
        cli.WithDefaultProjectVersion(cfgv3.Version),

        // Add your own commands to the CLI
        cli.WithExtraCommands(commands...),

        // Add your own alpha commands to the CLI
        cli.WithExtraAlphaCommands(alphaCommands...),

        // Add the completion option for your CLI
        cli.WithCompletion(),
    )
    if err != nil {
        log.Fatal(err)
    }

    return c
}

// versionString returns the CLI version
func versionString() string {
    // return your binary project version
}

This program can then be built and run in the following ways:

Default behavior:

# Initialize a project with the default Init plugin, "go.example.com/v1".
# This key is automatically written to a PROJECT config file.
$ my-bin-builder init
# Create an API and webhook with "go.example.com/v1" CreateAPI and
# CreateWebhook plugin methods. This key was read from the config file.
$ my-bin-builder create api [flags]
$ my-bin-builder create webhook [flags]

Selecting a plugin using --plugins :

# Initialize a project with the "ansible.example.com/v1" Init plugin.
# Like above, this key is written to a config file.
$ my-bin-builder init --plugins ansible
# Create an API and webhook with "ansible.example.com/v1" CreateAPI
# and CreateWebhook plugin methods. This key was read from the config file.
$ my-bin-builder create api [flags]
$ my-bin-builder create webhook [flags]
CLI manages the PROJECT file

The CLI is responsible for managing the PROJECT file config, which represents the
configuration of the projects that are scaffolded by the CLI tool.

Plugins
Kubebuilder provides scaffolding options via plugins. Plugins are responsible for
implementing the code that will be executed when the sub-commands are called. You can
create a new plugin by implementing the Plugin interface.

On top of being a Base , a plugin should also implement the SubcommandMetadata
interface so it can be run with a CLI. It can optionally set custom help text for the target
command; this method can be a no-op, which will preserve the default help text set by
the cobra command constructors.
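As a rough illustration, a minimal init-only plugin could look like the sketch below. It assumes the interfaces exposed by the sigs.k8s.io/kubebuilder/v3/pkg/plugin package (Name, Version, SupportedProjectVersions, GetInitSubcommand and the subcommand's Scaffold method); treat the exact method sets as an approximation and check the go.kubebuilder.io implementation referenced below for the authoritative definitions.

package mylanguage

import (
    "sigs.k8s.io/kubebuilder/v3/pkg/config"
    "sigs.k8s.io/kubebuilder/v3/pkg/machinery"
    "sigs.k8s.io/kubebuilder/v3/pkg/plugin"
)

// Plugin is a sketch of a minimal plugin that only wires an init subcommand.
type Plugin struct{}

func (Plugin) Name() string            { return "mylanguage.example.com" }
func (Plugin) Version() plugin.Version { return plugin.Version{Number: 1} }
func (Plugin) SupportedProjectVersions() []config.Version {
    return []config.Version{{Number: 3}}
}

// GetInitSubcommand binds the subcommand executed on `init`.
func (Plugin) GetInitSubcommand() plugin.InitSubcommand { return &initSubcommand{} }

type initSubcommand struct{}

// Scaffold writes the plugin's files; here it is a no-op placeholder.
func (*initSubcommand) Scaffold(fs machinery.Filesystem) error { return nil }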

Kubebuilder CLI plugins wrap scaffolding and CLI features in conveniently packaged Go
types that are executed by the kubebuilder binary, or any binary which imports them.
More specifically, a plugin configures the execution of one of the following CLI
commands:

init : project initialization.
create api : scaffold Kubernetes API definitions.
create webhook : scaffold Kubernetes webhooks.

Plugins are identified by a key of the form <name>/<version> . There are two ways to
specify a plugin to run:

Setting kubebuilder init --plugins=<plugin key> , which will initialize a project
configured for the plugin with key <plugin key> .

A layout: <plugin key> in the scaffolded PROJECT configuration file. Commands
(except for init , which scaffolds this file) will look at this value before running to
choose which plugin to run.

By default, <plugin key> will be go.kubebuilder.io/vX , where X is some integer.

For a full implementation example, check out Kubebuilder's native go.kubebuilder.io
plugin.

Plugin naming

Plugin names must be DNS1123 labels and should be fully qualified, i.e. they should have a suffix
like .example.com . For example, the base Go scaffold used with kubebuilder
commands has the name go.kubebuilder.io . Qualified names prevent conflicts between
plugin names; go.kubebuilder.io and go.example.com can both scaffold Go code
and can both be specified by a user.

Plugin versioning

A plugin's Version() method returns a plugin.Version object containing an integer
value and optionally a stage string of either “alpha” or “beta”. The integer denotes the
current version of a plugin. Two different integer values between versions of plugins
indicate that the two plugins are incompatible. The stage string denotes plugin stability:

alpha : should be used for plugins that are frequently changed and may break
between uses.
beta : should be used for plugins that are only changed in minor ways, ex. bug
fixes.

Breaking changes

Any change that will break a project scaffolded by the previous plugin version is a
breaking change.

Plugins Deprecation

Once a plugin is deprecated, have it implement a Deprecated interface so that a deprecation
warning will be printed when it is used.

Bundle Plugins
Bundle Plugins allow you to create a plugin that is a composition of many plugins:

// note that the resulting plugin key will look like myplugin.example/v1
myPluginBundle, _ := plugin.NewBundle(plugin.WithName(`<plugin-name>`),
    plugin.WithVersion(`<plugin-version>`),
    plugin.WithPlugins(pluginA.Plugin{}, pluginB.Plugin{}, pluginC.Plugin{}),
)

Note that this means that when a user of your CLI calls this plugin, the execution of the
sub-commands will follow the order in which the plugins were added, in a chain:

sub-command of plugin A ➔ sub-command of plugin B ➔ sub-command of plugin C

Then, to initialize using this “Plugin Bundle”, which will run the chain of plugins:

kubebuilder init --plugins=myplugin.example/v1

Runs the init sub-command of plugin A
And then runs the init sub-command of plugin B
And then runs the init sub-command of plugin C

Creating your own plugins

Overview
You can extend the Kubebuilder API to create your own plugins. If extending the CLI, your
plugin will be implemented in your project and registered to the CLI as has been done by
the SDK project. See its CLI code as an example.

When is it useful?
If you are looking to create plugins which support and work with another language.
If you would like to create helpers and integrations on top of the scaffolds done by
the plugins provided by Kubebuilder.
If you would like to have customized layouts according to your needs.

How can the plugins be used?

Kubebuilder provides a set of plugins to scaffold the projects, to help you extend and re-
use its implementation to provide additional features. For further information see
Available Plugins.

Therefore, if you have a need, you might want to propose a solution by adding a new
plugin which would be shipped with Kubebuilder by default.

However, you might also want to have your own tool to address your specific scenarios,
taking advantage of what is provided by Kubebuilder as a library. That way, you
can focus on addressing your needs and keep your solutions easier to maintain.

Note that by using Kubebuilder as a library, you can import its plugins and then create
your own plugins that do customizations on top. For instance, Operator-SDK does so with
the plugins manifest and scorecard to add its features. Also see here.
Another option, implemented with the Extensible CLI and Scaffolding Plugins - Phase 2, is
to extend Kubebuilder as a library to create only a specific plugin that can be called and used
with Kubebuilder as well.

Plugins proposal docs

You can check the proposal documentation for a better understanding of the motivations.
See the Extensible CLI and Scaffolding Plugins: phase 1, the Extensible CLI and
Scaffolding Plugins: phase 1.5 and the Extensible CLI and Scaffolding Plugins - Phase
2 design docs. Also, you can check the Plugins section.

Language-based Plugins
Kubebuilder offers the Golang-based operator plugins, which will help its CLI tool users
create projects following the Operator Pattern.

The SDK project, for example, has language plugins for Ansible and Helm, which are
similar options but for users who would like to work with these respective languages and
stacks instead of Golang.

Note that Kubebuilder provides the kustomize.common.kubebuilder.io plugin to help in these
efforts. This plugin will scaffold the common base without any language-specific scaffold
files, to allow you to extend the Kubebuilder style for your plugins.

In this way, currently, you can Extend the CLI and use the Bundle Plugin to create your
language plugins such as:

mylanguagev1Bundle, _ := plugin.NewBundle(plugin.WithName(language.DefaultNameQualifier),
    plugin.WithVersion(plugin.Version{Number: 1}),
    plugin.WithPlugins(kustomizecommonv1.Plugin{}, mylanguagev1.Plugin{}), // extend the common base from Kubebuilder
    // your language plugin which will do the scaffolds for the specific language on top of the common base
)

If you do not want to develop your plugin using Golang, you can follow its standard by
using the binary as follows:

kubebuilder init --plugins=kustomize

Then you can, for example, create your implementations for the sub-commands create
api and create webhook using your language of preference.
Why use the Kubebuilder style?

Kubebuilder and SDK are both broadly adopted projects which leverage the
controller-runtime project. They both allow users to build solutions using the
Operator Pattern and follow common standards.

Adopting these standards can bring significant benefits, such as joining forces on
maintaining the common standards and features provided by Kubebuilder and
taking advantage of the contributions made by the community. This allows you to
focus on the specific needs and requirements of your plugin and use-case.

And then, you will also be able to use custom plugins and options, currently or in the
future, which might be provided by these projects, as well as by any other project that
decides to pursue the same standards.

Custom Plugins
Note that users are also able to use plugins to customize their scaffolds and address
specific needs.

Note that Kubebuilder provides the deploy-image plugin that allows the user to create the
controller & CRs which will deploy and manage an image on the cluster:

kubebuilder create api --group example.com --version v1alpha1 --kind Memcached \
  --image=memcached:1.6.15-alpine \
  --image-container-command="memcached,-m=64,modern,-v" \
  --image-container-port="11211" \
  --run-as-user="1001" \
  --plugins="deploy-image/v1-alpha"

This plugin will perform a custom scaffold following the Operator Pattern.

Another example is the grafana plugin that scaffolds a new folder containing manifests
to visualize operator status on the Grafana Web UI:

kubebuilder edit --plugins="grafana.kubebuilder.io/v1-alpha"

In this way, by extending the Kubebuilder CLI, you can also create custom plugins such
as this one.

Feel free to check the implementation under:

deploy-image: https://github.com/kubernetes-
sigs/kubebuilder/tree/v3.7.0/pkg/plugins/golang/deploy-image/v1alpha1
grafana: https://github.com/kubernetes-
sigs/kubebuilder/tree/v3.7.0/pkg/plugins/optional/grafana/v1alpha
Plugin Scaffolding
Your plugin may add code on top of what is scaffolded by default with Kubebuilder sub-
commands ( init , create , ...). This is common, as you may expect your plugin to:

Create API
Update controller manager logic
Generate corresponding manifests

Boilerplates

The Kubebuilder internal plugins use boilerplates to generate the code files.

For instance, go/v3 scaffolds the main.go file by defining an object that implements
the machinery interface. In the implementation of Template.SetTemplateDefaults , the
raw template is set to the body. Such an object, implementing the machinery interface, is
later passed to the execution of scaffold.

Similarly, you may also design your plugin implementation following such references. You
can also view the other parts of the code file via the links above.

If your plugin is expected to modify part of the existing files with its scaffold, you may use
the functions provided by sigs.k8s.io/kubebuilder/v3/pkg/plugin/util. See the example of deploy-
image. In brief, the util package helps you customize your scaffold at a lower level, as in
the sketch below.
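As a rough sketch of that lower-level approach, the snippet below uses the InsertCode and ReplaceInFile helpers referenced in the testing section later in this chapter; the target file, marker, and replaced values are hypothetical, and the exact function signatures should be double-checked against the plugin/util package.

package mylanguage

import (
    "sigs.k8s.io/kubebuilder/v3/pkg/plugin/util"
)

// customizeScaffold sketches post-scaffold tweaks applied to files generated
// by another plugin in the chain. Paths and code fragments are illustrative only.
func customizeScaffold() error {
    // Insert a line right after an existing anchor comment in main.go.
    if err := util.InsertCode("main.go",
        "// +kubebuilder:scaffold:builder",
        "\n\t// custom wiring added by the mylanguage plugin"); err != nil {
        return err
    }
    // Replace a default value scaffolded by the base plugin (values are hypothetical).
    return util.ReplaceInFile("Makefile", "ENVTEST_K8S_VERSION = 1.26.0", "ENVTEST_K8S_VERSION = 1.27.1")
}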

Use Kubebuilder Machinery Lib

Notice that Kubebuilder also provides the machinery pkg where you can:

Define file I/O behavior.
Add markers to the scaffolded file.
Define the template for scaffolding.

Overwrite A File

You might want, for example, to overwrite a scaffold by using the option:

f.IfExistsAction = machinery.OverwriteFile

Let's imagine that you would like to have a helper plugin that is called in a chain
with go/v4 to add customizations on top. Therefore, after we generate the code by calling
the init subcommand of go/v4 , we would like to overwrite the Makefile to change
this scaffold via our plugin. In this way, we would implement the boilerplate for our
Makefile and then use this option to ensure that it gets overwritten.
See the example of deploy-image.
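A minimal sketch of such a boilerplate is shown below, assuming the machinery.TemplateMixin embedding used by Kubebuilder's own templates; the template body is a placeholder, not the real Makefile content.

package templates

import (
    "sigs.k8s.io/kubebuilder/v3/pkg/machinery"
)

var _ machinery.Template = &Makefile{}

// Makefile scaffolds (and here intentionally overwrites) the project Makefile.
type Makefile struct {
    machinery.TemplateMixin
}

// SetTemplateDefaults sets the path, the raw template body, and the
// IfExistsAction so that an existing Makefile scaffolded by go/v4 is overwritten.
func (f *Makefile) SetTemplateDefaults() error {
    if f.Path == "" {
        f.Path = "Makefile"
    }
    f.TemplateBody = makefileTemplate
    f.IfExistsAction = machinery.OverwriteFile
    return nil
}

// makefileTemplate is a placeholder body used for illustration only.
const makefileTemplate = `# Makefile customized by the helper plugin
`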

A Combination of Multiple Plugins

Since your plugin may frequently work together with other plugins, the command for
scaffolding may become cumbersome, e.g.:

kubebuilder create api --plugins=go/v3,kustomize/v1,yourplugin/v1

You can define a method on your scaffolder that calls the plugins' scaffolding
methods in order, as in the sketch below. See the example of deploy-image.
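The sketch below illustrates that ordering with hypothetical scaffolder and delegate names; the deploy-image scaffolder linked above remains the authoritative reference, and the plugin.CreateAPISubcommand interface is assumed to expose the Scaffold method used here.

package mylanguage

import (
    "sigs.k8s.io/kubebuilder/v3/pkg/machinery"
    "sigs.k8s.io/kubebuilder/v3/pkg/plugin"
)

// chainedCreateAPI is a hypothetical subcommand that delegates to the
// create api subcommands of other plugins before adding its own files.
type chainedCreateAPI struct {
    // subcommands of the plugins we build on, in the order they must run
    delegates []plugin.CreateAPISubcommand
}

// Scaffold runs each delegate's scaffolding in order and could then add
// this plugin's own scaffolds on top.
func (s *chainedCreateAPI) Scaffold(fs machinery.Filesystem) error {
    for _, sub := range s.delegates {
        if err := sub.Scaffold(fs); err != nil {
            return err
        }
    }
    // ... scaffold this plugin's own files here ...
    return nil
}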

Define Plugin Bundles

Alternatively, you can create a plugin bundle to include the target plugins. For instance:

mylanguagev1Bundle, _ := plugin.NewBundle(plugin.WithName(language.DefaultNameQualifier),
    plugin.WithVersion(plugin.Version{Number: 1}),
    plugin.WithPlugins(kustomizecommonv1.Plugin{}, mylanguagev1.Plugin{}), // extend the common base from Kubebuilder
    // your language plugin which will do the scaffolds for the specific language on top of the common base
)

Test Your Plugins

You can test your plugin in two dimensions:

1. Validate your plugin behavior through E2E tests
2. Generate sample projects based on your plugin that can be placed in ./testdata/

Write E2E Tests

You can check the Kubebuilder/v3/test/e2e/utils package, which offers a TestContext with rich
methods:

NewTestContext helps define:

Temporary folder for testing projects
Temporary controller-manager image
Kubectl execution method
The CLI executable ( kubebuilder , operator-sdk , OR your extended CLI)

Once defined, you can use TestContext to:

1. Set up the testing environment, e.g.:
Clean up the environment, create the temp dir. See Prepare
Install prerequisite CRDs: See InstallCertManager, InstallPrometheusManager
2. Validate the plugin behavior, e.g.:
Trigger the plugin's bound subcommands. See Init, CreateAPI
Use PluginUtil to verify the scaffolded outputs. See InsertCode, ReplaceInFile,
UncommentCode
3. Further make sure the scaffolded output works, e.g.:
Execute commands in your Makefile . See Make
Temporarily load the image of the testing controller. See LoadImageToKindCluster
Call Kubectl to validate running resources. See utils.Kubectl
4. Delete temporary resources after the tests have exited, e.g.:
Uninstall prerequisite CRDs: See UninstallPrometheusOperManager
Delete the temp dir. See Destroy

A minimal sketch of this flow is shown after the references below.

References: operator-sdk e2e tests, kubebuilder e2e tests
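As a rough sketch under the assumptions above (the constructor arguments and method signatures follow the referenced e2e suites, but should be verified there), a minimal test flow could look like:

package e2e_test

import (
    "testing"

    "sigs.k8s.io/kubebuilder/v3/test/e2e/utils"
)

// TestMyPluginScaffold sketches an E2E flow using the TestContext methods
// listed above; the plugin key and make target are hypothetical.
func TestMyPluginScaffold(t *testing.T) {
    kbc, err := utils.NewTestContext("kubebuilder", "GO111MODULE=on")
    if err != nil {
        t.Fatal(err)
    }
    defer kbc.Destroy() // delete the temp dir created for the test

    if err := kbc.Prepare(); err != nil { // clean up the env and create the temp dir
        t.Fatal(err)
    }
    // Trigger the plugin's bound init subcommand.
    if err := kbc.Init("--plugins", "mylanguage.example.com/v1", "--domain", kbc.Domain); err != nil {
        t.Fatal(err)
    }
    // Make sure the scaffolded output builds.
    if err := kbc.Make("build"); err != nil {
        t.Fatal(err)
    }
}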

Generate Test Samples

It can be helpful to view the content of sample projects generated by your plugin.

For example, Kubebuilder generates sample projects based on different plugins to validate
the layouts.

You can also simply use TestContext to generate folders of scaffolded projects from
your plugin. The commands are very similar to those mentioned in creating-plugins.

Following is a general workflow to create a sample with the plugin go/v3 ( kbc is an
instance of TestContext ):

To initialize a project:

By("initializing a project")
err = kbc.Init(
"--plugins", "go/v3",
"--project-version", "3",
"--domain", kbc.Domain,
"--fetch-deps=false",
"--component-config=true",
)
ExpectWithOffset(1, err).NotTo(HaveOccurred())

To define an API:
By("creating API definition")
err = kbc.CreateAPI(
"--group", kbc.Group,
"--version", kbc.Version,
"--kind", kbc.Kind,
"--namespaced",
"--resource",
"--controller",
"--make=false",
)
ExpectWithOffset(1, err).NotTo(HaveOccurred())

To scaffold webhook configurations:

By("scaffolding mutating and validating webhooks")


err = kbc.CreateWebhook(
"--group", kbc.Group,
"--version", kbc.Version,
"--kind", kbc.Kind,
"--defaulting",
"--programmatic-validation",
)
ExpectWithOffset(1, err).NotTo(HaveOccurred())

Plugins Versioning

Name: Kubebuilder version
Example: v2.2.0 , v2.3.0 , v2.3.1
Description: Tagged versions of the Kubebuilder project, representing changes to the source code in this repository. See the releases page for binary releases.

Name: Project version
Example: "1" , "2" , "3"
Description: Project version defines the scheme of a PROJECT configuration file. This version is defined in a PROJECT file's version .

Name: Plugin version
Example: v2 , v3
Description: Represents the version of an individual plugin, as well as the corresponding scaffolding that it generates. This version is defined in a plugin key, ex. go.kubebuilder.io/v2 . See the design doc for more details.

Incrementing versions

For more information on how Kubebuilder release versions work, see the semver
documentation.
Project versions should only be increased if a breaking change is introduced in the
PROJECT file scheme itself. Changes to the Go scaffolding or the Kubebuilder CLI do not
affect project version.

Similarly, the introduction of a new plugin version might only lead to a new minor version
release of Kubebuilder, since no breaking change is being made to the CLI itself. It’d only
be a breaking change to Kubebuilder if we remove support for an older plugin version.
See the plugins design doc versioning section for more details on plugin versioning.

Why is go/v2 different?

The scheme for project version "2" was defined before the concept of plugins was
introduced, so plugin go.kubebuilder.io/v2 is implicitly used for those project
types. The schema for project versions "3" and beyond defines a layout key that
informs the plugin system of which plugin to use.

Introducing changes to plugins


Changes made to plugins require a plugin version increase if and only if the change
breaks projects scaffolded with the previous plugin version. Once a
plugin version vX is stabilized (it doesn’t have an “alpha” or “beta” suffix), a new plugin
package should be created containing a new plugin with version v(X+1)-alpha . Typically
this is done by (semantically) cp -r pkg/plugins/golang/vX
pkg/plugins/golang/v(X+1) and then updating version numbers and paths. All further
breaking changes to the plugin should be made in this package; the vX plugin would
then be frozen to breaking changes.

You must also add a migration guide to the migrations section of the Kubebuilder book in
your PR. It should detail the steps required for users to upgrade their projects from vX to
v(X+1)-alpha .

Example

Kubebuilder scaffolds projects with plugin go.kubebuilder.io/v3 by default.

You create a feature that adds a new marker to the file main.go scaffolded by init
that create api will use to update that file. The changes introduced in your feature
would cause errors if used with projects built with plugins go.kubebuilder.io/v2
without users manually updating their projects. Thus, your changes introduce a
breaking change to plugin go.kubebuilder.io , and can only be merged into plugin
version v3-alpha . This plugin’s package should exist already.
FAQ

How is the value informed via the domain flag (i.e.
kubebuilder init --domain example.com) used when
we init a project?
After creating a project, you will usually want to extend the Kubernetes APIs and define
new APIs which will be owned by your project. Therefore, the domain value is tracked in
the PROJECT file, which defines the config of your project, and will be used as the domain to
create the endpoints of your API(s). Please ensure that you understand Groups and
Versions and Kinds, oh my!.

The domain is used as the group suffix, to explicitly show the resource group category. For
example, if you set --domain=example.com :

kubebuilder init --domain example.com --repo xxx --plugins=go/v4
kubebuilder create api --group mygroup --version v1beta1 --kind Mykind

Then the resulting resource group will be mygroup.example.com .

If the domain field is not set, the default value is my.domain .
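For illustration, the resulting group then shows up in the scaffolded groupversion_info.go roughly as in the sketch below (an approximation of the generated file, not a verbatim copy):

// api/v1beta1/groupversion_info.go (sketch)
// +groupName=mygroup.example.com
package v1beta1

import (
    "k8s.io/apimachinery/pkg/runtime/schema"
    "sigs.k8s.io/controller-runtime/pkg/scheme"
)

var (
    // GroupVersion is the group version used to register these objects:
    // the group is <--group>.<--domain>, i.e. mygroup.example.com.
    GroupVersion = schema.GroupVersion{Group: "mygroup.example.com", Version: "v1beta1"}

    // SchemeBuilder is used to add Go types to the GroupVersionKind scheme.
    SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}

    // AddToScheme adds the types in this group-version to the given scheme.
    AddToScheme = SchemeBuilder.AddToScheme
)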

I’d like to customize my project to use klog instead of
the zap logger provided by controller-runtime. How can I use
klog or another logger as the project logger?
In main.go you can replace:

opts := zap.Options{
    Development: true,
}
opts.BindFlags(flag.CommandLine)
flag.Parse()

ctrl.SetLogger(zap.New(zap.UseFlagOptions(&opts)))

with:

flag.Parse()
ctrl.SetLogger(klog.NewKlogr())
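For completeness, a sketch of what the top of main.go then assumes: klog.NewKlogr comes from k8s.io/klog/v2, and the zap import becomes unnecessary.

package main

import (
    "flag"

    "k8s.io/klog/v2"
    ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
    flag.Parse()
    // Use klog (via its logr adapter) as the logger for controller-runtime.
    ctrl.SetLogger(klog.NewKlogr())
    // ... the rest of the scaffolded main() stays unchanged ...
}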
After make run, I see errors like “unable to find leader
election namespace: not running in-cluster...”
You can enable leader election. However, if you are testing the project locally using
the make run target, which runs the manager outside of the cluster, then you might
also need to set the namespace in which the leader election resource will be created, as follows:
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:                  scheme,
    MetricsBindAddress:      metricsAddr,
    Port:                    9443,
    HealthProbeBindAddress:  probeAddr,
    LeaderElection:          enableLeaderElection,
    LeaderElectionID:        "14be1926.testproject.org",
    LeaderElectionNamespace: "<project-name>-system",

If you are running the project on the cluster with the make deploy target, then you might not
want to add this option. So, you might want to customize this behaviour using an
environment variable to only add this option for development purposes, such as:

leaderElectionNS := ""
if os.Getenv("ENABLE_LEADER_ELECTION_NAMESPACE") != "false" {
    leaderElectionNS = "<project-name>-system"
}

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
    Scheme:                  scheme,
    MetricsBindAddress:      metricsAddr,
    Port:                    9443,
    HealthProbeBindAddress:  probeAddr,
    LeaderElection:          enableLeaderElection,
    LeaderElectionNamespace: leaderElectionNS,
    LeaderElectionID:        "14be1926.testproject.org",
    ...

I am facing the error “open
/var/run/secrets/kubernetes.io/serviceaccount/token:
permission denied” when I deploy my project against
old Kubernetes versions. How do I sort it out?
If you are facing the error:
1.6656687258729894e+09 ERROR controller-runtime.client.config
unable to get kubeconfig {"error": "open
/var/run/secrets/kubernetes.io/serviceaccount/token: permission denied"}
sigs.k8s.io/controller-runtime/pkg/client/config.GetConfigOrDie
/go/pkg/mod/sigs.k8s.io/controller-
runtime@v0.13.0/pkg/client/config/config.go:153
main.main
/workspace/main.go:68
runtime.main
/usr/local/go/src/runtime/proc.go:250

when you are running the project against an old Kubernetes version (maybe <= 1.21 ), it
might be caused by the issue that the mounted token file is set to 0600 ; see the
solution here. The workaround is:

Add fsGroup in the manager.yaml:

securityContext:
  runAsNonRoot: true
  fsGroup: 65532 # add this fsGroup to make the token file readable

However, note that this problem is fixed and will not occur if you deploy the project on
newer versions (maybe >= 1.22 ).

TODO

If you’re seeing this page, it’s probably because something’s not done in the book yet, or
you stumbled upon an old link. Go see if anyone else has found this or bug the
maintainers.
