How to add Kubernetes-powered leader election to your Go apps

The Kubernetes standard library is full of gems, hidden away in the various subpackages that make up the ecosystem. One such example that I discovered recently is k8s.io/client-go/tools/leaderelection, which can be used to add a leader election protocol to any application running inside a Kubernetes cluster. This article will discuss what leader election is, how it's implemented in this Kubernetes package, and provide an example of how we can use this library in our own applications.

Leader Election

Leader election is a distributed systems concept that is a core building block of highly-available software. It allows for multiple concurrent processes to coordinate amongst each other and elect a single "leader" process, which is then responsible for performing synchronous actions like writing to a data store.

This is useful in systems like distributed databases or caches, where multiple processes are running to create redundancy against hardware or network failures, but can't write to storage simultaneously to ensure data consistency. If the leader process becomes unresponsive at some point in the future, the remaining processes will kick off a new leader election, eventually picking a new process to act as the leader.

Using this concept, we can create highly-available software with a single leader and multiple standby replicas.

In Kubernetes, the controller-runtime package uses leader election to make controllers highly-available. In a controller deployment, resource reconciliation only occurs when a process is the leader, and other replicas are waiting on standby. If the leader pod becomes unresponsive, the remaining replicas will elect a new leader to perform subsequent reconciliations and resume normal operation.
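
For instance, if you're already building on controller-runtime, turning this on is mostly a matter of setting a few fields on the manager options. Here's a minimal sketch (the lock ID and namespace are made-up values):

package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// With leader election enabled, the manager only starts its controllers
	// on the replica that currently holds the election Lease.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		LeaderElection:          true,
		LeaderElectionID:        "my-operator-lock", // hypothetical Lease name
		LeaderElectionNamespace: "default",          // hypothetical namespace
	})
	if err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}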

Kubernetes Leases

The leaderelection library uses a Kubernetes Lease, which is essentially a distributed lock that can be obtained by a process. Leases are native Kubernetes resources that are held by a single identity, for a given duration, with a renewal option. Here's an example spec from the docs:

apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  labels:
    apiserver.kubernetes.io/identity: kube-apiserver
    kubernetes.io/hostname: master-1
  name: apiserver-07a5ea9b9b072c4a5f3d1c3702
  namespace: kube-system
spec:
  holderIdentity: apiserver-07a5ea9b9b072c4a5f3d1c3702_0c8914f7-0f35-440e-8676-7844977d3a05
  leaseDurationSeconds: 3600
  renewTime: "2023-07-04T21:58:48.065888Z"
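
Here's a small sketch of reading one of these Lease objects from Go, using client-go's coordination/v1 client (assuming your kubeconfig points at a cluster that actually has the Lease above):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// Build a clientset from the active kubeconfig.
	clientset := kubernetes.NewForConfigOrDie(ctrl.GetConfigOrDie())

	// Fetch the Lease and print who holds it and when it was last renewed.
	lease, err := clientset.CoordinationV1().Leases("kube-system").
		Get(context.Background(), "apiserver-07a5ea9b9b072c4a5f3d1c3702", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	holder := ""
	if lease.Spec.HolderIdentity != nil {
		holder = *lease.Spec.HolderIdentity
	}
	fmt.Printf("holder=%s renewTime=%v\n", holder, lease.Spec.RenewTime)
}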

Leases are used by the k8s ecosystem in three ways:

  1. Node Heartbeats: Every Node has a corresponding Lease resource and updates its renewTime field on an ongoing basis. If a Lease's renewTime hasn't been updated in a while, the Node will be tainted as not available and no more Pods will be scheduled to it.
  2. Leader Election: In this case, a Lease is used to coordinate among multiple processes by having a leader update the Lease's holderIdentity. Standby replicas, with different identities, are stuck waiting for the Lease to expire. If the Lease does expire, and is not renewed by the leader, a new election takes place in which the remaining replicas attempt to take ownership of the Lease by updating its holderIdentity with their own. Since the Kubernetes API server disallows updates to stale objects, only a single standby will successfully update the Lease, at which point it will continue execution as the new leader (see the sketch after this list for what that takeover looks like in code).
  3. API Server Identity: Starting in v1.26, as a beta feature, each kube-apiserver replica will publish its identity by creating a dedicated Lease. Since this is a relatively slim, new feature, there's not much else that can be derived from the Lease object aside from how many API servers are running. But this does leave room to add more metadata to these Leases in future k8s versions.
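
To make the optimistic-concurrency step in the second point more concrete, here's a rough sketch of a manual Lease takeover. The real leaderelection package handles expiry checks, timing, and retries for you; the lock name and candidate identity below are hypothetical:

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	clientset := kubernetes.NewForConfigOrDie(ctrl.GetConfigOrDie())
	leases := clientset.CoordinationV1().Leases("default")

	// Read the current Lease (assume we've already decided it has expired).
	lease, err := leases.Get(context.Background(), "my-lock", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// Try to claim it by writing our own identity. The object we send back
	// carries the resourceVersion we just read, so if another standby updated
	// the Lease in the meantime, the API server rejects this write as a conflict.
	me := "candidate-1" // hypothetical identity
	now := metav1.NowMicro()
	lease.Spec.HolderIdentity = &me
	lease.Spec.RenewTime = &now

	_, err = leases.Update(context.Background(), lease, metav1.UpdateOptions{})
	switch {
	case apierrors.IsConflict(err):
		fmt.Println("another replica won the election")
	case err != nil:
		panic(err)
	default:
		fmt.Println("we are the new leader")
	}
}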

Now let's explore this second use case of Leases by writing a sample program to demonstrate how you can use them in leader election scenarios.

Example Program

In this code example, we are using the leaderelection package to handle the leader election and Lease manipulation specifics.

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	"k8s.io/client-go/tools/leaderelection"
	rl "k8s.io/client-go/tools/leaderelection/resourcelock"
	ctrl "sigs.k8s.io/controller-runtime"
)

var (
	// lockName and lockNamespace need to be shared across all running instances
	lockName      = "my-lock"
	lockNamespace = "default"

	// identity is unique to the individual process. This will not work for anything
	// beyond a toy example, since processes running in different containers or on
	// different machines can share the same pid.
	identity      = fmt.Sprintf("%d", os.Getpid())
)

func main() {
	// Get the active kubernetes context
	cfg, err := ctrl.GetConfig()
	if err != nil {
		panic(err.Error())
	}

	// Create a new lock. This will be used to create a Lease resource in the cluster.
	l, err := rl.NewFromKubeconfig(
		rl.LeasesResourceLock,
		lockNamespace,
		lockName,
		rl.ResourceLockConfig{
			Identity: identity,
		},
		cfg,
		time.Second*10,
	)
	if err != nil {
		panic(err)
	}

	// Create a new leader election configuration with a 15 second lease duration.
	// Visit https://pkg.go.dev/k8s.io/client-go/tools/leaderelection#LeaderElectionConfig
	// for more information on the LeaderElectionConfig struct fields
	el, err := leaderelection.NewLeaderElector(leaderelection.LeaderElectionConfig{
		Lock:          l,
		LeaseDuration: time.Second * 15,
		RenewDeadline: time.Second * 10,
		RetryPeriod:   time.Second * 2,
		Name:          lockName,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { println("I am the leader!") },
			OnStoppedLeading: func() { println("I am not the leader anymore!") },
			OnNewLeader:      func(identity string) { fmt.Printf("the leader is %s\n", identity) },
		},
	})
	if err != nil {
		panic(err)
	}

	// Begin the leader election process. This will block.
	el.Run(context.Background())

}

What's nice about the leaderelection package is that it provides a callback-based framework for handling leader elections. This way, you can act on specific state changes in a granular way and properly release resources when a new leader is elected. By running these callbacks in separate goroutines, the package takes advantage of Go's strong concurrency support to efficiently utilize machine resources.
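
In practice, the leader-only workload usually hangs off the context passed to OnStartedLeading, which is cancelled when leadership is lost. Here's a sketch of callbacks wired to real work that could sit alongside the main.go above (doWork is a hypothetical stand-in for your application logic):

package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/tools/leaderelection"
)

// doWork is a hypothetical stand-in for the leader-only workload.
func doWork() { fmt.Println("doing leader-only work") }

// newCallbacks ties the workload's lifetime to leadership of the Lease.
func newCallbacks() leaderelection.LeaderCallbacks {
	return leaderelection.LeaderCallbacks{
		OnStartedLeading: func(ctx context.Context) {
			// Run until the context is cancelled, which happens when we lose
			// or give up the Lease.
			ticker := time.NewTicker(5 * time.Second)
			defer ticker.Stop()
			for {
				select {
				case <-ctx.Done():
					return
				case <-ticker.C:
					doWork()
				}
			}
		},
		OnStoppedLeading: func() {
			// Release any leader-only resources here before a new leader takes over.
			fmt.Println("lost leadership, cleaning up")
		},
		OnNewLeader: func(identity string) {
			fmt.Printf("observed new leader: %s\n", identity)
		},
	}
}

Passing newCallbacks() as the Callbacks field of the LeaderElectionConfig above slots this straight into the earlier example.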

Testing it out

To test this, let's spin up a test cluster using kind.

$ kind create cluster

Copy the sample code into main.go, create a new module (go mod init leaderelectiontest) and tidy it (go mod tidy) to install its dependencies. Once you run go run main.go, you should see output like this:

$ go run main.go
I0716 11:43:50.337947     138 leaderelection.go:250] attempting to acquire leader lease default/my-lock...
I0716 11:43:50.351264     138 leaderelection.go:260] successfully acquired lease default/my-lock
the leader is 138
I am the leader!

The exact leader identity will be different from what's in the example (138), since this is just the PID of the process that was running on my computer at the time of writing.

And here's the Lease that was created in the test cluster:

$ kubectl describe lease/my-lock
Name:         my-lock
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  coordination.k8s.io/v1
Kind:         Lease
Metadata:
  Creation Timestamp:  2024-07-16T15:43:50Z
  Resource Version:    613
  UID:                 1d978362-69c5-43e9-af13-7b319dd452a6
Spec:
  Acquire Time:            2024-07-16T15:43:50.338049Z
  Holder Identity:         138
  Lease Duration Seconds:  15
  Lease Transitions:       0
  Renew Time:              2024-07-16T15:45:31.122956Z
Events:                    <none>

Note that the "Holder Identity" field matches the process's PID, 138.

Now, let's open up another terminal and run the same main.go file in a separate process:

$ go run main.go
I0716 11:48:34.489953     604 leaderelection.go:250] attempting to acquire leader lease default/my-lock...
the leader is 138

This second process will wait until the first one becomes unresponsive. Let's kill the first process and wait around 15 seconds. Now that the first process is not renewing its claim on the Lease, the .spec.renewTime field won't be updated anymore. This will eventually cause the second process to trigger a new leader election, since the Lease's renew time is older than its duration. Because this process is now the only one running, it will elect itself as the new leader.

the leader is 604
I0716 11:48:51.904732     604 leaderelection.go:260] successfully acquired lease default/my-lock
I am the leader!

If there were multiple processes still running after the initial leader exited, the first process to acquire the Lease would be the new leader, and the rest would continue to be on standby.

No single-leader guarantees

This package is not foolproof, in that it "does not guarantee that only one client is acting as a leader (a.k.a. fencing)". For example, if a leader is paused and lets its Lease expire, another standby replica will acquire the Lease. Then, once the original leader resumes execution, it will think that it's still the leader and continue doing work alongside the newly-elected leader. In this way, you can end up with two leaders running simultaneously.

To fix this, a fencing token which references the Lease needs to be included in each request to the server. A fencing token is effectively an integer that increases by 1 every time a Lease changes hands. So a client with an old fencing token will have its requests rejected by the server. In this scenario, if an old leader wakes up from sleep and a new leader has already incremented the fencing token, all of the old leader's requests would be rejected because it is sending an older (smaller) token than what the server has seen from the newer leader.
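
Sketched in code, a fencing check on the server side could look something like this. To be clear, this is purely illustrative: neither Kubernetes nor the leaderelection package provides it.

package main

import (
	"fmt"
	"sync"
)

// fencedStore rejects writes carrying a fencing token older than the newest
// token it has seen. Purely illustrative; this is not part of any Kubernetes API.
type fencedStore struct {
	mu        sync.Mutex
	lastToken int64
	data      map[string]string
}

func (s *fencedStore) Write(token int64, key, value string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if token < s.lastToken {
		return fmt.Errorf("stale fencing token %d (newest seen: %d)", token, s.lastToken)
	}
	s.lastToken = token
	s.data[key] = value
	return nil
}

func main() {
	store := &fencedStore{data: map[string]string{}}

	// The new leader writes with token 34 after taking over the lock.
	fmt.Println(store.Write(34, "k", "from-new-leader")) // <nil>

	// The old leader wakes up still holding token 33; its write is rejected.
	fmt.Println(store.Write(33, "k", "from-old-leader")) // stale fencing token 33 ...
}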

Implementing fencing in Kubernetes would be difficult without modifying the core API server to account for corresponding fencing tokens for each Lease. However, the risk of having multiple leader controllers is somewhat mitigated by the k8s API server itself. Because updates to stale objects are rejected, only controllers with the most up-to-date version of an object can modify it. So while we could have multiple controller leaders running, a resource's state would never regress to older versions if a controller misses a change made by another leader. Instead, reconciliation time would increase as both leaders need to refresh their own internal states of resources to ensure that they are acting on the most recent versions.
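
As an aside, this refresh-and-retry pattern is common enough that client-go ships a helper for it. Here's a sketch of how a controller might re-read and re-apply a change when its write is rejected as stale (the Deployment name is hypothetical):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	clientset := kubernetes.NewForConfigOrDie(ctrl.GetConfigOrDie())

	// On a conflict (someone else modified the object since we read it),
	// re-read the latest version and re-apply the change before retrying.
	err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
		dep, err := clientset.AppsV1().Deployments("default").
			Get(context.Background(), "my-app", metav1.GetOptions{}) // hypothetical Deployment
		if err != nil {
			return err
		}
		if dep.Labels == nil {
			dep.Labels = map[string]string{}
		}
		dep.Labels["reconciled"] = "true"
		_, err = clientset.AppsV1().Deployments("default").
			Update(context.Background(), dep, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}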

Still, if you're using this package to implement leader election using a different data store, this is an important caveat to be aware of.

Conclusion

Leader election and distributed locking are critical building blocks of distributed systems. When trying to build fault-tolerant and highly-available applications, having tools like these at your disposal is invaluable. The Kubernetes standard library gives us a battle-tested wrapper around its primitives to allow application developers to easily build leader election into their own applications.

While use of this particular library does limit you to deploying your application on Kubernetes, that seems to be the way the world is going recently. If in fact that is a dealbreaker, you can of course fork the library and modify it to work against any ACID-compliant and highly-available datastore.

Stay tuned for more k8s source deep dives!

How do Kubernetes Operators Handle Concurrency?

By default, operators built using Kubebuilder and controller-runtime process a single reconcile request at a time. This is a sensible setting, since it's easier for operator developers to reason about and debug the logic in their applications. It also constrains the throughput from the controller to core Kubernetes components like etcd and the API server.

But what if your work queue starts backing up and average reconciliation times increase due to requests that are left sitting in the queue, waiting to be processed? Luckily for us, a controller-runtime Controller struct includes a MaxConcurrentReconciles field (as I previously mentioned in my Kubebuilder Tips article). This option allows you to set the number of concurrent reconcile loops that are running in a single controller. So with a value above 1, you can reconcile multiple Kubernetes resources simultaneously.
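
Setting it is a one-liner when wiring up the controller with the builder. Here's a sketch, with a placeholder reconciler that watches Pods:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/controller"
)

// myReconciler is a hypothetical no-op reconciler, just for illustration.
type myReconciler struct{}

func (r *myReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	return ctrl.Result{}, nil
}

// setupController registers the reconciler with 4 concurrent workers.
func setupController(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&corev1.Pod{}). // hypothetical: watch Pods
		WithOptions(controller.Options{MaxConcurrentReconciles: 4}).
		Complete(&myReconciler{})
}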

Early in my operator journey, one question that I had was: how can we guarantee that the same resource isn't being reconciled at the same time by 2 or more goroutines? With MaxConcurrentReconciles set above 1, this could lead to all sorts of race conditions and undesirable behavior, as the state of an object inside a reconciliation loop could change via a side-effect from an external source (a reconciliation loop running in a different thread).

I thought about this for a while, and even implemented a sync.Map-based approach that would allow a goroutine to acquire a lock for a given resource (based on its namespace/name).
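
For the record, that approach looked roughly like the sketch below: hand out one mutex per namespace/name key and hold it for the duration of the reconcile. (As it turns out, none of this is necessary.)

package main

import "sync"

// resourceLocks hands out one mutex per namespace/name key, so that only one
// goroutine reconciles a given resource at a time.
type resourceLocks struct {
	locks sync.Map // map[string]*sync.Mutex
}

// lock blocks until the mutex for key is held and returns the unlock function.
func (r *resourceLocks) lock(key string) func() {
	m, _ := r.locks.LoadOrStore(key, &sync.Mutex{})
	mu := m.(*sync.Mutex)
	mu.Lock()
	return mu.Unlock
}

func main() {
	var locks resourceLocks

	// Inside a reconcile loop: serialize all work on "default/my-app".
	unlock := locks.lock("default/my-app")
	defer unlock()
	// ... reconcile the resource ...
}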

It turns out that all of this effort was for naught, since I recently learned (in a k8s slack channel) that the controller workqueue already includes this feature! Albeit with a simpler implementation.

This is a quick story about how a k8s controller's workqueue guarantees that unique resources are reconciled sequentially. So even if MaxConcurrentReconciles is set above 1, you can be confident that only a single reconciliation function is acting on any given resource at a time.

client-go/util

Controller-runtime uses the client-go/util/workqueue library to implement its underlying reconciliation queue. In the package's doc.go file, a comment states that the workqueue supports these properties:

  • Fair: items processed in the order in which they are added.
  • Stingy: a single item will not be processed multiple times concurrently, and if an item is added multiple times before it can be processed, it will only be processed once.
  • Multiple consumers and producers. In particular, it is allowed for an item to be reenqueued while it is being processed.
  • Shutdown notifications.

Wait a second... My answer is right here in the second bullet, the "Stingy" property! According to these docs, the queue will automatically handle this concurrency issue for me, without having to write a single line of code. Let's run through the implementation.

How does the workqueue work?

The workqueue struct has 3 main methods, Add, Get, and Done. Inside a controller, an informer would Add reconcile requests (namespaced-names of generic k8s resources) to the workqueue. A reconcile loop running in a separate goroutine would then Get the next request from the queue (blocking if it is empty). The loop would perform whatever custom logic is written in the controller, and then the controller would call Done on the queue, passing in the reconcile request as an argument. This would start the process over again, and the reconcile loop would call Get to retrieve the next work item.
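
In code, the consumer side of that loop looks roughly like the sketch below; controller-runtime does the equivalent of this for you, with reconcile requests as the queue items:

package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// worker drains the queue: Get blocks until an item is available, and Done
// tells the queue we're finished with it, so that re-adds of the same item
// become visible to consumers again.
func worker(wq workqueue.Interface) {
	for {
		item, shutdown := wq.Get()
		if shutdown {
			return
		}
		// ... reconcile the resource identified by item ...
		fmt.Println("processed", item)
		wq.Done(item)
	}
}

func main() {
	wq := workqueue.New()

	// An informer would normally be the producer here.
	wq.Add("default/my-app")
	wq.Add("default/other-app")

	// No more producers: once the queue drains, Get reports shutdown and the
	// worker returns.
	wq.ShutDown()
	worker(wq)
}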

This is similar to processing messages in RabbitMQ, where a worker pops an item off the queue, processes it, and then sends an "Ack" back to the message broker indicating that processing has completed and it's safe to remove the item from the queue.

Still, I have an operator running in production that powers QuestDB Cloud's infrastructure, and I wanted to be sure that the workqueue works as advertised. So I wrote a quick test to validate its behavior.

A little test

Here is a simple test that validates the "Stingy" property:

package main_test

import (
    "testing"

    "github.com/stretchr/testify/assert"

    "k8s.io/client-go/util/workqueue"
)

func TestWorkqueueStingyProperty(t *testing.T) {

    type Request int

    // Create a new workqueue and add a request
    wq := workqueue.New()
    wq.Add(Request(1))
    assert.Equal(t, wq.Len(), 1)

    // Subsequent adds of an identical object
    // should still result in a single queued one
    wq.Add(Request(1))
    wq.Add(Request(1))
    assert.Equal(t, wq.Len(), 1)

    // Getting the object should remove it from the queue
    // At this point, the controller is processing the request
    obj, _ := wq.Get()
    req := obj.(Request)
    assert.Equal(t, wq.Len(), 0)

    // But re-adding an identical request before it is marked as "Done"
    // should be a no-op, since we don't want to process it simultaneously
    // with the first one
    wq.Add(Request(1))
    assert.Equal(t, wq.Len(), 0)

    // Once the original request is marked as Done, the second
    // instance of the object will be now available for processing
    wq.Done(req)
    assert.Equal(t, wq.Len(), 1)

    // And since it is available for processing, it will be
    // returned by a Get call
    wq.Get()
    assert.Equal(t, wq.Len(), 0)
}

Since the workqueue uses a mutex under the hood, this behavior is threadsafe. So even if I wrote more tests that used multiple goroutines simultaneously reading and writing from the queue at high speeds in an attempt to break it, the workqueue's actual behavior would be the same as that of our single-threaded test.

All is not lost: Kubernetes did it

There are a lot of little gems like this hiding in the Kubernetes standard libraries, some of which live in not-so-obvious places (like a controller's workqueue implementation living in the client-go package). Despite this discovery, and others like it that I've made in the past, I still feel that my previous attempts at solving these problems were not complete time-wasters. They force you to think critically about fundamental problems in distributed systems, and they help you understand more of what is going on under the hood. So by the time I discover that "Kubernetes did it", I'm relieved that I can simplify my codebase and perhaps remove some unnecessary unit tests.

An Introduction to Custom Resource Definitions and Custom Resources (Operators 101: Part 2)

Check out Part 1 of this series for an introduction to Kubernetes operators.

Now that we've covered the basics of what a Kubernetes operator is, we can start to dig into the details of each operator component. Remember that a Kubernetes operator consists of 3 parts: a Custom Resource Definition, a Custom Resource, and a Controller. Before we can focus on the Controller, which is where an operator's automation takes place, we first need to define our Custom Resource Definition and use that to create a Custom Resource.

Example: RSS Feed Reader Application

Throughout this series, I will be using a distributed RSS feed reader application (henceforth known as "FeedReader") to demonstrate basic Kubernetes operator concepts. Let's start with a quick overview of the FeedReader design.

FeedReader Architecture

The architecture for our FeedReader application is split into 3 main components:

  1. A StatefulSet running our feed database. This holds both the URLs of the feeds that we wish to scrape and their scraped contents
  2. A CronJob to perform the feed scraping job at a regular interval
  3. A Deployment that provides a frontend to query the feed database and display RSS feed contents to the end user

FeedReader Architecture Diagram

The FeedReader application runs as a single golang binary that has 3 modes: web server, feed scraper, and embedded feed database. Each of these modes is designed to run in a Deployment, CronJob, and StatefulSet respectively.

Back to CRDs: Starting At the End

Just to recap, a Custom Resource Definition is the schema of a new Kubernetes resource type that you define. But instead of thinking in terms of an abstract schema, I believe that it's easier to start from your desired result (a concrete Custom Resource) and work backwards from there.

My very basic rule of thumb is that whenever you need a variable to describe an option in your application, you should consider defining a new field and setting a sample value in a Custom Resource to represent it. Once you have a Custom Resource with fields and values that clearly explain your application's desired behavior, you can then start to generalize these fields into an abstract schema, or CRD.

Let's perform this exercise with the FeedReader.

FeedReader CRD

To keep the article at a readable length, I'm just going to focus on the most critical aspects of the FeedReader. I've extracted two key variables that are required for the application to function at its most basic level:

  1. A list of RSS feed urls to scrape
  2. A time interval that defines how often to scrape the urls in the above list

Thus, I'll add 2 fields to my FeedReader CR:

  1. A list of strings that hold my feed urls
  2. A Golang duration string that specifies the scrape interval

Based on these two requirements, here's an example of a FeedReader Custom Resource that I would like to define:

---
apiVersion: crd.sklar.rocks/v1alpha1
kind: FeedReader
metadata:
  name: feed-reader-app
spec:
  feeds:
    - https://sklar.rocks/atom.xml
  scrapeInterval: 60m

Once created, the above resource should deploy a new instance of the FeedReader app, add one feed (https://sklar.rocks/atom.xml) to the database, and scrape that url every 60 minutes for new posts that are not currently in the feed database.

Note that I could use something like crontab syntax to define the scrape interval, but I find the golang human-readable intervals easier to work with, since I don't always have to cross-check with crontab.guru to make sure that I've defined the correct schedule.
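
Parsing that field in the controller is then just a time.ParseDuration call. Here's a quick sketch, using the 60m value from the Custom Resource above:

package main

import (
	"fmt"
	"time"
)

func main() {
	// The scrapeInterval field from the Custom Resource, as a Go duration string.
	scrapeInterval := "60m"

	interval, err := time.ParseDuration(scrapeInterval)
	if err != nil {
		panic(err) // an invalid value like "every hour" would land here
	}
	fmt.Println("scraping every", interval) // scraping every 1h0m0s
}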

Exercise

Can you think of any more options that the FeedReader would need? Personally, I would start with things like feed retention, logging, and scraping retry behavior, but the possibilities are endless!

From Instance to Schema

But before we can kubectl apply the above CR yaml, we first need to define and apply a schema, in the form of a Custom Resource Definition (CRD), so that we can create objects of this type in our Kubernetes cluster. Once the CRD is known to our cluster, we can then create and modify new instances of that CRD.

If you've ever worked with OpenAPI, Swagger, XML Schema, or any other schema definition, you probably know that in many cases the schema of an object is far more verbose than an instance of the object itself. In Kubernetes, to define a simple Custom Resource Definition with the two fields that I outlined above, the yaml would look something like this:

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: feedreaders.crd.sklar.rocks
spec:
  group: crd.sklar.rocks
  names:
    kind: FeedReader
    listKind: FeedReaderList
    plural: feedreaders
    singular: feedreader
  scope: Namespaced
  versions:
  - name: v1alpha1
    schema:
      openAPIV3Schema:
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            properties:
              feeds:
                description: List of feed urls to scrape at a regular interval
                items:
                  type: string
                type: array
              scrapeInterval:
                description: 'Interval at which to scrape feeds, in the form of a
                  golang duration string (see https://pkg.go.dev/time#ParseDuration for valid formats)'
                type: string
            type: object
          status:
            properties:
              lastScrape:
                description: Time of the last scrape
                format: date-time
                type: string
            type: object
        type: object
    served: true
    storage: true
    subresources:
      status: {}

Now that's a lot of yaml! This schema includes metadata like the resource's name, API group, and version, as well as the formal definitions of our custom fields and standard Kubernetes fields like "apiVersion" and "kind".

If you're thinking "do I actually have to write something like this for all of my CRDs?", the answer is no! This is where we can leverage the Kubernetes standard library and golang tooling when defining CRDs. For example, I didn't have to write a single line of the yaml in the example above! Here's how I did it.

Controller-gen to the Rescue

The Kubernetes community has developed a tool to generate CRD yaml files directly from Go structs, called controller-gen. This tool parses Go source code for structs and special comments known as "markers" that begin with //+kubebuilder. It then uses this parsed data to generate a variety of operator-related code and yaml manifests. One such manifest that it can create is a CRD.

Using controller-gen, here's the code that I used to generate the above schema. It's certainly much less verbose than the resulting yaml file!

package v1alpha1

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

type FeedReaderSpec struct {
	// List of feed urls to scrape at a regular interval
	Feeds []string `json:"feeds,omitempty"`
	// Interval at which to scrape feeds, in the form of a golang duration string
	// (see https://pkg.go.dev/time#ParseDuration for valid formats)
	ScrapeInterval string `json:"scrapeInterval,omitempty"`
}

type FeedReaderStatus struct {
	// Time of the last scrape
	LastScrape metav1.Time `json:"lastScrape,omitempty"`
}

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

type FeedReader struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   FeedReaderSpec   `json:"spec,omitempty"`
	Status FeedReaderStatus `json:"status,omitempty"`
}

//+kubebuilder:object:root=true

type FeedReaderList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []FeedReader `json:"items"`
}

Here, we've defined a FeedReader struct that has a Spec and a Status field. Seem familiar? Almost every Kubernetes resource that we deal with has a .spec and .status field in its definition. But the types of our FeedReader's Spec and Status are actually custom types that we defined directly in our Go code. This gives us the ability to craft our CRD schema in a compiled language with static type-checking while letting the tooling emit the equivalent yaml CRD spec to use in our Kubernetes clusters.

If you compare the CRD yaml with the FeedReaderSpec struct, you can see that controller-gen extracted the struct fields, types, and docstrings into the emitted yaml. Also, since our FeedReader embeds metav1.TypeMeta and metav1.ObjectMeta and is annotated with the //+kubebuilder:object:root=true comment, the fields from those types (like apiVersion and kind) are included in the yaml as well.

But what about things like the API group, crd.sklar.rocks, and version v1alpha1? These are handled in a separate Go file in the same package:

// +kubebuilder:object:generate=true
// +groupName=crd.sklar.rocks
package v1alpha1

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/scheme"
)

var (
	// GroupVersion is group version used to register these objects
	GroupVersion = schema.GroupVersion{Group: "crd.sklar.rocks", Version: "v1alpha1"}

	// SchemeBuilder is used to add go types to the GroupVersionKind scheme
	SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}
)

func init() {
	SchemeBuilder.Register(&FeedReader{}, &FeedReaderList{})
}

Here, we initialize a GroupVersion instance that defines our API group (crd.sklar.rocks) and version (v1alpha1). We then use it to instantiate the SchemeBuilder, which is used to register the FeedReader and FeedReaderList structs from above. When controller-gen is run, it will load this Go package, check for any registered structs, parse them (and any related comment markers), and emit the CRD yaml that we can use to install the CRD in our cluster.
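
Incidentally, the same SchemeBuilder is also how the operator binary itself learns about the new types at runtime, so that its client can read and write FeedReader objects. Here's a sketch of what that registration might look like in an operator's main function (the import path is hypothetical):

package main

import (
	ctrl "sigs.k8s.io/controller-runtime"

	// Hypothetical module path for the v1alpha1 package shown above.
	"example.com/feedreader/api/v1alpha1"
)

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}

	// Register the FeedReader types so the manager's client can read and
	// write them like any built-in resource.
	if err := v1alpha1.SchemeBuilder.AddToScheme(mgr.GetScheme()); err != nil {
		panic(err)
	}

	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}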

Assuming that our api definitions are in the ./api directory and we want the output in ./deploy/crd, all we need to do is run the following command to generate our FeedReader CRD's yaml definition:

$ controller-gen crd paths="./api/..." output:dir=deploy/crd

Installing our Custom Resource Definition

Now that we've generated our Custom Resource Definition, how do we install it in the cluster? It's actually very simple! A CustomResourceDefinition is its own Kubernetes resource type, just like a Deployment, Pod, StatefulSet, or any of the other Kubernetes resources that you're used to working with. All you need to do is kubectl create your CRD yaml, and the new CRD will be validated and registered with the API server.

You can check this by typing kubectl get crd, which will list all of the Custom Resource Definitions installed in your cluster. If you've copy-pasted the CRD yaml and installed the FeedReader CRD in your cluster, you should see feedreaders.crd.sklar.rocks in the list, under the v1alpha1 API version.

With our FeedReader CRD installed, you are free to create as many resources of kind: FeedReader as you wish. Each one of these represents a unique instance of the FeedReader application in your cluster, with each instance managing its own Deployment, StatefulSet and CronJob. This way, you can easily deploy and manage multiple instances of your application in your cluster, all by creating and modifying Kubernetes objects in the same way that you would for any other resource type.

What's Next?

Now that we have a Custom Resource Definition and a Custom Resource that defines our FeedReader application, we need to actually perform the orchestration that will manage the application in our cluster. Stay tuned for the next article in this series, which will discuss how to do that in a custom controller.

If you like this content and want to follow along with the series, I'll be writing more articles under the Operators 101 tag on this blog.