Error handling in Go

One of Rob Pike's famous Go Proverbs is that "Errors are values". This concept is deeply embedded in the Go language and is visible in countless lines of source code, from multiple return values to if err != nil constructs. While treating errors as values gives you error-handling flexibility that many languages lack, sometimes you just need to check an error's type and handle it the way you would in the except or catch clauses of languages like Python or Java.

The errors package

One of the most common uses of the errors package is errors.New, which creates a new error value from a string. But the package contains other useful utilities that can help improve code flow and readability when inspecting errors.

errors.Is

The errors package provides two incredibly useful functions: one for comparing an error against a known error value, and one for converting an error into a more specific type so callers can access additional metadata. The first is errors.Is(err, target error) bool.

For example, we can define a custom error like so. This error is designed to be returned when a config file is invalid.

var ErrInvalidConfig = errors.New("Invalid Config!")

Then, when reading a config file (in this case with an already-initialized configReader), we can check the error value against our custom error to differentiate between an invalid config and something like an IO or OS-level error.

if _, err := configReader.Read("/path/to/config.conf"); err != nil {
    if errors.Is(err, ErrInvalidConfig) {
        fmt.Println("Invalid config! Please check your config and try again")
    } else {
        fmt.Println(err)
    }
}
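
For errors.Is to report a match here, the Read method has to return ErrInvalidConfig itself or wrap it somewhere in its error chain. Below is a minimal sketch of what such a method might look like; the ConfigReader type and its placeholder validation rule are hypothetical, and the snippet assumes the fmt and os packages are imported.

type ConfigReader struct{}

func (r *ConfigReader) Read(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		// An IO/OS-level failure: wrap and return it, so it will not
		// match ErrInvalidConfig.
		return nil, fmt.Errorf("reading %s: %w", path, err)
	}
	if len(data) == 0 { // placeholder validation rule
		// Wrapping with %w keeps ErrInvalidConfig in the error chain, so
		// errors.Is(err, ErrInvalidConfig) returns true for callers.
		return nil, fmt.Errorf("%w: %s", ErrInvalidConfig, path)
	}
	return data, nil
}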

errors.As

We can even go a step further and use errors.As to convert the error value into an *InvalidConfigError to obtain additional metadata, such as the path of the invalid config file. Here is the definition of InvalidConfigError:

type InvalidConfigError struct {
	Reason string
	Path   string
}

Now we need to satisfy the built-in error interface by implementing the Error() string method on a pointer receiver:

func (e *InvalidConfigError) Error() string {
	return e.Reason
}

Then, if our configReader struct returns an *InvalidConfigError, we can print the invalid config file's path using errors.As(err error, target any) bool:

if _, err := configReader.Read("/path/to/config.conf"); err != nil {
    var ce *InvalidConfigError
    if errors.As(err, &ce) {
        fmt.Println(ce.Reason, ce.Path)
    } else {
        fmt.Println(err)
    }
}
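
For errors.As to succeed, the error chain must contain a *InvalidConfigError. One natural design, sketched below, is to have the struct also wrap the ErrInvalidConfig sentinel via an Unwrap method, so that both of the checks shown above keep working. This is an illustrative variant of the hypothetical ConfigReader, not code from any real package.

// Unwrap lets errors.Is(err, ErrInvalidConfig) keep matching even when the
// reader returns the richer *InvalidConfigError type.
func (e *InvalidConfigError) Unwrap() error {
	return ErrInvalidConfig
}

// A variant of the hypothetical Read method that returns the typed error.
func (r *ConfigReader) Read(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("reading %s: %w", path, err)
	}
	if len(data) == 0 { // placeholder validation rule
		// Returning the typed error keeps it in the chain, so errors.As
		// can recover the Reason and Path fields.
		return nil, &InvalidConfigError{
			Reason: "config file is empty",
			Path:   path,
		}
	}
	return data, nil
}

With this version, errors.Is still matches the sentinel through Unwrap, and errors.As fills in the *InvalidConfigError so the caller can print its Reason and Path.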

Real World Examples

Many packages encode additional information in custom errors. For example, the k8s.io/apimachinery/pkg/api/errors package contains a StatusError struct that provides significantly richer metadata about an API error than a plain error string. The package also includes a multitude of helper functions, such as IsNotFound(err error) bool and IsAlreadyExists(err error) bool, that take an error and return a boolean indicating whether it represents that particular kind of failure.
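
For example, a caller using the Kubernetes client libraries can branch on a missing object instead of string-matching the error message. The sketch below is illustrative (the podExists helper and its arguments are hypothetical and not part of apimachinery), but the IsNotFound call is the real helper from that package.

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podExists reports whether a pod exists, treating a NotFound response as a
// normal outcome rather than a failure.
func podExists(ctx context.Context, client kubernetes.Interface, namespace, name string) (bool, error) {
	_, err := client.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return false, nil
	}
	if err != nil {
		return false, fmt.Errorf("getting pod %s/%s: %w", namespace, name, err)
	}
	return true, nil
}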

Under the hood, these helpers work using a type assertion plus errors.As. As an example, let's take a look at the ReasonForError helper:

func ReasonForError(err error) metav1.StatusReason {
	if status, ok := err.(APIStatus); ok || errors.As(err, &status) {
		return status.Status().Reason
	}
	return metav1.StatusReasonUnknown
}

This function attempts to convert the error into the APIStatus interface (first with a direct type assertion, then falling back to errors.As) and reads that value's metadata to return the underlying reason for the error. This is much more useful than relying on strings.Contains or other string matching to determine an error's root cause.

As you can see, you can write rich error-handling code for your API's end users and fellow developers using a few simple primitives provided by the errors package.

Hacking in kind (Kubernetes in Docker)

How to dynamically add nodes to a kind cluster

Kind allows you to run a Kubernetes cluster inside Docker. This is incredibly useful for developing Helm charts, Operators, or even just testing out different k8s features in a safe way.

I've recently been working on an operator (built using the operator-sdk) that manages cluster node lifecycles. Kind allows you to spin up clusters with multiple nodes, using one Docker container per node and joining them over a common Docker network. However, the kind executable does not allow you to modify an existing cluster by adding or removing a node.

I wanted to see if this was possible using a simple shell script, and it turns out that it's actually not too difficult!

Creating the node

Using my favorite diff tool, DiffMerge, along with docker inspect to compare an existing kind node's state to a new container's, I experimented with various docker run flags until I got something close enough to a real kind node.
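
The snippets below use a couple of shell variables; the values here are just illustrative:

# Name for the new node's container (also used as its hostname and network alias).
NODE_NAME="kind-worker2"

# Local scratch file that will hold the kubeadm.conf template.
LOCAL_KUBEADM="./kubeadm.conf"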

docker run \
--restart on-failure \
-v /lib/modules:/lib/modules:ro \
--privileged \
-h $NODE_NAME \
-d \
--network kind \
--network-alias $NODE_NAME \
--tmpfs /run \
--tmpfs /tmp \
--security-opt seccomp=unconfined \
--security-opt apparmor=unconfined \
--security-opt label=disable \
-v /var \
--name $NODE_NAME \
--label io.x-k8s.kind.cluster=kind \
--label io.x-k8s.kind.role=worker \
--env KIND_EXPERIMENTAL_CONTAINERD_SNAPSHOTTER \
kindest/node:v1.25.2@sha256:9be91e9e9cdf116809841fc77ebdb8845443c4c72fe5218f3ae9eb57fdb4bace

Joining to the cluster

You can join new nodes to a k8s cluster by using the kubeadm join command. In this case, we can use docker exec to execute this command on our node after its container has started up.

This command won't work out of the box because kind uses a kubeadm.conf that does not exist in the node docker image. It is injected into the container by the kind executable.

Again, using my trusty DiffMerge tool, I compared two /kind/kubeadm.conf files in existing kind nodes and found very few differences. This allowed me to just grab one from any worker node to use as a template.

docker exec --privileged kind-worker cat /kind/kubeadm.conf > $LOCAL_KUBEADM

From here, I needed to set the new node's unique IP address in its kubeadm.conf. docker inspect can grab any node's IP, and since I'm working in bash, I used a simple sed replacement to swap the template node's IP address for the new node's IP in my local copy of kubeadm.conf.

TEMPLATE_IP=$(docker inspect kind-worker | jq -r '.[0].NetworkSettings.Networks.kind.IPAddress')
NODE_IP=$(docker inspect $NODE_NAME | jq -r '.[0].NetworkSettings.Networks.kind.IPAddress')

ESCAPED_TEMPLATE_IP=$(echo $TEMPLATE_IP | sed 's/\./\\./g')
ESCAPED_NODE_IP=$(echo $NODE_IP | sed 's/\./\\./g')

sed -i.bkp "s/${ESCAPED_TEMPLATE_IP}/${ESCAPED_NODE_IP}/g" $LOCAL_KUBEADM

Now that our kubeadm.conf is prepared, we need to copy it to the new node:

docker exec --privileged -i $NODE_NAME cp /dev/stdin /kind/kubeadm.conf < $LOCAL_KUBEADM

Finally, we can join our node to the cluster:

docker exec --privileged $NODE_NAME kubeadm join --config /kind/kubeadm.conf --skip-phases=preflight --v=6

Node Tags

Since you have complete control over the new node's kubeadm.conf, you can configure many of its properties for further testing. For example, to add extra labels to the new node, you can run something like this:

sed -i.bkp "s/node-labels: \"\"/node-labels: \"my-label-key=my-label-value\"/g" $LOCAL_KUBEADM

This will add the my-label-key=my-label-value label to the node once it joins the cluster.

Future Work

Based on this script, I believe it's possible to add a kind create node subcommand to add a node to an existing cluster. Stay tuned for that...

Writing a QuestDB ILP client in nim

Why nim?

I first encountered nim at a PyGotham talk about optimizing hot paths in Python code by rewriting the logic in nim and importing it back into your Python application. The language's Python-like syntax, static typing, easy-to-use FFI, and focus on performance appealed to me. Since that talk, nim had been in the back of my mind as a new low-ish level language to pick up.

Fast forward to a few months ago, when I felt the familiar urge to learn a new programming language. First I tried Rust, due to its recent inclusion in the Linux kernel, but I ended up being fairly unproductive after a few nights of hacking. I realized it would take a lot more work to get comfortable with the compiler before I could write anything substantial. Not to knock Rust; I'm still spending time learning it! But with limited brain capacity after a long day of work, I just wanted to sling some code and explore some new computing concepts that interested me.

Enter nim. After my first night of coding, I was able to write working code that implemented basic CRDT data types. I was happy: my code compiled, I could iterate on it quickly thanks to the Python-like syntax, and I was getting the hang of the type system.

QuestDB

In September, I joined QuestDB as a Senior Cloud Engineer and became fully immersed in the project and its ecosystem. One of the ingestion methods that QuestDB supports is the InfluxDB Line Protocol (ILP). Even though we have a number of mature ILP clients written in Rust, C++, Python, and Java, I'm a hands-on learner and wanted some experience working with the database. I figured that writing an ILP client in nim would be a fantastic project to improve my skills while also learning more about the protocol and some QuestDB fundamentals.

questdb-nim

After some hacking, I ended up with a general-purpose ILP client that supports both synchronous and async execution. Nim made my job easy with a few nice features:

Single file execution

Unlike many compiled languages, nim makes it easy to compile and execute any file with a .nim extension. Similar to Python's if __name__ == "__main__":, you can run code guarded by when isMainModule: simply by adding the -r flag to the nim compile command. This lets you iterate quickly and easily on smaller features instead of setting up an entire test harness or framework.
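
As a tiny illustration (the file and proc names are made up for the example):

# crdt.nim
proc merge(a, b: int): int =
  ## Placeholder logic; a real CRDT merge would go here.
  max(a, b)

when isMainModule:
  # Runs only when this file is compiled directly, not when it is imported.
  echo merge(3, 5)

Compiling and running it is a single command: nim compile -r crdt.nim (or the short form, nim c -r crdt.nim).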

Autodocumentation

Nim comes with an autodocumentation tool, nim doc, that makes it incredibly easy to generate clean HTML docs for your module. For a production-grade project, I would run this in a CI pipeline using GitHub Actions or a similar CI/CD platform. But for the initial stages of the library, I decided to just include a git pre-commit hook and an installation command in my Makefile. While this does make commits larger and clutters up the git history slightly, all documentation-related changes live in a single directory, so they are easy to ignore when looking at code-related changes.

Compile-time support

Nim has strong compile-time support, including metaprogramming, easy-to-use compiler pragmas, and even the ability to execute code at compile time. For example, adding async support to the library was simple using the {.async.} pragma. All I had to do was mark procs with that pragma and they became usable with the std/async* packages, which make it very easy to instantiate objects and execute async code.
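
As a rough sketch of the pattern (the ping proc and its behavior are invented for illustration, not taken from the library):

import std/asyncdispatch

proc ping(host: string): Future[string] {.async.} =
  # Pretend to do non-blocking I/O by awaiting a timer.
  await sleepAsync(100)
  return "pong from " & host

when isMainModule:
  # waitFor drives the event loop until the future completes.
  echo waitFor ping("localhost")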

The Library

You can find the finished product on GitHub: https://github.com/sklarsa/questdb-nim/. I've tested it against a local QuestDB instance, and I have some simple unit tests to validate basic functionality.

Future work

  1. Add ILP authentication support to the library
  2. Improve ILP-line parsing and error-handling. There are still many edge cases that would not be parsed correctly by the logic that I implemented.
  3. E2E testing and benchmarking

Nim Resources