I wrote some Rust!

Thanks to a weekly Rust workshop hosted by one of my colleagues at QuestDB, along with some extra time spent in an airport lounge during a long layover, I was finally able to hack together some Rust that does something useful!

questdb-retention

The project I put together is questdb-retention, a small program that allows you to manage data retention in your own QuestDB instance.

Even though QuestDB does not support a traditional SQL DELETE statement (as of v7.0), you can still purge stale data from your database by using the DROP PARTITION command. Check out the docs for more details on data retention.
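
For reference, dropping partitions by hand looks something like this for a table partitioned by DAY (there is also a WHERE variant for dropping everything older than a given timestamp):

-- e.g. drop the partitions holding January 1st and 2nd, 2023
ALTER TABLE my_partitioned_table_by_day DROP PARTITION LIST '2023-01-01', '2023-01-02';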

The questdb-retention package allows you to drop old QuestDB partitions either interactively (on the command line) or through a YAML config file. Only tables that are partitioned are supported, for obvious reasons.
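
To give a flavor of what a tool like this boils down to, here's a minimal sketch (not the actual questdb-retention code) that drops old partitions over QuestDB's PostgreSQL wire protocol using the postgres crate. It assumes a table partitioned by DAY with a designated timestamp column named timestamp, and it keeps a fixed number of days rather than a fixed number of partitions, which for daily partitions amounts to roughly the same thing:

// Minimal sketch only; the real questdb-retention package is more flexible.
// QuestDB speaks the PostgreSQL wire protocol on port 8812, so the `postgres`
// crate (blocking client) is enough to issue DROP PARTITION statements.
use postgres::{Client, NoTls};

fn drop_old_partitions(conn_str: &str, table: &str, days_to_keep: u32) -> Result<(), postgres::Error> {
    let mut client = Client::connect(conn_str, NoTls)?;
    // Assumes the designated timestamp column is named `timestamp`.
    let sql = format!(
        "ALTER TABLE {} DROP PARTITION WHERE timestamp < dateadd('d', -{}, now())",
        table, days_to_keep
    );
    // Use the simple query protocol, which is a safe bet for DDL statements.
    client.batch_execute(&sql)
}

fn main() -> Result<(), postgres::Error> {
    // Connection string mirrors the YAML config shown below.
    drop_old_partitions(
        "host=localhost user=admin password=quest port=8812",
        "my_partitioned_table_by_day",
        30,
    )
}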

In my view, the best way to use this package is to write a YAML config, compile the executable, and add a cron job that runs the command at regular intervals (there's an example crontab entry after the config discussion below). Here's an example of a YAML file that you can use:

---
conn_str: host=localhost user=admin password=quest port=8812
tables:
  my_partitioned_table_by_month: 5
  my_partitioned_table_by_day: 5

The config only specifies the number of partitions to keep per table, but since QuestDB supports partition sizes anywhere from HOUR to YEAR, it's hard to tell how much data is actually being retained just by inspecting the config file. That's something worth improving in the future. Also, once the package reaches stability, I'll look at publishing it to crates.io to make it official.
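
To round out the cron idea from above, here's roughly what the crontab entry could look like. The install path and the --config flag are placeholders, not the package's actual CLI:

# Drop stale partitions every night at 3am (path and flag are illustrative)
0 3 * * * /usr/local/bin/questdb-retention --config /etc/questdb-retention.yaml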

All in all, once I got the hang of some Rust basics, I really enjoyed working in the language! I find Rust's pattern matching incredibly useful, and I'm starting to get comfortable with borrows, mutability, and common Rust types like Option and Result. I'm excited to learn more concepts like async and lifetimes, and to start writing some lower-level code that can really take advantage of all that Rust has to offer.

My Fractal FM3 Setup

I'm a longtime bass and guitar player who has lived in apartments for the better part of 15 years. Throughout this time, I've had a ton of "apartment-friendly" guitar setups that used different combinations of effects pedals, amp simulators, headphones, low-wattage amps, recording interfaces, and software plugins. Yet despite all of this tinkering, I've never been completely happy with my setup. Amps are too loud, "amp-like" pedals don't really sound like the real thing, and plugins lack the "feel" of a real amp.

When my family moved last year, I finally had my own office space! So of course, that meant also trying out a new guitar setup to work with the room. I figured that this time around, I would finally try an amp modeler. I wanted something portable with flexible I/O routing options, but the main criteria were tone, feel, and low volume.

I ended up choosing the Fractal Audio FM3, and for the first time in a long time, I'm completely happy with the result!

I'm not going to go into any of the device's features in this article; there are enough resources out there for that. Instead, I'll describe my ideal tone and setup, and how I got there with the FM3.

The Idea

I'm a fan of the hot-rodded Marshall sound: modded JCM800s, Friedmans, and Bogners. A few years back, I found this video on r/toobamps and was immediately hooked. This is my holy grail tone, and I would love to achieve something similar on the FM3.

Even though the FM3 is pretty much the most configurable amp modeler around, I'm actually a fan of simplicity when it comes to my music setups. I'd prefer to just plug in and play instead of tweaking lots of meta-parameters in search of the perfect sound. Even my pedal setups were relatively simple; I gave up on menu-diving digital pedals a long time ago and stuck to simple designs with only a few knobs to tweak.

I decided that I wanted to create an amp + pedalboard-like experience with the FM3. Given that the unit only has 3 footswitches, this seems like an impossible task without buying an extender like the FC-6 or FC-12. But that's not what I wanted to do. I wanted the whole thing to fit under my sit/stand desk, and also in a backpack for travel. Luckily for me, the FM3 provides a significant level of programmability for its footswitches, as you'll see later.

The Sound

At its core, the effects chain is relatively standard:

guitar -> wah -> drive -> modulation -> amp -> speakers -> delay -> reverb -> out

For the amp, I chose the FAS Hot Rod, a custom Fractal model designed as an idealized hot-rodded EL34-powered amp. The speaker block uses two different mics (and positions) on a classic '68 Marshall 4x12 cab.

Here's a picture of the chain:

Signal Chain

The Controls

This is where things start to get interesting. The FM3 offers 9 "scenes" per preset, where you can toggle different combinations of effects (and channels). As long as you're not changing effect or amp "channels", there is no sound cutoff between scene changes. But even with the number of effects that I have in my chain, there are too many possible combinations to create a scene for each. It is also difficult to seamlessly switch between 9 scenes on the FM3, despite the rich footswitch programmability.

So I decided to go for more of a traditional pedalboard-like experience, using the footswitches to toggle effects. How can I do this using only 3 switches? Here's a picture of my "Performance" footswitch layout to see how I do it:

Footswitches

I use the first switch as a dedicated drive switch, since I toggle the drive fairly often. If you hold this switch, it brings you to a utility menu (basically my tuner, since I don't use tap-tempo). A second tap (or hold) of the switch brings you back to the drive.

I use the second switch to toggle my modulation effect of choice. If I want to change the modulation effect, I just hold the pedal down and it cycles to the next effect type. What's actually happening is that holding down the middle switch loads a new footswitch view. This view is exactly the same as the first, just with a different modulation effect assigned to the second switch. So in practice, holding down the pedal just cycles through the modulation effects!

This can get a bit tricky: it's possible to toggle an effect on, cycle to the next view where the effect is still on, and then be unable to turn it off, since the footswitch is no longer controlling it. Turning it off again requires more holds of the footswitch to cycle back to the effect in question. This isn't ideal, but I do my best to work with the limitation by keeping all of my modulation blocks on the unit's display as a visual indicator of what is on and off. I also try to use just one modulation type per song so I can reset all of the effects between songs.

The final switch toggles my delay. I can also toggle reverb by holding down the switch. This hold actually triggers a change to a new scene with reverb enabled, which creates another problem: when you change scenes, all of your effect settings (drive, modulation, and delay) are wiped out. This means that it's difficult to toggle reverb on and off in the middle of a song. It hasn't been a huge deal for me, since reverb is usually an always-on or always-off effect depending on the situation. If I'm playing through headphones during practice, I like the reverb on. But if I'm in a live room, I might not need it, since it can lead to a washed-out sound.

What about the wah? I have it set to be auto-enabled by an expression pedal. When the pedal is disengaged (below 5% of its max value), the wah is turned off. And as soon as I push the pedal up a bit, it engages the wah. I also like a "cocked wah" sound, where the wah is engaged at the low end of its range, which creates a cool filtered effect. I created a custom controller curve that gives me some more wiggle-room to hit that sweet spot before taking off into the higher filter ranges.

Wah Curve

And finally, what about cleans? At first I tried multiple amp-channel setups, but I found it tricky to control yet another variable in the setup, while maintaining the effect flexibility that I desired. Instead, I do this the old-fashioned way, by turning down the guitar volume knob! The Fractal models are incredibly amp-like, and respond very well to the guitar's volume knob. By setting the volume down to around 2-3 on the guitar and lightening my picking strength, I can get a very convincing "clean-enough" tone for my use.

Conclusion

Ok, so I actually did end up tweaking a bunch of meta-parameters to get a setup that I like. Almost every time I come home from a jam, I'm back at the computer playing around with the footswitch and effect settings in the FM3-Edit software. But recently, my changes have been getting smaller and smaller, like tweaking the drive control or delay feedback by a few percent. I guess this means that I'm almost happy with my tone? And the added benefit is that the entire setup (FM3, expression pedal, strap, and cables) fits in my normal-sized backpack!

Using Prometheus, Loki, and Grafana to monitor QuestDB in Kubernetes

Originally posted on the QuestDB Blog

Monitoring QuestDB in Kubernetes

As any experienced infrastructure operator will tell you, monitoring and observability tools are critical for supporting production cloud services. Real-time metrics and logs help detect anomalies and aid in debugging, ultimately improving a team's ability to recover from (and even prevent) incidents. As container technologies reshape the infrastructure world, new tools are constantly emerging to solve these problems. Kubernetes and its ecosystem now offer a variety of solutions for infrastructure monitoring, and thanks to the orchestration benefits that Kubernetes provides, these tools are easy to install, maintain, and use.

Luckily, QuestDB is built with these concerns in mind, from core database features like its Prometheus-compatible metrics endpoint to support for orchestration tooling like Helm, making it easy to deploy on containerized infrastructure. This tutorial will describe how to use today's most popular open source tooling to monitor your QuestDB instance running in a Kubernetes cluster.

Components

Our goal is to deploy a QuestDB instance on a Kubernetes cluster while also connecting it to centralized metrics and logging systems. We will be installing the following components in our cluster:

  • A QuestDB database server
  • Prometheus to collect and store QuestDB metrics
  • Loki to store logs from QuestDB
  • Promtail to ship logs to Loki
  • Grafana to build dashboards with data from Prometheus and Loki

These components work together as illustrated in the diagram below:

Diagram

Prerequisites

To follow this tutorial, you will need Docker, kubectl, Helm, curl, and jq installed locally. For our Kubernetes cluster, we will be using kind (Kubernetes in Docker) to test the installation and components in an isolated sandbox, although you are free to use any Kubernetes flavor to follow along.

Getting started

Once you've installed kind, you can create a Kubernetes cluster with the following command:

kind create cluster

This will spin up a single-node Kubernetes cluster inside a Docker container and also modify your current kubeconfig context to point kubectl to the cluster's API server.
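
To double-check that kubectl is pointing at the new cluster, you can query it directly (this assumes you kept kind's default cluster name, which yields a context called kind-kind):

kubectl cluster-info --context kind-kind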

QuestDB

QuestDB endpoint

QuestDB exposes an HTTP metrics endpoint that can be scraped by Prometheus. This endpoint, on port 9003, returns a wide variety of QuestDB-specific metrics, including query, memory, and performance statistics. A full list of metrics can be found in the QuestDB docs.

Helm installation

QuestDB can be installed using Helm. You can add the official Helm repo to your registry by running the following commands:

helm repo add questdb https://helm.questdb.io/
helm repo update

Note that this tutorial is only compatible with QuestDB Helm chart versions 0.25.0 and higher. To confirm your QuestDB chart version, run the following command:

helm search repo questdb

Before installing QuestDB, we need to enable the metrics endpoint. To do this, we can override the QuestDB server configuration in a values.yaml file:

cat <<EOF > questdb-values.yaml
---
metrics:
  enabled: true
EOF

Once you've added the repo and created the values file, you can install QuestDB in the default namespace:

helm install -f questdb-values.yaml questdb questdb/questdb

To test the installation, you can make an HTTP request to the metrics endpoint. First, you need to create a Kubernetes port forward from the QuestDB pod to your localhost:

export QUESTDB_POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=questdb,app.kubernetes.io/instance=questdb" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $QUESTDB_POD_NAME 9003:9003

Next, make a request to the metrics endpoint:

curl http://localhost:9003/metrics

You should see a variety of Prometheus metrics in the response:

# TYPE questdb_json_queries_total counter
questdb_json_queries_total 0

# TYPE questdb_json_queries_completed_total counter
questdb_json_queries_completed_total 0

...

Prometheus

Now that we've exposed our metrics HTTP endpoint, we can deploy a Prometheus instance to scrape the endpoint and store historical data for querying.

Helm installation

Currently, the recommended way of installing Prometheus is using the official Helm chart. You can add the Prometheus chart to your local registry in the same way that we added the QuestDB repo above:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

As of this writing, we are using the Prometheus chart version 19.0.1 and app version v2.40.5.

Configuration

Before installing the chart, we need to configure Prometheus to scrape the QuestDB metrics endpoint. To do this, we will need to add our additional scrape configs to a prom-values.yaml file:

cat <<EOF > prom-values.yaml
---
extraScrapeConfigs: |
  - job_name: questdb
    metrics_path: /metrics
    scrape_interval: 15s
    scrape_timeout: 5s
    static_configs:
      - targets:
        - questdb.default.svc.cluster.local:9003
EOF

This config will make Prometheus scrape our QuestDB metrics endpoint every 15 seconds. Note that we are using the internal service URL provided to us by Kubernetes, which is only available to resources inside the cluster.

We're now ready to install the Prometheus chart. To do so, you can run the following command:

helm install -f prom-values.yaml prometheus prometheus-community/prometheus

It may take around a minute for the application to become responsive as it sets itself up inside the cluster. To validate that the server is scraping the QuestDB metrics, we can query the Prometheus server for a metric. First, we need to open up another port forward:

export PROM_POD_NAME=$(kubectl get pods --namespace default -l "app=prometheus,component=server" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $PROM_POD_NAME 9090

After waiting a minute or so for the first scrapes to complete, we can query the server for its available metrics, using jq to filter the output down to only the QuestDB metrics:

curl -s http://localhost:9090/api/v1/label/__name__/values | jq -r '.data[] | select( . | contains("questdb_"))'

You should see a list of QuestDB metrics returned:

questdb_commits_total
questdb_committed_rows_total
...

Loki

Metrics are only part of the application support story. We still need a way to aggregate and access application logs for better insight into QuestDB's performance and behavior. While kubectl logs is fine for local development and debugging, we will eventually need a production-ready solution that does not require the use of admin tooling. We will use Grafana's Loki, a scalable open-source solution that has tight Kubernetes integration.

Helm installation

Like the other components we've worked with, we will install Loki using an official Helm chart, loki-stack. The loki-stack chart includes Loki itself, which acts as the log database, and Promtail, a log shipper that populates it.

First, let's add the chart to our registry:

helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

Loki and Promtail are both enabled out of the box, so all we have to do is install the Helm chart without even supplying our own values.yaml.

helm install loki grafana/loki-stack

After around a minute or two, the application should be ready to go. To test that Promtail is shipping QuestDB logs to Loki, we first need to generate a few logs on our QuestDB instance. We can do this by hitting the QuestDB HTTP frontend, which emits a few INFO-level logs for each request. The frontend is exposed on a different port than the metrics endpoint, so we need to open up another port forward first.

# Open up the port forward
export QUESTDB_POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=questdb,app.kubernetes.io/instance=questdb" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $QUESTDB_POD_NAME 9000:9000

Now navigate to http://localhost:9000, which should point to the QuestDB HTTP frontend. Your browser should make a request that causes QuestDB to emit a few INFO-level logs.
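
If you'd rather stay in the terminal, hitting QuestDB's REST /exec endpoint with a throwaway query works just as well and also generates a few log lines:

curl -G --data-urlencode "query=SELECT 1" http://localhost:9000/exec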

You can query Loki to check if Promtail picked up and shipped those logs. Like the other components, we need to set up a port forward to access the Loki REST API before running the query.

export LOKI_POD=$(kubectl get pods --namespace default -l "name=loki,app=loki" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace default port-forward $LOKI_POD 3100:3100

Now, you can run the following LogQL query against the Loki server to return these logs. By default, Loki will look for logs at most an hour old. We will also be using jq to filter the response data.

curl -s -G --data-urlencode 'query={pod="questdb-0"}' http://localhost:3100/loki/api/v1/query_range | jq '.data.result[0].values'

You should see a list of recent QuestDB logs along with their nanosecond timestamps:

[
  [
    "1670359425100049380",
    "2022-12-13T20:43:45.099494Z I http-server disconnected [ip=127.0.0.1, fd=23, src=queue]"
  ],
  [
    "1670359425099842047",
    "2022-12-13T20:43:45.099278Z I http-server scheduling disconnect [fd=23, reason=12]"
  ],
  ...
]

Grafana

Now that we have all of our observability components set up, we need an easy way to aggregate our metrics and logs into meaningful and actionable dashboards. We will install and configure Grafana inside our cluster to visualize the metrics and logs in one easy-to-use place.

Helm Installation

The loki-stack chart makes this very easy for us to do. We just need to enable Grafana by customizing the chart's values.yaml and upgrading it.

cat <<EOF > loki-values.yaml
---
grafana:
  enabled: true
EOF

With this setting enabled, not only are we installing Grafana, but we are also registering Loki as a data source in Grafana to save us the extra work.

Now we can upgrade our Loki stack to include Grafana:

helm upgrade -f loki-values.yaml loki grafana/loki-stack

To get the admin password for Grafana, you can run the following command:

kubectl get secret --namespace default loki-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

And to access the Grafana frontend, you can use a port forward:

kubectl port-forward --namespace default service/loki-grafana 3000:80

Configuration

First navigate to http://localhost:3000 in your browser. You can log in using the username admin and the password that you obtained in the previous step.

Grafana login

Once you've logged in, use the sidebar to navigate to the "data sources" tab:

Grafana

Here, you can see that the Loki data source is already registered for us:

Grafana data sources

We still need to add our Prometheus data source. Luckily, Grafana makes this easy for us.

Click "Add Data Source" in the upper right and select "Prometheus". From here, the only thing you need to do is enter the internal cluster URL of your Prometheus server's Service: http://prometheus-server.default.svc.cluster.local. Scroll down to the bottom, click "Save & test", and wait for the green checkmark popup in the right corner.

Grafana data source config

Now you're ready to create dashboards with QuestDB metrics and logs!
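
As a starting point, here are illustrative queries (not from the tutorial itself) that a first dashboard could be built around: a PromQL rate over the query counter we saw earlier, and a LogQL filter for error-level QuestDB logs (the I in the sample log lines above is the info level; errors appear as E):

# Query throughput over the last 5 minutes (PromQL, Prometheus data source)
rate(questdb_json_queries_total[5m])

# Error-level QuestDB logs (LogQL, Loki data source)
{pod="questdb-0"} |= " E "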

Conclusion

I have provided a step-by-step tutorial for installing and deploying QuestDB with a monitoring infrastructure in a Kubernetes cluster. While there are additional considerations to make if you want to improve the reliability of the monitoring components, you can get very far with a setup just like this one.

If you like this content, we'd love to know your thoughts! Feel free to share your feedback or just come and say hello in the QuestDB Community Slack.