Matt Crook

Software engineer in the healthcare field • UoA dropout • Please consider the environment before printing this email. #software #dotnet #tech #devops

Productionising a Listed Blog

As part of my efforts to simplify my personal projects, I merged my personal and professional blogs into a single platform: Listed. Listed is a neat little value-add to the Standard Notes app, which I use - it allows you to publish blog posts to a website straight from a note.

The issue is that it is quite slow, has a so-so uptime record, and offers no access to site analytics. Their response is that Listed is intended as a place to publish journal-style posts - certainly not something to rely on for hosting mission-critical websites.

Fair enough, but I really like the convenience of publishing from notes, and I don't want to host a separate site for my professional blog any more. Fortunately, there is a middle ground that addresses all of these issues - provided you are a paying Standard Notes subscriber, since it requires a Listed custom domain.

Speed and Uptime

The page load time is not awesome with Listed - the average response time as measured from my uptime system hosted in West US is about two seconds, which seems fine until you realise that this is just to load the HTML page and no other resources. The uptime for the last thirty days was measured at 99.95%, which is actually quite a good month for Listed.

To resolve these issues, you can simply proxy your Listed custom domain through Cloudflare. By default, this will only cache resources such as stylesheets, JavaScript, and images - which may be enough for a decent performance increase on its own, but using custom page rules to set the 'cache level' to 'cache everything' nets the largest improvement. With everything cached, the impact of outages is also reduced.

Pro tip: make sure you don't cache your settings page. Create another page rule which matches your settings page, set the 'cache level' to 'bypass', and make sure it's at the top - Cloudflare will only execute one matching page rule per request.
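If you prefer to script your Cloudflare configuration rather than click through the dashboard, the same pair of rules can be created through Cloudflare's v4 API. The sketch below just builds the request payloads - the domain, paths, zone ID, and token are placeholder assumptions, not values from this site.

```python
# Build page-rule payloads for Cloudflare's v4 API.
# The domain and paths below are placeholders for illustration.

def page_rule(url_pattern: str, cache_level: str, priority: int) -> dict:
    """Build the JSON body for POST /zones/{zone_id}/pagerules."""
    return {
        "targets": [{
            "target": "url",
            "constraint": {"operator": "matches", "value": url_pattern},
        }],
        "actions": [{"id": "cache_level", "value": cache_level}],
        "priority": priority,  # position in the rule list; only the first match runs
        "status": "active",
    }

# Rule at the top: bypass the cache for the settings page.
bypass_settings = page_rule("blog.example.com/settings*", "bypass", 1)

# Below it: cache everything else, HTML included.
cache_all = page_rule("blog.example.com/*", "cache_everything", 2)

# Applying them would look something like this (requires the
# `requests` package, a real zone ID, and an API token):
# import requests
# requests.post(
#     f"https://api.cloudflare.com/client/v4/zones/{zone_id}/pagerules",
#     headers={"Authorization": f"Bearer {api_token}"},
#     json=bypass_settings,
# )
```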

Screenshot of the Cloudflare custom page rules overview used on this site

Screenshot of a custom Cloudflare page rule displaying the options necessary to cache all content explicitly

You can see from the graph below the difference it has made - the average page load time is now 215 milliseconds, which is a significant improvement.

Screenshot of a graph showing the average page response time for the Listed blogging platform


Analytics

This is another area where the Listed platform could do with some improvement. There is no option to add analytics tracking tags to your blog, such as Google Analytics, Application Insights, or New Relic. The team is considering it, but it is currently not supported due to privacy and security concerns, which is absolutely fair enough. However, I find analytics too important not to have.

As before, this is achievable through Cloudflare. Once your site is proxied through Cloudflare, you can make use of page injection at the edge. The Cloudflare CDN allows you to alter the website it serves without any modification to the origin - which is perfect for this situation.

There are apps in Cloudflare for injecting Google Analytics and Matomo Analytics, as well as Google Tag Manager for everything else. However, I want to make use of Cloudflare Insights, which can be set up automatically in the same way, with no changes to the origin - a single click injects the necessary scripts.

Screenshot of the Cloudflare insights statistics for this site over the last 24 hours

In conclusion, Listed is a really convenient platform for blogging - being able to publish notes directly from the same app where you keep your grocery list is quite cool. It has a few issues here and there that would make you second-guess hosting anything professional through it, but with a little help from Cloudflare those issues evaporate.

If you haven't already, go ahead and check out Standard Notes and Listed. It's a really cool productivity suite.

Scaling Back Side Projects

Over the last year I have found myself busier than ever with work, yet my desire to create side projects has not scaled back to balance this workload.

I always have a bunch of ideas I want to try out as projects, but I always end up getting halfway through and leaving them to stagnate when work picks up. Then I get annoyed that I haven't completed anything, and wonder why.

This ends here: it's time for me to begin aggressively scaling back my side projects. I am archiving all of my active projects, deleting plans for the ones I have not started, and turning off auto-renew for the domain name collection I have built up.

Going forward, any project I want to begin must be limited in scope and completed within six months of starting, and I will allow myself only one active project at a time. I see this as a good way of preventing myself from embarking on 'professional' projects, which always end up as a massive time sink.

It is kind of a shame, but there just is not enough time in the day to plan and build projects with larger scopes than my actual work. Start small.

Managing Kubernetes Workloads with Terraform

Terraform is an awesome way of managing infrastructure as code. It builds a graph of your definition, compares it to what exists already, and makes only the required changes. It handles dependencies automatically, allowing you to configure cloud resources based on the outputs of others.

One of the so-called "providers" in Terraform is Kubernetes, which allows you to make changes to a running Kubernetes cluster in the same way you would with infrastructure. This lets you create a managed Kubernetes cluster with a cloud provider, and then deploy to it using the same language.

If you are at all familiar with Terraform, you might try something like this, all in one file:

# don't do this (read on)

terraform {
  required_version = ">= 0.14.0"
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "2.22.3"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.13.1"
    }
  }
}

resource "digitalocean_kubernetes_cluster" "cluster" {
  name = "my-sample-cluster"
  # ... snip ...
}

provider "kubernetes" {
  host  = digitalocean_kubernetes_cluster.cluster.endpoint
  token = digitalocean_kubernetes_cluster.cluster.kube_config.0.token

  cluster_ca_certificate = base64decode(digitalocean_kubernetes_cluster.cluster.kube_config.0.cluster_ca_certificate)
}

resource "kubernetes_namespace" "example" {
  metadata {
    name = "my-first-namespace"
  }
}

The above will work, but likely only once. On a subsequent apply, you might come across an error indicating that the host was incorrect - appearing to be localhost on port 80, the provider defaults. This is because of the way the Terraform dependency graph works: providers are always evaluated before the actual resources.

To get around this limitation, and to avoid difficult-to-diagnose issues, it is important to split your definitions into two completely separate Terraform projects. This means you will need to run the terraform apply command twice, once for each project. The Kubernetes provider in your 'workload' project would then be configured either by sharing state through Terraform Cloud, or by using a data source for the cluster you created (if supported by your cloud provider).

Here is an example of this in action:

# resources/main.tf

terraform {
  required_version = ">= 0.14.0"
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "2.22.3"
    }
  }
}

resource "digitalocean_kubernetes_cluster" "cluster" {
  name = "my-sample-cluster"
  # ... snip ...
}

# workload/main.tf

terraform {
  required_version = ">= 0.14.0"
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "2.22.3"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.13.1"
    }
  }
}

data "digitalocean_kubernetes_cluster" "cluster" {
  name = "my-sample-cluster"
}

provider "kubernetes" {
  host  = data.digitalocean_kubernetes_cluster.cluster.endpoint
  token = data.digitalocean_kubernetes_cluster.cluster.kube_config.0.token

  cluster_ca_certificate = base64decode(data.digitalocean_kubernetes_cluster.cluster.kube_config.0.cluster_ca_certificate)
}

resource "kubernetes_namespace" "example" {
  metadata {
    name = "my-first-namespace"
  }
}

A Return to The Office

Most of the nation has been working from home since about August of last year, when we had the last big lockdown - and subsequently, as it dragged on, gave up on the idea of doing those as a nation.

This has worked out quite well for me. I've traded a small apartment and a short commute for a home and garden with a longer one. Being able to roll directly out of bed and log in five minutes before the work day begins has been revolutionary.

It hasn't come without downsides, however. It wouldn't be uncommon for me to go a whole day without physically speaking to another person, outside the usual rituals such as stand-up. I do tend to miss the casual interactions with my colleagues. Swivelling my chair around to chat to the people next to me felt really organic, compared to a more formal and organised Slack or Teams call.

So I suppose it's no surprise that management want everyone to come in at least two days a week now. Might also have something to do with the business paying for a lease on an empty office.

Full-time back in the office is hell, as I discovered when the team was asked to come in the entire week during a crunch. Waking up early, starting my commute in the dark, and returning home in the dark sucks. And then I have to cook and clean. Gross.

The carrot in this whole situation is that we are getting our workstations and peripherals upgraded. Goddammit it's working, too.

Positive Habits

I'm trying to write more often. In quite a nebulous, hand-wavey way, I want to improve my ability to put thoughts to paper, so I can communicate more effectively. To that end, I've begun writing in a private journal just about every day, and committing to writing one of these blog posts weekly.

This isn't about writing a novel every other day - it's more like jotting down a few lines in a journal throughout the day, and then sitting down at the end of the week to research and write up a short post on a subject that interests me.

Practice makes perfect, so this habit creates the occasion to do just that. I'm hoping that this will help me to improve my writing skills over time, as well as help with memory by way of the journal.

The system I'm writing this on, Listed, has the concept of the 100 day writing challenge. That's very good and all, but I'm not so sure I could do it each and every day - best to test the waters before giving it a go, lest I be disappointed.

Fixing Difficult to Search Problems

My daily-driver OS is Linux Mint, which I have installed on a pretty new "gaming" laptop. Every now and then, it would crash. Completely frozen - it would respond to nothing except a hard reboot by holding the power button down. What on earth do I type into that little search box to figure out a solution to this problem?

I knew it wasn't just a graphics issue; I couldn't even SSH into the machine from another, and pings went unanswered. Maybe I search for "Linux Mint system completely unresponsive random"? That didn't turn up any relevant results.

Maybe it's something to do with my hardware? Perhaps "Linux Mint MSI laptop 2022 system frozen" would be a better search term? Nope, nothing particularly useful there either.

By pure chance, I was exploring the updater application and noticed that I could select a different Linux kernel. I saw the one I was currently on - pretty old, but still supported. Oh.

There was a much newer one that I could select, so I figured sure, why not. I updated, and have not had a single crash in months. Incredible!

Perhaps the moral of the story is to explore more, rather than articulate more. Does that make sense? Doesn't really sound like a good moral.

Java Woes

For personal projects, as well as some other basic automation tasks, I use Jenkins to host pipelines. It's very extensible, and I really like the old-school UI. Takes me back to the times of Minecraft server plugins and such.

One of the plugins I used (written in Java) appeared to have broken - it interacted with a public API that annoyingly broke its contract without publishing a new version. It looked like a simple change to fix, so I went and pulled the code to see if I could do it myself and create a PR. I found that somebody already had a fix, but it had been sitting unapproved in a PR since the start of the year. Uh-oh, abandonware.

It was a pretty simple plugin, really, so I figured I would just re-create it myself. I use OpenAPI quite a bit at work in C#, so I figured I would use that to generate the client, rather than manually interacting with the public API like the other plugin did. All I needed to do was create a new project, add a generate hook with the API's OpenAPI doc, and publish it. I do it all the time.

Oh, how naïve I was. Java is quite complicated; I struggled just to get a simple project up and running. I also had no idea why clicking the green hammer wasn't producing any output - apparently I had to set up Maven? And how the hell are dependencies supposed to work - do I really have to add the transitive dependencies myself?

I spent a good couple of hours trying to get everything together, only to find out that the OpenAPI doc provided by that public API wasn't well-constructed, and would need extensive modification to get it to produce a Java project that would build without error. Real agile.

You never know how well you have it until you try something new. Java is so much more complicated than I had previously thought, and there are so many ways to do simple things like dependency and build management. The .NET ecosystem certainly has problems of its own, but it's so much simpler and quicker to get up and running. Modern .NET has considered the development experience from the ground up, and it really shows.

Enterprise Resource Planning, For the Home

I live in a flat which I share with some really good friends; meals are planned and shared to lighten the load and costs between us all, same with household supplies. Because of the distributed nature of this setup, it can be time-consuming to organise the grocery shopping and meal plans each week.

As an avid homelabber, I spend plenty of time lurking in the r/homelab and r/selfhosted subreddits, and I came across a self-hosted application called grocy - which promised to help with this organisation. I was desperate to make it easier to plan things out for the week, so I launched a Docker instance and started exploring.

Now, my flat has a fair bit of organisation through the use of Trello, and I have some extensive automation set up using the built-in Butler, but none of this was particularly helpful for this task because there was no inventory tracking. It was more suited to things like chores, where cards come in and are randomly assigned on a schedule.

So by tracking inventory with grocy, I had hoped that automating groceries would be easier, lessening the time I personally had to invest in managing the household by distributing it amongst the rest of my flatmates. I gave it a go, and everyone was on board to try it out.

I ended up cancelling the trial after a week. I found it didn't lighten the workload at all - quite the opposite. With grocy, you must scan groceries in and consume things out, or the automation I was hoping for - minimum stock amounts, adding recipes to a shopping list - just doesn't work. The flatmates would often forget to consume what they used, and of course we weren't too particular about stock-takes and inventory, as they would just take too much time.

Another thing we all found way too difficult was manually adding all the products. You have to define products before you can add them to the shopping list or recipes, which doubles the effort required for what was previously a simpler task: throwing some free-form text on a checklist.

We ended up switching back to Trello for the grocery list, but with a few changes; we now have new columns/lists in the board for recipes and the meal plan. Recipes have an ingredients checklist, which gets copied to the grocery checklist when moved to the meal plan - all automated with Butler, of course. This makes planning for the next week much simpler and faster, which was one of the things I was hoping to achieve with grocy.

Grocy certainly has a purpose, but unfortunately it wasn't for us. We're a busy bunch with not much time to spare on household duties, so we were not able to put in the effort required to get the gains promised by the system.

After Hours On-Call Support

The company I work for is the antithesis of Google; where Google kills products almost religiously, my workplace has deprecated almost nothing in thirty years of existence, but still loves to ship new products and systems.

There are valid reasons for this operating strategy - our customers appreciate the long-term support and reliability - but couple it with an unprecedented shortage of software engineers in the industry, and you have a recipe for trouble.

I'm on the after-hours on-call roster, which is voluntary. There are not many others on the list, and the number of services we monitor is high - plus a catch-all Jira service desk that is routed to my team.

With so many things that could fail, the likelihood of receiving a callout is very high. There can be gaps of several months between callouts, interspersed with so-called "hell weeks", where there can be a callout almost every night.

Being called out this often is a problem, but I always come up empty when thinking of a solution. The business wants high availability for all services, but we don't have the head-count to work on the reliability of those services.

Monitoring your Self-Hosted Estate

If you build and develop a system with paying customers, you know that tracking and publishing service uptime is absolutely critical to building trust. If your customers don't trust that your system will be up and available when they need it, it's unlikely that they will remain your customers for very long. Hosting a central web page where customers can view and track outage events over time builds that trust, and helps with communicating those events as they happen.

I believe that the same things are valuable in the self-hosting/homelab world - only, your "customers" are your friends and family.

Of course, you don't need to treat the end users of your self-hosted systems in quite the same way as a business would treat customers, but it is valuable to monitor the various services you host so you can be proactive about issues. Personally, I get a little frustrated when anything I host goes down, and having several people tell me it's down is unhelpful. I would rather share a link to a status page where they can see status and progress, and forget about it. If things go down in the future, they'll go direct to the status page instead of badgering me.

Keeping with the self-hosted nature of my homelab, I also self-host the monitoring tool - I use Uptime Kuma. As an aside: try not to host your monitoring tools alongside the things they are monitoring - they might end up going down together, which would defeat the entire point. I host my instance on a DigitalOcean VM to keep it entirely separate from my home infrastructure, but a Raspberry Pi in a different room or at a friend's house would also do nicely.

There is another benefit to monitoring your hosted services in a central location: you automatically get a tested inventory of everything you manage. It can be hard to track what you have hosted, since self-hosting can be quite low-friction (if you're doing it right). I can spin up a Docker container and completely forget about it. Maybe it goes into a crash-loop, using up resources, and I'm none the wiser until it starts affecting other services. If I always add a monitor to every service I spin up, I know when one is misbehaving and can decide whether it should still be there.
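To make the bookkeeping concrete, here is a toy sketch in Python of what a monitor like Uptime Kuma does for you - recording each probe result and reporting uptime over a window. The numbers are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Check:
    up: bool           # did the probe get a healthy response?
    latency_ms: float  # how long the response took

def uptime_percent(checks: list[Check]) -> float:
    """Percentage of probes that succeeded over the window."""
    if not checks:
        return 0.0
    return 100.0 * sum(c.up for c in checks) / len(checks)

# One failed probe out of 2,000 over the window:
history = [Check(True, 120.0)] * 1999 + [Check(False, 0.0)]
print(f"{uptime_percent(history):.2f}%")  # prints 99.95%
```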

So yeah, monitoring your uptime can be a good idea for a homelabber, too. You can avoid getting snippy with your friends and family when things go down, and it can help keep your digital estate under control. It might also serve as bragging rights, displaying all the services you host and their uptime.


The Rust programming language has been one of those things that I want to get into, but have just been struggling to get my head around. I've seen the light, but how the hell do I use it? From the outside, it looks so different to anything I've used before.

For example: I originally learned to program in PHP, but when I realised it was certainly not the language for me long-term, I was able to step into C# with very little resistance. With Rust, I've found myself having to read documentation very frequently; it's not as intuitive in comparison.

That being said, the effort that has gone into education with Rust is phenomenal. The compiler errors are quite clear and visual, and there are help commands that give you examples of errors and how to get around them. This is primarily how I have been learning: finding a thread on the web or an example in the Rust docs for what I'm trying to do, coding it up, and then using the help commands when it inevitably goes wrong.

Additionally, the build system through Cargo works exactly as I had hoped it would. The projects I work on tend to be quite complicated, with many external dependencies in different languages - and I'm of the opinion that each individual application/library/crate/etc. should be responsible for wrangling its own dependencies at build time, so I was pretty thrilled when I found out about Cargo build scripts. The build script for one of my Rust projects builds a .NET project and consumes its output. It works pretty well!

So yeah, I keep getting these little tastes of Rust, and I'm liking what I'm seeing so far.

New Years Resolutions

It's getting real close to that time of year again, where we all pretend to perform some serious introspection and decide that we have no real issues other than superficial ones. We all decide we want to start going to the gym, but then give up after a month or so.

I had some thoughts about this - not so much about what my resolutions will be, more about how I think I should set them.

Holding myself to a SMART goal seems unreasonable given the context. The time span for a New Year's resolution is far too long, and because of that I would end up setting a goal that's more difficult to achieve.

The goals never stick, because I'm gonna do what I want to. If the goal was fun and achievable I would be doing it already.

That's why this year, I'm not setting goals - I'm setting a direction.

By direction, I mean something like "focus on health". It's non-specific and unmeasurable, so you won't be disappointed if you don't achieve it, but at the same time it provides the same motivation you would have early in the New Year.

It's a way of remembering and focusing on what was important to you at the time, and hopefully re-discovering the mindset that led you to it. You can use it to inform decisions, and to prioritise against other tasks in your day-to-day.

Write your direction on a sticky note and put it on your mirror, on the back of your front door, or at eye level by your toilet. For the non-psychopaths amongst us, try pinning it in your Notes app.


After spending an entire year promising myself I wouldn't distro-hop into another Linux OS any more, I thought I would reward myself by installing another OS. Linux Mint this time.

Can't say it's entirely my own fault though - I tried updating Ubuntu 20.04 to 21.10, but apparently I didn't have enough space in my boot partition to upgrade, and I couldn't resize my other partitions to make room. I would have had to re-install anyway.

May as well have made that something new and exciting!

I've used Mint before, and I was totally fine with it last time, before switching to Debian (then Ubuntu). My problem is just that I want a bit of change every now and then, so I swap Cinnamon for GNOME.

I could probably just switch out my desktops instead of re-installing, which I may look at doing this time if I have the need. Though, this is what I said to myself last time, and look how that turned out.

Vagrant is definitely going to be something to look at in the future.

Advanced Custom Shortcuts on Windows

Shortcuts on Windows are annoying. Inside those little binary .lnk files are a large number of properties, some of which are essential to doing some pretty cool stuff.

One of those things is the ability to enable toast notifications without the need for a UWP project. This was something I wanted to do, and it ended up being surprisingly difficult. The thing I found most strange when implementing it was that the key to the entire operation was adding special identifiers inside a shortcut on the desktop or start menu.

I found it incredibly difficult to create such a shortcut unless I changed the installer technology I was using - which I was not super interested in doing. It turns out there is no way to do this manually, either.

So instead, I created a new PowerShell Module named PSAdvancedShortcut which I could use inside my installer to create the shortcut for me.

This module is a C# binary module, using P/Invoke to set the hidden properties inside a shortcut file.

You can check it out on GitHub and the PSGallery.

Notes on Blazor

At work, I have started using Blazor as the base for a new web project, and I have compiled some notes that I would like to share about my experience with it so far. There is no particular format or ordering here. Also note that I am still only exploring Blazor; my notes are generally in comparison to other similar web technologies.

Without further ado, begin brain-dump:

  • Being able to re-use class libraries and components from existing .NET projects is huge
  • Quite easy to pick up, particularly if you're at all familiar with ASP.NET
  • Remember that the code that is published with WebAssembly is publicly decompile-able - much like how you wouldn't connect directly to a database, you shouldn't reference projects which contain sensitive code (like license key generators)
  • WebAssembly Blazor is definitely the more attractive project type; server-based Blazor has the drawback of requiring an active connection for the entire lifetime of the application - I have seen a Blazor app stop working while I was reading some text, because it lost its connection to the server. It was probably just a poorly implemented app, but it gave me a moment of pause
  • The benefit of having a web application as a first-class csproj is clear: interacting with the project is simple inside Visual Studio / Rider, and organisation is similar to ASP.NET and WPF projects (think CSS files presented as WPF-style 'code behind' files when they are named similarly to a Blazor component file)
  • You'll probably still want to use a server API, and you'll definitely be wanting to implement JWT-like authentication for that - the application I was developing uses single-tenant AAD auth, and this guide was perfect for securing the client and server side together
  • WebAssembly definitely shines with a CDN in front for speedy delivery of comparatively large assets that all need to be downloaded up-front, but:
  • Deployment in tandem with a CDN seems precarious, because if any of the binaries don't match the manifest, the application breaks - this can happen when the new version's manifest is served but some of the old version's binaries are still cached
  • Probably wouldn't use Blazor for a home page or news site just yet, plain HTML would still reign king for that - but complex applications in the browser seem like a perfect fit

End of brain-dump.

I might come back and add more things here as I go, but that is about where I'm at with the technology so far.

Generally, though, I am pleased. I think this technology might stick with me and find its way into my own personal projects.

Who Really Needs a Database, Anyway

As of today, my blog is serving requests from Azure Table Storage - and it is working far better than I had anticipated. It has even reduced the complexity in my codebase by allowing me to rip out all of the Entity Framework pieces, like migrations and setup.

I know most personal developer blogs prefer static content generators like Hugo et al., or even off-the-shelf blog software like WordPress or Blogger, but I have mostly settled on building my own - I want to be able to fine-tune how everything works. This has allowed me to simply drop the need for the database I had been reluctantly paying for.

Previously, I was using DigitalOcean App Platform - I definitely did not want to be managing a virtual machine myself, but this came with the caveat of having to pay extra for a database, rather than just installing Postgres or similar on the same machine. Fair enough, but my site is mostly delivered from edge caching through Cloudflare; I really only needed something to persist a tiny amount of data should the application need to be restarted.

This is where Azure Table Storage comes in. It has some limitations, namely: a maximum entity (row) size of 1MiB, a throughput of around 2,000 entities per second, somewhat variable response latency, and of course very limited querying capability. These are hardly issues for my tiny blog: my posts tend to be very small, and as I said before, most requests are delivered via an edge cache. Because of this, loading the entire table into memory to perform anything more complex than a direct lookup is no big deal.

Table Structure

In Azure Table Storage, the most important 'schema' decisions (there are no real schemas in table storage) are really only what you choose for the PartitionKey and RowKey. For the former I chose the published year, and for the latter the post 'slug'.

If you cast your gaze to the URL structure of this post, you can see that I have always had the year and the slug in the route. This means I can perform a very fast direct lookup, without having to scan the table for a matching slug - which I would have had to do had I used random identifiers in the keys.

Given that I have so few posts, I could definitely have gotten away with a single static PartitionKey, continuing to use the slug for the RowKey, but the logical partitioning by year simply feels better to me.
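As a sketch of why this key choice works: the year and slug can be lifted straight out of the request path and used for a point lookup. My site is .NET, but the idea reads the same in Python - the connection string, table name, and helper below are made up for illustration (the retrieval call uses the azure-data-tables SDK).

```python
# Map a post URL like /2022/some-slug directly onto table keys,
# so fetching a post is a point lookup rather than a table scan.

def keys_from_path(path: str) -> tuple[str, str]:
    """Split '/<year>/<slug>' into (PartitionKey, RowKey)."""
    year, slug = path.strip("/").split("/", 1)
    return year, slug

pk, rk = keys_from_path("/2022/who-really-needs-a-database")
# pk == "2022", rk == "who-really-needs-a-database"

# The actual retrieval, using the azure-data-tables package
# (connection string and table name are placeholders):
# from azure.data.tables import TableClient
# client = TableClient.from_connection_string(conn_str, table_name="posts")
# post = client.get_entity(partition_key=pk, row_key=rk)
```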

Performance Results

Screenshot of a graph showing an end-to-end transaction of dependencies when loading the home page of this site, with the entire duration of the request taking 28.9 milliseconds, and two requests to Azure Table Storage taking 12.5 and 5.9 milliseconds respectively

Screenshot of a graph showing the distribution of latency for retrieving data from Azure Table Storage, with the 50th, 95th, and 99th percentile results being 8.5, 120, and 300 milliseconds respectively

As you can see from the images above, it looks quite good! Requests to table storage are still slightly slower - the database had an average latency of around 4ms, whereas table storage has an average data retrieval latency of around 30ms - but in the grander scheme of a web request it goes pretty much unnoticed, and again the edge caching makes this a no-op for most visitors.


Cost

For cost comparison, a database just cannot compete. I was paying US$7 for the smallest database provided by the DigitalOcean App Platform - cheaper than their smallest managed database offering, but still more than the application hosting itself. This was the original reason I wanted to drop it.

Azure Table Storage has so far had an undetectable cost, though I'm expecting something along the lines of less than US$1 by the end of the month. Paying only for what you use continues to be exactly the pricing model I look for, especially for my personal projects like this blog, where traffic is low. No reason to pay hundreds of dollars to support such things!