
SW engineering, engineering management and the business of software


2020 01 01

2019 Year in Review

Inspired by:



Programming Languages


Popular Content by me:

Content I’m proud of:



Best Blog posts read

Best Books Read

The overall number of books I read went way up in 2019: over 120 if my math is correct, across paper, Kindle, and iBooks. Most were light fiction (bedtime reading).

The best non-fiction I read:

Favorite Media:

I think my overall media consumption went down in 2019. Nothing felt as satisfying as Infinity War did in 2018.



Todo in 2020

2020 01 09

What to do to become an Engineering Manager before you are an Engineering Manager

A lot of engineers want to be managers. I don’t think that eng management is right for everyone, but if you at least want the chance, here are some things you can do to help you get that manager opportunity (in no particular order):

It’s a harder transition than you think. Don’t lose faith in yourself. Take feedback well (even poorly delivered feedback). Ask for feedback proactively. Honestly assess yourself. Do what you can to increase your own self-awareness.

The realities of the workplace mean that the quality of your current manager has a lot to do with how fast you become a manager. You may be left with the unfortunate option of having to change your environment in order to achieve your goals.

In a nutshell: Be excellent at work and make you and everything around you better.

This post is based on a series of tweets you can read here. You should follow me on Twitter for more of my thoughts on engineering, management, and hiring.

2020 01 14

My Nine Years of Go

I learned about the Go programming language when it was announced back in November 2009. It probably happened when it popped up on HN. There was a lot to like about it, even way back then. I’ve always had a list of things that I’d want in my hypothetical perfect programming language, and Go ticked off more of them than any other language at the time. Spoiler alert: it still does.

I really sat down to learn the language in early/mid 2011. The language itself didn’t hit 1.0 until early 2012. I can’t find the original repo, but some of that code still powers this blog today.

A lot of the original design holds up well: goroutines, fast compilation, static typing, interfaces, gofmt, defer, a small core, and a broad standard library that included a web stack. Some of the original quirks (no warnings, capitalization for public/private, disallowed unused imports/vars, etc.) took no time at all to get over, despite the internet opinions of the time. In fact, most of those look like great decisions in hindsight. As an app developer, I haven’t needed generics much. I love the error handling philosophy and wrote my own error wrapper. It’s part of why Go codebases end up reliable.

Not only that, but Go has been improving over time. Its ecosystem and early killer apps will keep it around and popular for a long time going forward. I can like a language, but that doesn’t mean that language will gain any reasonable popularity. And popularity is important: you need a critical mass of users to have a reasonable ecosystem.

Go itself has influenced my opinions on things like GC. Back in 2009, I was a big fan of ObjC/Swift’s ARC and Rust-style lifetime management (even though Rust wasn’t around back then). These days, I prefer that something else thinks about memory management for me. The performance ramifications of GC are no longer relevant.

Go is one of my secret weapons. It still allows me to get from idea to production faster than anything else out there. The positive words I wrote about Go in 2013 still hold up.

The first blog post†† on this particular blog, which briefly touched on the nascent, promising language back in 2011, included these lines:

[Go] is a tremendous productivity multiplier. I wish my competitors to use other lesser means of craft.


That is still something I wish.


I need to update it for the go.mod era, but it still works great.

†† If you read that post, you’ll find the writing style… different. I’m both proud and ashamed of younger me.

2020 02 13

Questions to ask about Growth Opportunities while you interview

I was asked a good question this morning about evaluating growth opportunities in roles you are applying for. The easiest thing is to ask a few pertinent questions to all prospective hiring teams. In particular:

Who will be your manager? How many people have they promoted recently? How have people grown in the past year? This will help you understand a company’s track record of growth. Hopefully you see both promotions and people’s roles evolving and developing over time.

How often does management turn over? Frankly speaking, a lot of your growth opportunity is on you, but a big chunk is how good your manager is at providing opportunities for you. If your manager shifts every 6 months (quitting, moving on, promoted up or sideways, terminated, etc.), it’s a big problem. Similarly, if everyone doing the particular role keeps quitting, figure out why. That’s a fairly loud warning sign.

How many people in your function? Will you be the first person doing the role? 5th? 10th? 50th? The lower this number, the more responsibility you will have and the more diverse a set of problems you can assume will come your way. This is generally a good thing for career growth, as you get more exposure and experience in a short amount of time. If you are the 50th person doing the role, you are likely to be isolated to a small niche responsibility for a while.

How fast is the company overall growing, both in people headcount and business (revenue or users)? How many more people for this role do you plan to hire over the next year or two? This represents the surface area of the growth opportunity. Being the first X in a 30-person company that gets to 150 headcount in 18 months represents tremendous career opportunity. Conversely, if you are the 5th X and the company headcount is stagnant, you have a long wait for management opportunities.

Unless you are close to retirement, growth should be one of your top priorities in your job search. Many companies do this poorly. Your best bet is smaller startups, if you can tolerate the chaos, anarchy, and lack of structure. If you are the first or second person in a role at a startup, you tend to be asked to wear a lot of hats, and structurally, startups are essentially designed for and even defined by growth.

Thanks to Joyce Park for the tip on management turnover.

2020 04 02

Vue.js, vue-router's history mode and Caddy2

I’ve spent the bulk of March’s shelter-in-place order learning Vue.js (v2) and Tailwind.

I quickly came upon and greatly prefer vue-router’s history mode.

Brief summary: this allows URLs in a Vue-based SPA (single page app) that look like:

https://example.com/login/

instead of:

https://example.com/#/login/

That initial hash tag in the path always felt un-web-like to me.

The downside of this is that you only have one index.html, at the root (/) directory. If a user navigates from / to /login/, no big deal because the router is doing its thing. If a user types /login/ in directly, a standard-config webserver will try to fetch /login/index.html instead and throw up a 404.

As documented in the docs, there is a workaround: rewrite directives. The docs list rewrite examples for a handful of webservers, including Caddy v1. Caddy 2, currently in late beta, doesn’t have an example.

In an attempt to help the internet out, here’s a working Caddy2 Caddyfile example to get history mode to work:

localhost {
    try_files {path} /index.html
    file_server
}

A real production Caddyfile will have other stuff in it of course (example.com stands in for your site address):

example.com {
    root public_html
    encode zstd gzip
    try_files {path} /index.html
    file_server
}

I’ve been very happy with Caddy. Its first-class support of HTTPS is amazing, and its performance has only gotten better over time. I’ve been using it for all new projects and highly recommend it.

I’ve got lots of other thoughts (primarily positive) on Vue.js, Tailwind CSS and FE development in general, but I’m saving that for a future post.

2020 07 15

CRDTs in a Nutshell

Note: This post is an updated, expanded rewrite of an older post from 2012. I have removed references to older technology, but the concepts still stand.

Understanding CRDTs is fairly straightforward if you have some concepts clear ahead of time.

First, you have to be clear on two different types of possible contention with distributed data stores:

  1. Simultaneous Read and Write
  2. Simultaneous Write and Write

In case 1, if you are writing a value to a key, that value needs to be replicated to a few other nodes. This is where the read quorum (for fetches) and write quorum (for stores) come into play: while reading or writing, I must have X number of nodes agree on what the correct value is. You can set X to be all the nodes to get a strong certainty of getting the most recent value. If X is 1, then you might give up on getting the latest value in exchange for some improvement to latency. The typical, balanced (and often default) solution is to set X to be a simple majority of nodes.
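The quorum arithmetic above can be sketched in a few lines. The standard rule of thumb is that a read quorum R and a write quorum W over N replicas are guaranteed to overlap when R + W > N; the function name here is mine, not from any particular datastore:

```go
package main

import "fmt"

// overlapGuaranteed reports whether every read quorum of size r must
// intersect every write quorum of size w across n replicas. When they
// always overlap, a read is guaranteed to see at least one replica
// holding the latest acknowledged write.
func overlapGuaranteed(n, r, w int) bool {
	return r+w > n
}

func main() {
	// Majority read + majority write: the typical balanced default.
	fmt.Println(overlapGuaranteed(5, 3, 3)) // true
	// r = 1 trades that guarantee away for lower read latency.
	fmt.Println(overlapGuaranteed(5, 1, 3)) // false
}
```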

In case 2, the write quorum has no practical use. If you have a 10-node cluster, and some client writes “flub” to nodeA and some other client writes “biggle” to nodeF, then we have something called write contention, or siblings. How do you decide who wins? This is where sibling resolution comes into play. There are many, many strategies for this. The simplest is last-write-wins (which is a good way to lose data if you have a counter). If you’ve ever seen a file in your Dropbox called SOME_TITLE (Matt Nunogawa's conflicted copy 2020-07-12), that is an example of the sibling resolution strategy I like to call last-write-wins-but-save-a-copy-of-the-loser.

What happens in practice is that you cannot think about a distributed data store in terms of sets and gets. A better mental model is more like an operation log.

A counter works well in a distributed system if you are only adding. Each client simply tags its increments with some arbitrary but unique client ID. The operation is not get x, then set x+1, but rather counterID:ABC:clientID:gamma:count:+1. Since it’s add-only, if you have multiple clients incrementing at once, they will only modify their own clientID entry. If client alpha adds 1 to nodeA while client gamma adds 1, 1 and 1 to nodeF, you would see something like this:

counterID:ABC:clientID:alpha:count:1
counterID:ABC:clientID:gamma:count:1
counterID:ABC:clientID:gamma:count:2
counterID:ABC:clientID:gamma:count:3

or possibly this:

counterID:ABC:clientID:gamma:count:3
counterID:ABC:clientID:alpha:count:1

To get the total count of the ABC counter, simply add up all the counts, taking the highest value per clientID. Both of the above examples resolve to 4. You could eventually whip up some garbage collection to clean up older entries if space is something you care about.

What if client gamma partitions off the network? It will just keep accumulating counts from its local clients. The short-term totals will be off (hopefully the system knows it is partitioned), but when it connects back, no matter what happened locally or on client alpha, the total counts should again be accurate, assuming eventual consistency.

Add-only counters are great, but if you need counters that go up and down, you have to get a little clever. If you keep two counters, posABC and negABC, and subtract, you effectively get a counter that can go in both directions.
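The add-only counter above can be sketched in Go. This is a toy, single-process illustration of the merge rule, not a production CRDT, and the type and method names are mine. Each replica keeps one cumulative entry per client, merges by taking the per-client maximum, and the up/down variant is just two of these subtracted:

```go
package main

import "fmt"

// GCounter is a grow-only counter: clientID -> that client's cumulative
// count. Each client only ever bumps its own entry, so concurrent
// writes on different replicas never conflict.
type GCounter map[string]int

// Inc records one increment on behalf of clientID.
func (g GCounter) Inc(clientID string) { g[clientID]++ }

// Merge resolves two replicas by taking the highest value per clientID,
// exactly the sibling-resolution rule described in the text.
func (g GCounter) Merge(other GCounter) {
	for id, n := range other {
		if n > g[id] {
			g[id] = n
		}
	}
}

// Value totals the counter by summing the per-client counts.
func (g GCounter) Value() int {
	total := 0
	for _, n := range g {
		total += n
	}
	return total
}

func main() {
	nodeA := GCounter{}
	nodeF := GCounter{}

	nodeA.Inc("alpha") // client alpha adds 1 via nodeA
	nodeF.Inc("gamma") // client gamma adds 1, 1 and 1 via nodeF
	nodeF.Inc("gamma")
	nodeF.Inc("gamma")

	nodeA.Merge(nodeF)
	fmt.Println(nodeA.Value()) // 4, as in the example above
}
```

A PN-counter (one that goes up and down) is then two GCounters, with Value() returning pos.Value() - neg.Value().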

And now we have Conflict-free Replicated Data Types:

CRDTs can be thought of as primitive, resolvable operation logs that can be composed into useful data types.

Now the implications are interesting, because if your distributed datastore can do key-value, and is eventually consistent, you can hypothetically develop CRDTs on top of it.

The above example is an over-simplification. In a real-world implementation, you would do lots and lots of finicky housekeeping around compaction of the log, as high-volume writes start to get expensive in terms of storage. But hopefully the concept is clearer.

CRDTs are exciting (in the way that only useful mathematical properties are). The add-only-counter and add-only-set versions of these are relatively straightforward. Some very smart people have created CRDTs for more complex data types like lists, maps and even directed graphs. Searching for CRDTs should bring you to the work of Marc Shapiro.

Very recently, Martin Kleppmann published a CRDT talk on recent advancements in this area as well.

But if you really want to learn about all the edge cases, finicky bits, etc. there’s no substitute for implementing a counter in a three node distributed datastore and hammering all three nodes with writes. Odd cookie that I am, I found it to be a great learning experience.

2020 09 19

Plain old HTML Tailwind CSS projects

Tailwind has quickly become my favorite CSS framework. One quirk of working with Tailwind is that the core framework is massive. You need a proper npm or yarn setup, and tree shaking via purgeCSS is necessary for any kind of production usage. You can see some numbers at the end of this post, but spoiler: a three-orders-of-magnitude reduction in CSS size is normal and necessary to get your CSS file down to a reasonable size.

Tailwind is fantastic when paired with front-end app frameworks such as Vue, React, etc. I don’t always need a front-end app framework, however, and simple HTML works fine for certain work.

It turns out that this is only a little fiddly, and you can quite easily make “plain old html” projects work just fine with Tailwind.

Here’s how I do it:

Starting From Scratch

First, some basic setup:

mkdir -p site/static/css

site is where our plain old html will live. site/static is a great place for images, css and other files that don’t change.

Conceptually, site doesn’t even have to be in the project directory, if you make the necessary adjustments to the appropriate config files.

npm init
npm install --save-dev tailwindcss
npm install --save-dev cssnano
npm install --save-dev @fullhuman/postcss-purgecss 
npm install postcss-cli --global

Next, you need a basic input.css file.

Create this at the root of your project directory (where you ran npm init).

touch site/static/css/input.css

It should contain the following three lines:

@import "tailwindcss/base";
@import "tailwindcss/components";
@import "tailwindcss/utilities";

Basic Config

You need a few config files. What’s a modern frontend project these days without a mountain of config?

I use two different postcss config files depending on whether I want to minify or not.

mkdir build debug
touch build/postcss.config.js
touch debug/postcss.config.js

build/postcss.config.js should contain the following:

module.exports = {
  plugins: [
    require('tailwindcss'),
    require('cssnano'),
  ],
}

debug/postcss.config.js is the same, but remove the require('cssnano') line.

touch tailwind.config.js

tailwind.config.js should contain the following:

module.exports = {
  purge: {
    enabled: true,
    content: ['site/**/*.html'],
  },
  theme: {
    extend: {},
  },
  variants: {},
}
Lastly, open up package.json and add the following scripts:

"scripts": {
    "debug": "postcss style.css --config debug --verbose --dir ../assets/static/css",
    "build": "postcss style.css --config build --ext min.css --dir ../assets/static/css"
},
You can do either of the following to generate CSS:

npm run debug   # generate, purge, but don't minify
npm run build   # generate, purge, minify

Purging can sometimes be finicky, and I don’t recommend you turn it off, even during dev or debug mode.

At this point, any html files you add to the site directory should be scanned during the purgeCSS processing.

Your style.css file should contain only relevant tailwind CSS classes.

A quick size check, using a minimal hello world HTML file:

Plain Old HTML

I love POH for simple pages. It’s rarely worth the effort to set up an entire Vue or React project for something like a landing page or placeholder work. For me, it’s almost always worth the effort to get Tailwind in place.

© matt nunogawa 2010 - 2020 / all rights reserved