SW engineering, engineering management and the business of software


2013 02 05

The Best Way To Learn

The best way to learn is to do.
The other best way is to teach.

A distant third is to hang out with people who do or teach.

Pretty far down in the rankings is to hang out on the internet and read blogs.

2013 04 22

Weighted Credit Pools for API Rate Limiting

I’ve been spending lots of time thinking about and discussing APIs lately.

Eventually, the topic of rate limiting comes up, because an API is an open invitation for people to cause work to be done on your servers. Most of the time, people are polite about it, but a few curious people (and very rarely outright malicious people) will stress the limits of any exposed API.

At a high level, developer API keys and user authentication help, but ultimately some form of rate limiting becomes necessary.

One scheme I thought up was likely born out of my youth going to local arcades. Back in the stone age, if you wanted to play video games, you had to pester your mother until she drove you to a dedicated place of business. At that point, you continued to pester until a conversion of money into quarters or tokens took place. You would then insert one or more of the tokens into a game of your choice for a few chances at dopamine release.

The rate-limiting model works like the arcades of antiquity with a very generous and patient mother. Each user has a set number of credits in a credit pool. Credits are deducted from the pool each time you hit an API endpoint. These credits have a regeneration rate (X/minute) and a cap (CRED_MAX). Each endpoint would consume one or more credits.

The trick is that endpoints have a credit cost relative to the resources required. For example, GET methods to items that are easily cached would only cost a few credits. Expensive endpoints, such as multi-server queries with JOINS or POST methods that upload/create resources that take permanent storage would cost an order of magnitude more.
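
To make the relative costs concrete, here is a sketch in Go of a per-endpoint cost table. The endpoint names and numbers are invented for illustration, not taken from any real API:

```go
package main

import "fmt"

// Hypothetical per-endpoint credit costs. Cheap, cacheable GETs cost a
// few credits; multi-server queries and storage-consuming POSTs cost an
// order of magnitude more.
var endpointCost = map[string]int{
	"GET /images":  2,  // easily cached read
	"POST /images": 20, // upload consumes permanent storage
	"GET /reports": 40, // expensive multi-server query with JOINs
}

func main() {
	for _, e := range []string{"GET /images", "POST /images", "GET /reports"} {
		fmt.Printf("%-12s costs %2d credits\n", e, endpointCost[e])
	}
}
```

A real implementation would consult a table like this in middleware, before the request ever reaches the expensive code path.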

No Stalling

This system allows “spiky” user behavior without arbitrary stalling. Other rate-limiting systems use a time window of 15 to 60 minutes. If an end user exceeds their quota in the first minute of a 15-minute window, they have to wait 14 minutes before doing anything. To differentiate between expensive and cheap actions, each endpoint (or group of endpoints) would have to track its own time window.

With a credit based system, the window can be arbitrarily small. A regeneration rate of 6/minute corresponds to an effective time window of 10 seconds. If an end user blows through CRED_MAX, they aren’t stalled for very long before they can resume inexpensive actions.


In the simplest case, you are only tracking one pool per user, rather than having a time window for every endpoint. An API could certainly have multiple pools, but it is not required.

Furthermore, the system doesn’t need to keep track of every user’s credit balance every minute; it only needs the user’s last known balance at a point in time.

A hypothetical example where credits regenerate at one per minute would be:

    Time:   Event:                      Cost:  Pool Balance:
    00:00   User A has CRED_MAX         --     100
    00:10   User A POSTs a new image    20     80
    00:10   User A POSTs a new image    20     60
    00:10   User A POSTs a new image    20     40
    00:20   User A GETs list of images  02     48

When the app reads the credit balance at 00:20, we have a record that states the balance was 40 at time 00:10. The hypothetical getCreditBalanceForUser() function does some math knowing that 10 minutes have passed (during which 10 credits have been regenerated) and returns the current pool balance of 50, which is enough to cover the cost of the GET. There is no need to iterate across all users and increment the credit value every minute.

This system adheres to one of the principles of scalable architectures:

Don’t incur resource costs for actions that aren’t taken.

In this case, no work is being done by the system from time 00:11 ~ 00:19, even though conceptually, credits are regenerating during that time.

In Practice

One of the advantages of this scheme is that the back-end storage of credits can live happily in Redis or some other memory-based system without permanent storage. If the server is reset, then everyone just gets a free play; their credits can be temporarily reset to CRED_MAX. A memory-based system is extremely unlikely to be a bottleneck for any given endpoint. Redis is doubly appropriate because of its nice increment and decrement operations, and EXPIRE can be used to clear out users that reach the credit cap.

In practice, I’m looking at a very fine-grained production implementation, where CRED_MAX is in the thousands and regeneration rate is near one per second. Cheap, cacheable endpoints cost 5-10 credits and expensive ones are in the double or triple digits. Ultimately, you’ve succeeded if the vast majority of end-users never notice the rate-limiting system at all.

Now, if only I had a quarter for every time my articles caused a dopamine release.

Many thanks to @jedlau, @TimHaines, @jkubicek and @nolancaudill for reading drafts.

2013 07 30

Objective C Blocks: Summary, Syntax & Best Practices

Blocks are closures for C, Objective C and C++. You may know them as anonymous functions or lambda expressions.

Good usage of blocks is an excellent path to reducing typing, line count and bug count in your Cocoa programs.

Blocks should not be confused with Grand Central Dispatch (GCD): GCD is primarily a queue library, which typically uses blocks.

Blocks do have a learning curve attached to them. They also have a tremendously wonky syntax. Buyer beware: in simple situations, blocks are very readable, but excessively nested blocks can transform your source into inscrutable rivers of punctuation and indentation.

Blocks: The Good

Blocks are closures for C.

We briefly threw out the above quote earlier, but it’s time to explore why closures are a good thing.

This particular example is a simple implementation of a callback. Prior to Snow Leopard, Objective C callbacks were implemented in one of two ways. The first is simply passing a selector and a target. The alternative method of doing callbacks using delegates is even more verbose.

The classic example is that prior to blocks, Apple documentation had roughly 1,300 words dedicated to setting up a delegate and getting data from a URL: Using NSURLConnection.

Now compare this with the more recent blocks based solution:

id someOtherClass;
[NSURLConnection sendAsynchronousRequest:request
                                   queue:[[NSOperationQueue alloc] init]
                       completionHandler:^(NSURLResponse *resp, NSData *data, NSError *err) {
                           // Do something with the data.
                           // Because this block "closes around" and captures the surrounding scope,
                           // you can use someOtherClass in this block if necessary.
                       }];

The magic of blocks is that all the variables in the same scope as the block are kept with the block as it gets passed around and back. This is called a closure. Block-based APIs are typically more resilient: if a block is passed around a few times, and at some point you realize you need an instance variable from two or three objects ago, it’s likely still in the block. A delegate-based API would likely have to rewrite the delegate protocol as well as any objects that conform to that protocol.

Most of the other block-based goodness comes from combining their superpowers with Grand Central Dispatch queues.

Blocks: The Bad

Blocks have a somewhat steep learning curve.

Oddly enough, newcomers to the platform have an advantage here. Long time Cocoa-heads likely have to break ingrained habits to clearly see where blocks can help them.


Blocks: The Ugly

Blocks are ugly.

Actual Block syntax:

NSError * (^workerBlock)(NSString *someString, BOOL(^afterWorkBlock)(int));

The above defines a block pointer, workerBlock, for a block that takes two arguments, an NSString and an afterWorkBlock, and returns an NSError pointer. The afterWorkBlock takes an int and returns a BOOL.


The language’s designers were constrained by the existing C, C++ and Objective C languages. According to an Apple developer, the caret (^) was chosen because it is the only character you can’t overload in C++.

Let’s break down the two primary types of block syntax:

Block literals are defined inline with your code. Here is an example of directly calling a method, passing a block literal as an argument:

NSUInteger foundAtIndex = [someArray indexOfObjectPassingTest:^ BOOL (id object, NSUInteger idx, BOOL *stop) {
    return [object hasPrefix:@"SOME_PREFIX"];
}];

Block pointers look similar to function pointers, but use the ^ (caret) instead of the * (asterisk/star/splat). Here is an example of assigning a block literal to a block pointer:

// checkMatch is a block pointer that we assign to.
BOOL (^checkMatch)(id, NSUInteger, BOOL *) = ^ BOOL (id object, NSUInteger idx, BOOL *stop) {
    return [object hasPrefix:@"SOME_PREFIX"];
};

NSUInteger foundPrefixAtIndex = [someArray indexOfObjectPassingTest:checkMatch];

It’s important to note that block literals and block pointers are ordered slightly differently: a literal leads with the return type (^ BOOL (id object, ...)), while a pointer wraps the caret and name in parentheses between the return type and the argument list (BOOL (^checkMatch)(id, ...)).

In any case, good typedefs are your friend. They will improve readability and clean up your method definitions. Life will be easier and I highly recommend making use of them as much as possible. Consider the difference between:

typedef BOOL (^SomeBlockType)(id object, NSUInteger idx, BOOL *stop);

- (void)collectionToCheck:(SomeBlockType)checkerBlock;
- (void)singleItemToCheck:(SomeBlockType)checkerBlock;


- (void)collectionToCheck:(BOOL(^)(id object, NSUInteger idx, BOOL *stop)) checkerBlock;
- (void)singleItemToCheck:(BOOL(^)(id object, NSUInteger idx, BOOL *stop)) checkerBlock;

If you have to change the block signature, it is much easier to change the typedef. The compiler, being a nice fellow, will then tell you all the places the block signature doesn’t match.

Lastly, for block literals, you can abbreviate when you have void returnType or args:

^ (arguments) { ... }  // if returnType is void
^ returnType { ... }  // if argument is void
^ { ... }  // if returnType & arguments are void

It takes some getting used to. The compiler will attempt to help with cryptic error messages as well. If you see a good one, let me know and we’ll try to help decode it together.

Calling blocks

Calling a block works just like calling a function.

void (^logBlock)(id) = ^ (id object) {
    NSLog(@"object %@", object);
};

logBlock(@"Hello");


But beware: trying to call a nil or undefined block will likely crash. If you are really unlucky, you might just corrupt some memory creating heisenbugs:

typedef void (^myBlockType)(id object);

myBlockType logBlock = nil;

logBlock(@"KABOOM?"); // likely crash


Instead, you need to check for nil or define a “do nothing” block:

typedef void (^myBlockType)(id object);

myBlockType logBlock = nil;

// check for nil
if (logBlock)
    logBlock(@"SAFE");

// or define a do nothing block
logBlock = ^ (id object) { /* does nothing */ };

logBlock(@"DOES NOTHING");

Blocks and mutable variables

Blocks capture the variables in the surrounding scope, but they are treated as constants unless you use the __block keyword.

__block BOOL foundIt = NO;
BOOL foundIt2 = NO;         

[someArray enumerateObjectsUsingBlock:^(id obj, NSUInteger i, BOOL *stop){
     if (obj == objectWeAreLookingFor) {
         *stop = YES;
         foundIt = YES;  // no compiler error
         foundIt2 = YES; // compiler MAD!
     }
}];

Blocks and memory management

Blocks are Objective-C objects, but their memory management situation is somewhat unique. Most of the time you won’t need to copy or retain a block at all. If you need to save a block beyond the scope in which it was created, you have two different options.

In C and C++, you use the Block_copy() and Block_release() functions to extend the life of a block beyond the scope in which it is created. In Objective C, you have the usual retain, copy, release and autorelease methods.

The nuance is that most of the time, in Objective C you want to use copy instead of retain. When blocks are created, like most variables, they live on the stack. When a copy is performed, the block is copied to the heap.

This can be easily done in a property with the copy keyword:

@property (nonatomic, copy) SomeBlockType someBlock;

The memory management of blocks changes slightly in ARC. In general, blocks just work. There are a few exceptions however.

When adding block pointers to a collection, you need to copy them first.

SomeBlockType someBlock = ^{ NSLog(@"hi"); };
[someArray addObject:[someBlock copy]];

Retain cycles are somewhat dangerous with blocks. You may have seen this warning:

warning: capturing 'self' strongly in this block is likely to lead to a retain cycle [-Warc-retain-cycles,4]

SomeBlockType someBlock = ^{
    [self someMethod];
};

The reason is that someBlock is strongly held by self and the block will “capture” and retain self when/if the block is copied to the heap.

The safer, but loquacious workaround is to use a weakSelf:

__weak SomeObjectClass *weakSelf = self;

SomeBlockType someBlock = ^{
    SomeObjectClass *strongSelf = weakSelf;
    if (strongSelf == nil) {
        // The original self doesn't exist anymore.
        // Ignore, notify or otherwise handle this case.
    } else {
        [strongSelf someMethod];
    }
};

Sometimes, you need to take care to avoid retain cycles with arbitrary objects: If someObject will ever strongly hold onto the block that uses someObject, you need weakSomeObject to break the cycle.

SomeObjectClass *someObject = ...
__weak SomeObjectClass *weakSomeObject = someObject;

someObject.completionHandler = ^{
    SomeObjectClass *strongSomeObject = weakSomeObject;
    if (strongSomeObject == nil) {
        // The original someObject doesn't exist anymore.
        // Ignore, notify or otherwise handle this case.
    } else {
        // okay, NOW we can do something with someObject
        [strongSomeObject someMethod];
    }
};

Many thanks to @jkubicek for reading early versions and providing feedback.

2013 08 13

Cognitive Offloading and the Productivity of Go

Minimizing The Time from Idea to Production

There are many steps in the making of software. Conceptually, we can organize them in stages of a metaphoric pipeline as idea, architecture, prototype, and production-ready product.

I’ve been writing software for over three decades and Go is the best tool I’ve ever had for getting from idea to production.

There are many small reasons and two big reasons for this kind of efficiency and productivity.

Go Is Inherently Productive and Efficient

The small reasons are fairly well documented, but some highlights:

Go is a compiled, statically-typed language that feels more like a dynamic language than its peers. The syntax has some convenience sugar sprinkled in, but the bulk of the credit is due to the compiler. Primarily via type inference, the compiler is smart enough to enforce static typing with minimal developer hand-holding. Also, the compiler is faster than an ADHD squirrel marinated in Red Bull.

The built-in library covers a large surface area for such a young language, and the overall ecosystem is flourishing.

Error handling seems overwrought and full of boilerplate, but my experience is that the idiomatic style of inline error handling makes programs faster and easier to debug. The end result is being able to zero in on problematic lines of code quickly, which reduces the overall time to solution. (Russ Cox talks about the philosophy of Go and errors here.)
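
As an illustration of that inline style (the function and file name here are hypothetical, and os.ReadFile is the modern spelling of what was ioutil.ReadFile in 2013-era Go):

```go
package main

import (
	"fmt"
	"os"
)

// loadConfig annotates and returns the failure at the exact line where
// it occurs, rather than throwing it up the stack to a distant handler.
func loadConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("loadConfig %s: %w", path, err)
	}
	return data, nil
}

func main() {
	if _, err := loadConfig("no-such.file"); err != nil {
		// The error message points straight at the problematic call.
		fmt.Println(err)
	}
}
```
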

There are too many other small reasons to continue, but assume for now the language was crafted with productivity in mind.

Cognitive Offloading

Go’s primary advantage in facilitating fast time-to-product is a high level of positive cognitive offloading.

Making software involves quite a bit of mental juggling. You have to keep many disparate thoughts, concepts, requirements and goals in working memory simultaneously. The reason that Paul Graham coined the term Maker’s Schedule and the concept of half-day chunks is that typically, in order to write software, you need to load your working memory with the context of the problem you are solving and the existing state of the solution. This “ramp up” takes time, and an interruption can wipe out a good chunk of that working memory.

Positive Cognitive Offloading can be thought of as a juggling partner you can hand off items to. If you trust them not to drop things, it frees your working memory for other items or allows you to juggle fewer items faster. Since there’s less to load, you move into the productive state faster.

Language features such as static typing, interfaces, closures, composition over inheritance, lack of implicit integer conversion, defer, fallthrough, etc. all result in a compiler that tells you when your code is likely to be buggy. The lack of warnings enforces discipline on the weak, squishy, analog life-forms who would otherwise allow ambiguity to deploy to production.
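
For example, Go has no implicit numeric conversions; mixing an int and a float64 is a compile error until you spell the conversion out (a tiny invented example):

```go
package main

import "fmt"

// taxFor mixes an int and a float64; the conversion must be written
// out, or the compiler rejects the expression outright.
func taxFor(cents int, rate float64) float64 {
	// return cents * rate // compile error: invalid operation, mismatched types
	return float64(cents) * rate
}

func main() {
	fmt.Println(taxFor(250, 0.25)) // 62.5
}
```
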

defer is an absolutely brilliant pattern that clearly illustrates this:

file, err := os.Open("some.file")
if err != nil {
    // don't forget to handle this!
    return err
}
defer file.Close()

if X {
    // whatever
} else if Y {
    // otherwise whatever
}

The explicit offloading that occurs is that you don’t have to worry about if/else chains or intermediate returns. In practice, you can write code that needs cleanup without having to constantly be on alert for exit points or wrapping code in a closure. As a matter of practice, the odds of leaving a dangling file are much lower when Close is nearby Open.

The time and energy required to keep code bug-free is lower with defer, allowing you to progress faster.

This kind of mindset is even more apparent in the toolchain. One of the nicest features of Go is the gofmt tool. By outsourcing all code formatting standards to a command line tool, a surprising amount of weight is lifted from the task of writing code. Time wasting aside, the reduction of social friction (or worse, check-in ping pong) over coding standards makes the whole world feel a bit more civilized.

Other parts of the toolchain which reduce cognitive weight include vet (a lint-like tool), test, and even the GOPATH mechanism, which forces a high-level folder structure across all Go projects.

It’s also important to contrast negative cognitive offloading. If your juggling partner drops things occasionally, this is arguably worse than having no juggling partner at all. If your ORM occasionally produces poor SQL that takes down your database, suddenly your cognitive overhead every time you use ORM methods skyrockets, because you have to ensure your ORM code doesn’t negatively impact the system.

In Go, the current state of the garbage collector can potentially cause NCO, but the existing profiler and improvements to the GC itself as well as some library additions in the upcoming Go 1.2 offer some relief. Needless to say, the other benefits far outweigh the cost.

Pipeline reversal penalty

One of the most common tasks a developer does is rewriting code that already exists. Thinking again about our metaphoric pipeline (idea, architecture, prototype, and production-ready product), rewriting code is essentially backing up through the pipeline.

There are many good and bad reasons to go in reverse, but there is always a short-term efficiency penalty to doing so. Good developers tend to offset that penalty by achieving longer-term benefits in maintainability, correctness and/or business goals.

Go, via intentional design and compiler implementation, has the shortest pipeline reversal penalty of any development ecosystem I’ve ever used. In practice, this means you are able to refactor more often with less regressions.

If you change an interface, the compiler tells you every single place that needs to be modified. Change a type and you are notified by line number everywhere your round peg no longer fits in the old, square hole.

If you take advantage of unit testing and benchmarking infrastructure, then you are residing near the magical Stuff Just Works Zone™. Even if you are not a developer, it should also be obvious that Go codebases are more easily adapted to changing business requirements.

Not Perfect

There is some tarnish on the generally shiny Go. Most crashes in Go are due to nil pointer references. John Carmack very concisely explains why:

The dual use of a single value as both a flag and an address causes an incredible number of fatal issues.

Something like Haskell’s Maybe type would be nice, or possibly some kind of Guaranteed-Good-Pointer type. In the meantime, there is the cognitive overhead of nil checking.

The concurrency model is great, but has a bit of a learning curve to it. If you identify a performance bottleneck, you end up implementing & profiling both the traditional way with mutexes/locks and the idiomatic way with channels and goroutines.
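
The two implementations of, say, a shared counter look roughly like this sketch (both function names are mine):

```go
package main

import (
	"fmt"
	"sync"
)

// mutexCount guards a shared counter the traditional way, with a lock.
func mutexCount(n int) int {
	var (
		mu    sync.Mutex
		wg    sync.WaitGroup
		count int
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			count++
			mu.Unlock()
		}()
	}
	wg.Wait()
	return count
}

// chanCount gets the same answer idiomatically: the goroutines send
// results over a channel, and only one goroutine touches the total.
func chanCount(n int) int {
	results := make(chan int)
	for i := 0; i < n; i++ {
		go func() { results <- 1 }()
	}
	count := 0
	for i := 0; i < n; i++ {
		count += <-results
	}
	return count
}

func main() {
	fmt.Println(mutexCount(100), chanCount(100))
}
```

Profiling both (go test -bench plus the pprof tooling) is usually the only honest way to pick a winner for a real bottleneck.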

Some things that seem like negatives aren’t. The lack of generics is unfortunate, but the language designers are not willing to give up any of the other good stuff in Go in order to shoehorn them into the language. So far I’m convinced it’s the right decision. If they manage to pull it off in the future, their track record suggests that they will have found the right tradeoffs.

Net Win

Nearly two years ago, I said the following and I still believe it to be true:

Go is a tremendous productivity multiplier. I wish my competitors to use other lesser means of craft.

Go may not be for everyone, but there is more and more evidence that others are coming to similar conclusions.

One last point of interest: many of the above posts discuss how much fun Go is, and my own experience upholds this. Not just in programming, but in any domain, better tools that reduce friction nearly always make the process more entertaining. It turns out when you help people avoid some of the irritations in their craft, they have a more enjoyable time with it.

Many thanks to @jkubicek, @yokimbo and Daniel Walton for reading drafts and providing feedback.

2013 08 29

Installing MariaDB/MySQL with Docker

Simply put, I think docker is going to change the game. Anyone who has any interest whatsoever in devops better be paying attention.

The best part for me about docker is that I can iterate very, very quickly on getting images the way I want. If I am installing a VM and I screw up a step, reinstalling Ubuntu from scratch is no fun. Spinning up a new docker container or image takes less than half a second.

Derek from Scout put it simply and concisely: “Docker is git for deployment.”

Here’s what I’ve learned over the past week:

Typical docker usage:

Create a new container by loading a fresh image:

sudo docker run -i -t ubuntu /bin/bash

Start populating the container with whatever:

apt-get update
apt-get install <WHATEVER>

Leave the container:

exit

Create a snapshot image of the current state of the container

# grab the container id (this will be the first one in the list)
docker ps -a

# create an image from the container's current state
docker commit <CONTAINER_ID> <YOU>/<IMAGENAME>

Run the container as necessary, configuring ports, using detached mode and whatnot.

At this point you have a docker image (like a snapshot) of a clean install of WHATEVER. Running a container will load the image as a starting point and then allow you to configure as necessary. Screw up the conf file? Just exit the container and start over. The pipeline reversal penalty is minimal.


Create container and do the basic install

Launch a fresh container:

sudo docker run -i -t ubuntu:precise /bin/bash

Inside the docker container, just do the regular install dance:

apt-get update
# a mounted file systems table to make MySQL happy
cat /proc/mounts > /etc/mtab

# if you want stock MySQL:
apt-get install mysql-server

# if you want MariaDB, add the signing key and a repo first
# (substitute a real mirror for <MIRROR>):
apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db

# if you want v10.0.x
echo "deb <MIRROR>/mariadb/repo/10.0/ubuntu precise main" >> /etc/apt/sources.list
# if you want v5.5
echo "deb <MIRROR>/mariadb/repo/5.5/ubuntu precise main" >> /etc/apt/sources.list
apt-get update
apt-get install mariadb-server

# exit the container
exit

Commit the basic install; I like to note which version. Everything after this assumes 5.5, but the basic directions work for 10.0.x as well.

# one of the following:
docker commit <container_id> <YOU>/mariadb55
docker commit <container_id> <YOU>/mariadb100


Next, launch back into a new container, but this time with a data directory mounted. This example uses a host folder at $HOME/mysqldata. Inside the container, the directory is mapped to /data. This makes it easy to spin up different instances of MySQL without having to constantly configure new data dirs.

sudo docker run -v="$HOME/mysqldata":"/data"  -i -t -p 3306 <YOU>/mariadb55 /bin/bash


Back up my.cnf

cp /etc/mysql/my.cnf /etc/mysql/my.cnf.orig

Allow access from any ip address. This is obviously not secure for production, but more than enough for this example.

sed -i '/^bind-address*/ s/^/#/' /etc/mysql/my.cnf

Change the data dir

sed -i '/^datadir*/ s|/var/lib/mysql|/data/mysql|' /etc/mysql/my.cnf
rm -Rf /var/lib/mysql

Setup the new data tables:

mysql_install_db

Start the server and secure the installation:

/usr/bin/mysqld_safe &
mysql_secure_installation

Follow the prompts (typically, set a root password and answer Y for everything else).


Allow docker to login from wherever (Again, obviously not secure for production):

mysql -p --execute="CREATE USER 'docker'@'%' IDENTIFIED BY 'tester';"
mysql -p --execute="GRANT ALL PRIVILEGES ON *.* TO 'docker'@'%' WITH GRANT OPTION;"

Bail from the container, back to the host:

mysqladmin -p shutdown
exit

In the host, commit and tag:

sudo docker commit -m "mariadb55 image w/ external data" -author "<YOU>" <CONTAINER_ID> <YOU>/mariadb55 <SOME_TAG>


We can then run with:

sudo docker run -v="$HOME/mysqldata":"/data" -d -p 3306 <YOU>/mariadb55:<SOME_TAG> /usr/bin/mysqld_safe

See which port is being forwarded with:

sudo docker ps -a

At this point, you can access MariaDB at the host IP address and the forwarded port (usually 49xxx).


Iterate! Did you screw up the mysql_secure_installation step? Just exit the container and start from your last docker commit. Docker is all about iterating repeatable steps until you have an image ready to go.

There are lots of docker commands. Spend some time browsing the docs.

Use tags. The repo format is <USERNAME>/<IMAGENAME>:<TAG>.

2013 09 03

Grand Central Dispatch (GCD): Summary, Syntax & Best Practices

Queue and A

Apple originally described Grand Central Dispatch (GCD) this way:

  1. Threading is hard
  2. Using GCD makes it simple and fun

Both statements are correct.

Submitting Blocks to Queues

The primary mechanism of using GCD is by submitting blocks to queues or responding to events that pop out of queues. That’s it. There are different ways of submitting and many kinds of queues, some of them quite fancy. Ultimately, you are just scheduling tasks to be performed or performing tasks in response to events.

The magic part is that the concurrency aspect is handled for you. Thread management is automatic and tuned for system load. The usual concurrency dangers apply however: all UI must be done on the main queue and as always, check the documentation/googles to see if specific NS or UI bits are thread safe or not.

This post focuses on “submitting blocks to queues” but the buyer should be aware that libdispatch has more under the hood:

- Dispatch Groups        // coordinate groups of queues
- Semaphores             // traditional counting Semaphores
- Barriers               // synchronize tasks in a given concurrent queue
- Dispatch Sources       // event handling for low-level events
- Dispatch I/O           // file descriptor–based operations
- Dispatch Data Buffers  // memory-based data buffer

Creating or Getting Queues

It is worth repeating: the primary mechanism of using GCD is submitting tasks to queues.

The best way to conceptualize queues is to first realize that at the very low-level, there are only two types of queues: serial and concurrent.

Serial queues are monogamous, but uncommitted. If you give a bunch of tasks to each serial queue, it will run them one at a time, using only one thread at a time. The uncommitted aspect is that serial queues may switch to a different thread between tasks. Serial queues always wait for a task to finish before going to the next one. Thus tasks are completed in FIFO order. You can make as many serial queues as you need with dispatch_queue_create.

The main queue is a special serial queue. Unlike other serial queues, which are uncommitted, in that they are “dating” many threads but only one at time, the main queue is “married” to the main thread and all tasks are performed on it. Jobs on the main queue need to behave nicely with the runloop so that small operations don’t block the UI and other important bits. Like all serial queues, tasks are completed in FIFO order. You get it with dispatch_get_main_queue.

If serial queues are monogamous, then concurrent queues are promiscuous. They will submit tasks to any available thread or even make new threads depending on system load. They may perform multiple tasks simultaneously on different threads. It is important that tasks submitted to the global queue are thread-safe and minimize side effects. Tasks are submitted for execution in FIFO order, but order of completion is not guaranteed.

In Mac OS X 10.6 and iOS 4, there were only three built-in (global) concurrent queues; you could not make them, only fetch them with dispatch_get_global_queue. As of Mac OS X 10.7 and iOS 5, you can create them with dispatch_queue_create("label", DISPATCH_QUEUE_CONCURRENT). You cannot set the priority of a concurrent queue you create yourself. In practice, it often makes more sense to use the global concurrent queue with the appropriate priority than to make your own.

The primary functions used to create or get queues are summarized here:

dispatch_queue_create       // create a serial or concurrent queue
dispatch_get_main_queue     // get the one and only main queue
dispatch_get_global_queue   // get one of the global concurrent queues
dispatch_get_current_queue  // DEPRECATED

dispatch_queue_get_label    // get the label of a given queue

A quick note on dispatch_get_current_queue: it is deprecated, and it also didn’t always work in every case. If your implementation requires it, your implementation should be refactored. The most common use case was “run some block on whatever queue I am running on”. A refactored design should pass an explicit target queue along with the block as an argument or parameter, rather than trying to rely on the runtime to determine which queue to submit to.

Adding Tasks to the Queues

Once you have queues of your very own, you can make them useful by adding tasks to them.

The primary mechanisms for doing so are the following:

// Asynchronous functions
dispatch_async
dispatch_after

// Synchronous functions
dispatch_sync
dispatch_apply
dispatch_once

dispatch_async will submit a task to a queue and return immediately. dispatch_after also returns immediately, but delays submitting the task until the specified time.

dispatch_sync will submit a task to a queue and return only when the task completes. dispatch_apply submits a task to a queue multiple times (once per iteration) and returns when all iterations complete. dispatch_once submits a task once and only once over the application lifetime, returning when the block completes.

In practice, I find myself using dispatch_async, dispatch_after and dispatch_once the most.

Example Code:

// add ui_update_block to the main queue
dispatch_async(dispatch_get_main_queue(), ui_update_block);

// add check_for_updates_block to some_queue in 2 seconds
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, 2 * NSEC_PER_SEC), some_queue, check_for_updates_block);

// add work_unit_block to some_queue i times.
dispatch_apply(i, some_queue, work_unit_block);

// perform the only_once_block once and only once,
// waiting for completion before returning
static dispatch_once_t onceToken = 0; // It is important this is static!
dispatch_once(&onceToken, only_once_block);

// add blocking_block to background_queue & wait for completion
dispatch_sync(background_queue, blocking_block);

Queue memory management

GCD first became available in Mac OS X 10.6 and iOS 4. At that time, GCD objects (queues, semaphores, barriers, etc.) were treated like CFObjects and required you to call dispatch_release and dispatch_retain according to the normal create rules.

As of Mac OS X 10.8 and iOS 6, GCD objects are managed by ARC and as such manual reference counting is explicitly disallowed.

Furthermore, under ARC the following caveats apply:

  1. If you are using a GCD object within blocks that are used by the GCD object, you may get retain cycles. Using __weak or explicitly destroying the object (via mechanisms such as dispatch_source_cancel) are good ways around this. As of Xcode 4.6, the static analyzer does NOT catch this. Example:

    // Create a GCD object:
    dispatch_queue_t someQueue = dispatch_queue_create("someQueue", NULL);
    // put a block on the queue; the queue retains the block.
    dispatch_async(someQueue, ^{
        // capture the GCD object inside the block;
        // the block retains the queue and BAM! retain cycle!
        const char *label = dispatch_queue_get_label(someQueue);
        NSLog(@"%s", label);
    });

    // You can use the typical __weak dance to work around it:
    __weak dispatch_queue_t weakQueue = someQueue;
    dispatch_async(someQueue, ^{
        __strong dispatch_queue_t strongQueue = weakQueue;
        const char *label = dispatch_queue_get_label(strongQueue);
        NSLog(@"%s", label);
    });
  2. Lastly, this little nugget was buried in man dispatch_data_create_map. The GCD functions dispatch_data_create_map and dispatch_data_apply create internal objects, and extra care must be taken when using them. If the parent GCD object is released, the internal objects get blown away and bad things happen. Using __strong variables or the objc_precise_lifetime attribute on the parent dispatch_data_t can help keep the parent object alive.

    // dispatch_data_create_map returns a new GCD data object.
    // However, since we are not using it, the object is immediately
    // destroyed by ARC and our buffer is now a dangling pointer!
    dispatch_data_create_map(data, &danglingBuffer, &bufferLen);

    // By stashing the result in a __strong variable, our buffer
    // is no longer dangerous.
    __strong dispatch_data_t newData = dispatch_data_create_map(data, &okBuffer, &bufferLen);

Queues In Practice

Queues, like most powerful tools, can cause bodily harm if used inappropriately, and real-world usage requires some discipline. One guideline in particular deserves exploration: because queues are lightweight, you can make lots and lots of them. It is better to have many specialized serial queues than to stuff many disconnected tasks into one or two “mega” serial/concurrent queues.

Typical “purposeful” queues look like this:

//used for importing into Core Data so we don't block the UI
dispatch_queue_create("com.yourcompany.CoreDataBackgroundQueue", NULL);

//used to prevent concurrent access to Somefile
dispatch_queue_create("com.yourcompany.SomeFile.ReadWriteQueue", NULL);

//used to perform long calculations in the background
dispatch_queue_create("com.yourcompany.Component.BigLongCalculationQueue", NULL);

Practical queue usage typically involves nested dispatching:

dispatch_queue_t background_queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(background_queue, ^{
    // do some stuff that takes a long time here...

    // follow up with some stuff on the main queue
    dispatch_async(dispatch_get_main_queue(), ^{
        // Typically updating the UI on the main thread.
    });
});
Here we launch a long-running task on the background queue. When the task is complete, we finish up by triggering a UI update to be performed on the main queue.

Also be aware of excessively nested dispatching. It hampers readability & maintainability and should be considered a somewhat pungent code smell.

Advanced Studies

If you have particular interest in any of the quieter corners of GCD (dispatch groups, semaphores, barriers, etc.), let me know and I’ll write something up.

In the meantime, the usual sources of knowledge apply: the documentation available on the web and via Xcode, as well as the WWDC talks on GCD and blocks.

2013 09 10

Principles of Scalable Architectures

Most of these rules get broken in the name of performance. Make sure the tradeoff is worth it; an overcomplicated architecture has more components where something can go wrong.

When building or replacing a component, the general rule of thumb is to plan for two orders of magnitude of growth. That gives you room to grow without over-planning.

When choosing components to replace, find the bottleneck, widen it, and repeat.

2013 09 11

Notes & Summary of Gail Goodman’s The Long Slow SaaS Ramp Of Death

If you’ve ever had the thought “I know, I’ll make a SaaS product,” then Gail Goodman’s 2012 talk on the Long Slow SaaS Ramp of Death is for you.

Transcript and more here:

Here are my notes and summary of that presentation. Any errors are likely induced by me. I don’t know Gail, nor am I a customer of her company, but I loved this presentation.

The One Paragraph Version

The long, slow SaaS ramp of death: it just takes a long time to get to minimum critical mass.

The basic premise is that you may never see hockey stick user growth, but with SaaS it might not matter. If your customers are saying you have something and you have some growth, then over time (possibly a long and challenging time), the math of SaaS usually works out in your favor.

Another initial point she made was to think of SaaS as more like a flywheel than a hockey stick.

Avoid Mirages

Don’t fall for the mirages. There are lots of different components that seem like they will boost you to some next level (partners, new features, free tiers, virality, SEO, etc.). Rather, your mindset should be more like software development: there is no silver bullet. No one event or feature will induce hockey-stick growth; instead you will end up working on a thousand cumulative optimizations.

(08:40) Instead, you have to work the funnel by

… making sure that when someone tries or buys your product, they have a ‘wow’ experience, they get quick to an understanding and an outcome that blows them away.

The Funnel

(12:22) “I would argue that most of those little things will happen if you continue to view your business from your customer or user inward rather than from the metrics you want to change outward.”

Try to optimize the customer outcomes first. Spending significant time optimizing your landing page before nailing a feature set that delivers value is an inversion of the process.

(12:40) “The key to changing those internal metrics (funnel), is by starting with the view from your customer looking at your business and your experience. Not by looking at your metrics & trying to change your customer’s behavior.”

Some solutions were decidedly old-fashioned. Radio and free seminars worked for Constant Contact because small business owners often have radios on during the work day.

At the top of the funnel (landing pages, ad buys, etc.): Test, Scale, Tune & repeat.

Try to understand why customers weren’t flocking to you.

(22:35) “Quick to Wow.”

It’s all about optimizing the quick to wow.


It turns out the number one way to get them to stay is to get them successful early.

Human Nature: When faced with a learning curve, humans tend to learn just enough to get the job done, then stop learning. It is hard to get customers to look at new features.

(23:53) Middle & bottom of the funnel: Measure, test, repeat.

(25:00) Innovate everywhere. Not just on the tech side of the house.

Lifetime value

(26:20) A simple formula for calculating LTV (Lifetime Value of a single customer) starts with average customer lifetime, which is one over your monthly churn rate. In the case of Constant Contact, average monthly churn is 2.2%, and one over 2.2% is ~45 months of revenue per customer.
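A quick sketch of that arithmetic (the 2.2% figure is monthly churn; the $39 monthly price below is an invented number purely for illustration):

```python
def avg_lifetime_months(monthly_churn):
    """Average customer lifetime in months: 1 / monthly churn rate."""
    return 1 / monthly_churn

def lifetime_value(monthly_revenue, monthly_churn):
    """LTV: monthly revenue per customer x average lifetime in months."""
    return monthly_revenue * avg_lifetime_months(monthly_churn)

# Constant Contact's ~2.2% monthly churn implies a ~45 month lifetime:
print(round(avg_lifetime_months(0.022)))   # 45
# A hypothetical $39/month product at the same churn rate:
print(round(lifetime_value(39, 0.022)))    # ~1773
```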

As an aside, different industries and regions seem to have different acronyms for this concept. I’ve seen LTV, LCV, CLTV, CLV & LTCV.

Best blog post: David Skok: SaaS Metrics, A guide to measuring and improving what matters.

How did we survive?

“Operating at cash level: Only eating what we were killing.”

All spare cash went into marketing spend, because at that time CAC (Customer Acquisition Cost) was ~300 and LTV was ~1650.

2013 09 19

Tutorial: PostgreSQL Usage and Examples with Docker

So I’m a loyal acolyte in the church of docker. I also have this little schoolgirl crush on PostgreSQL. Here’s how you can combine both into a crime-fighting dream team.

The Long, Instructive Way

Just the basics:

Spin up a container, install a text editor and snapshot an image:

sudo docker run -i -t ubuntu:precise /bin/bash

Inside the container install a text editor (because the default precise image doesn’t come with one installed):

apt-get update
apt-get install vim-tiny

Snap an image. Your name is probably not amattn, however just for a moment, pretend otherwise. I know it is unpleasant, but only for a short while. I called my image precise-vim but you can call it dinglemuffin if you really want to.

sudo docker commit CONTAINER_ID amattn/precise-vim

Install the default PostgreSQL

Again with the spinning up of a new container:

sudo docker run -i -t amattn/precise-vim /bin/bash

Do the basic install. The assist with the repo info is credited to

apt-get update
apt-get install -y wget
wget -O - | apt-key add -
echo "deb precise-pgdg main" > /etc/apt/sources.list.d/pgdg.list
apt-get update
apt-get install -y postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3

Just a note: the above will install postgresql-9.3.X, where X is the latest patch release. At the time of this update (early Jan 2014) that is 9.3.2, but obviously that may or may not be the case when you read this.

Again with the snapping of an image. Just a note here, I got odd failures when my image names had capital letters (as of docker 0.6.1).

sudo docker commit CONTAINER_ID amattn/postgresql-9.3.2

Container Cleanup

You can list all containers with docker ps -a. We don’t actually need the containers that we used to create images: once we have images, we simply spin up totally new containers, while the sad, lonely ones we used to create the images get rm’d.


Typical configuration from here:

Here’s the magic part. We want to configure PostgreSQL to put its data in a directory at the container’s root called /data, and mount that directory from the docker host. This way, any container configured to look at /data can use a persistent directory on the host, and our data becomes decoupled from our container. In this example we use $HOME/postgresdata, but feel free to mount any host directory you like.

mkdir -p $HOME/postgresdata
sudo docker run -v="$HOME/postgresdata":"/data"  -i -t -p 5432 amattn/postgresql-9.3.2 /bin/bash

First setup our .conf & .hba files:

cp /etc/postgresql/9.3/main/postgresql.conf /data/postgresql.conf
cp /etc/postgresql/9.3/main/pg_hba.conf /data/pg_hba.conf

Use our custom data directory (/data/main) & .hba file:

sed -i '/^data_directory*/ s|/var/lib/postgresql/9.3/main|/data/main|' /data/postgresql.conf
sed -i '/^hba_file*/ s|/etc/postgresql/9.3/main/pg_hba.conf|/data/pg_hba.conf|' /data/postgresql.conf

Create /data/main/ and fill it with stuff.

mkdir -p /data/main
chown postgres /data/*
chgrp postgres /data/*
chmod 700 /data/main
su postgres --command "/usr/lib/postgresql/9.3/bin/initdb -D /data/main"

If you want to allow access from any IP address, the next three commands are for you. This is obviously a huge security risk, especially if you don’t have a firewall or similar in place. Caveat Developor.

sed -i "/^#listen_addresses/i listen_addresses='*'" /data/postgresql.conf
sed -i "/^# DO NOT DISABLE\!/i # Allow access from any IP address" /data/pg_hba.conf
sed -i "/^# DO NOT DISABLE\!/i host all all md5\n\n\n" /data/pg_hba.conf

Start PostgreSQL

su postgres --command "/usr/lib/postgresql/9.3/bin/postgres -D /data/main -c config_file=/data/postgresql.conf" &

# As the user postgres, create a user named docker
su postgres --command 'createuser -P -d -r -s docker'

# As the user postgres, create a db docker owned by postgres user docker
su postgres --command 'createdb -O docker docker'

Shutdown PostgreSQL

su postgres --command '/usr/lib/postgresql/9.3/bin/pg_ctl --pgdata=/data/main stop'

Now we commit, but this time we use a tag! Until now, all our commits were for general-purpose containers. Even though all data and configuration live “outside” the container, we still want to be able to identify what purpose a container serves. As of this writing, tags are the best way to do so.

sudo docker commit CONTAINER_ID amattn/postgresql-9.3.2 TAGNAME

I’ve found that tags in the format of amattn/component:appname work very well in practice.


The tags also help us remember not to delete those containers.

Launching the Container

Launch the container with the run command. Notice that we aren’t spinning up a shell anymore. We are launching a container with the tag TAGNAME, running a single process (postgres) as the user postgres, with a random host port forwarded to the container’s port 5432 and a host directory mounted at the container’s /data.

sudo docker run -v="$HOME/postgresdata":"/data" -d -p 5432 amattn/postgresql-9.3.2:TAGNAME su postgres --command "/usr/lib/postgresql/9.3/bin/postgres -D /data/main -c config_file=/data/postgresql.conf"

At this point, the container should be humming along in the background. You can even prove it to your disbelieving self with the ps command. In particular, the status column should list an uptime and not an exit code:

docker ps -a

Start and stop the container with:

sudo docker stop CONTAINER_ID
sudo docker start CONTAINER_ID

Get the host port with either of:

sudo docker ps -a
sudo docker port CONTAINER_ID

The Short, Borderline Cheating Way

In the host:

mkdir -p $HOME/postgresdata
sudo docker run -v="$HOME/postgresdata":"/data"  -i -t -p 5432 amattn/postgresql-9.3.2 /bin/bash

Inside the container:

cp /etc/postgresql/9.3/main/postgresql.conf /data/postgresql.conf
cp /etc/postgresql/9.3/main/pg_hba.conf /data/pg_hba.conf
sed -i '/^data_directory*/ s|/var/lib/postgresql/9.3/main|/data/main|' /data/postgresql.conf
sed -i '/^hba_file*/ s|/etc/postgresql/9.3/main/pg_hba.conf|/data/pg_hba.conf|' /data/postgresql.conf

mkdir -p /data/main
chown postgres /data/*
chgrp postgres /data/*
chmod 700 /data/main
su postgres --command "/usr/lib/postgresql/9.3/bin/initdb -D /data/main"

# OPTIONAL: configure /data/postgresql.conf & /data/pg_hba.conf to allow access from trusted IP addresses

# Start PostgreSQL
su postgres --command "/usr/lib/postgresql/9.3/bin/postgres -D /data/main -c config_file=/data/postgresql.conf" &

# OPTIONAL: add PostgreSQL user(s), do other setup and config

# Stop PostgreSQL
su postgres --command '/usr/lib/postgresql/9.3/bin/pg_ctl --pgdata=/data/main stop'


Back in the host, optionally commit and tag. Launch the container with the run command:

sudo docker run -v="$HOME/postgresdata":"/data" -d -p 5432 amattn/postgresql-9.3.2:OPTIONAL_TAGNAME su postgres --command "/usr/lib/postgresql/9.3/bin/postgres -D /data/main -c config_file=/data/postgresql.conf"
2013 10 09

PaSH is the New SaaS, or How to Jump the Long Slow Ramp of Death

The Math of SaaS

There are tons of Software as a Service apps out in the world. A big part of the reason is that the math of SaaS apps allows for single founders or small teams to achieve livable wages with low risk.

How achievable? If your SaaS app solves a business need or produces some desired outcome, and you charge an average of US $60 a month, you only need ~150 customers to break 6 figures in annual revenue. 150 customers is small enough that you can almost brute force a customer base with old-school tactics like trolling niche meetups and even cold-calling customers to setup demos.
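The back-of-the-envelope math above, sketched out:

```python
# "charge an average of US $60 a month, you only need ~150 customers
# to break 6 figures in annual revenue"
monthly_price = 60
customers = 150
annual_revenue = monthly_price * customers * 12
print(annual_revenue)  # 108000
```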

So, if you don’t want to try your hand in the App Store Casino or the Build-The-Next-Instagram Lottery, but you also don’t want to work for the Man/Woman, SaaS is a fairly well-trodden path.

However, SaaS means less risk, not riskless. Where does SaaS break down? First, it’s got a potentially painful, long, and shallow growth curve. This was masterfully described by Gail Goodman in her talk, The Long, Slow SaaS Ramp of Death (my notes here).

Small-team SaaS apps have the usual problems, such as being responsible for customer development, product sales, and marketing, despite the fact that you may only be a domain expert in one or two of the three. This is one of the biggest problems with SaaS, especially in the early stages of development when there isn’t much cash flow.

Cash is King

With a SaaS app, the more cash you have, the more your options open up. You can spend cash on customer acquisition for long-term revenue bumps, or you can spend it on improving your product to help with customer retention.

In other words, working SaaS apps are magic green boxes that eat Cash and Time, and spit out more money.

Cash flow in a SaaS app, however, is incremental. Each new customer only gets you ~$60. The total lifetime value of the customer is locked up in the future.

The traditional workaround is to offer a month or two free if the customer pays for a year in advance. With annual prepay, each customer brings in ten times more delicious lucre than the otherwise incremental $60.

Now you have ten times as much cash to blow on customer acquisition, requested features, or that new ping pong table.

Enter PaSH apps

A more recent phenomenon I’ve seen in the wild is the notion of Product & Service Hybrid apps.

PaSH: Product & Service Hybrid

In this case, we are talking about apps that pair a software product with a human-powered service. The typical PaSH app can be thought of as a SaaS app with a micro-consultant add-on to help you leverage the SaaS app.

For example, Optimization Robot is an upcoming combination of an A/B testing platform and a set of experts who help you run tests, suggest new tests, and monitor the results. Lead Genius is a combination of lead-gen lists and a lead-gen army. Churn Buster combines email dunning with humans who call expired accounts. No SaaS app can compete with voices on the phone.

The more traditional path is actually to come at it the other way: starting with a consulting service and adding product over time. Copy Hackers might be the epitome of this, productizing their consulting with ebooks, videos, courses, and more.

Jump the Long Slow Ramp of Death

If you’ve been reading closely, the Long, Slow Ramp Of Death refers to the slow wind-up times that SaaS apps typically run into.

PaSH apps always have the option to charge for the service component. The basic premise is that the service component delivers more value than just the monthly cost of the product, but costs less than an equivalent consultant or contractor. This can lead to large cash flow spikes and incremental monthly revenue much higher than the $60 you could charge for a code-only solution.

This potentially allows you to ride out the most grueling part of the ramp. Even one or two service add-ons might mean the difference between crashing into the end of the runway versus achieving takeoff velocity.

You will still have the awkward points while your customer count is <= 10, or when you start thinking about your first hire. PaSH is more risk mitigation and less sparkling, magical, silver bullet.

Concierge is a fancy word for Human-Assisted Onboarding

The Service part doesn’t have to be ongoing nor does it even have to have an explicit cost. The term “concierge service” is making the rounds and can have a measurable effect on customer acquisition and churn rate. Drip does this very well, and their concierge service is free as in beer, but awesome as in customer experience.

Lastly, despite being an unemotional husk, even I won’t discount the value of human contact with your customer. If you or your workers spend any time interacting with users, that’s the perfect opportunity to build trust, discover desired outcomes, and ultimately turn customers into advocates.

What then?

In the early days of a product, think of PaSH as life-support/plan B.

As your business evolves and matures, you should evolve the service component as well. One common direction is for the service component to become less high-touch, one-off work and more scalable, maintenance-type work. Ideally, it evolves into an ongoing arrangement that benefits both parties: the SaaS operator gets higher monthly revenue, and the customer reaps the benefits of improved business outcomes without the hassle of a W-2 or W-9 form. You want the customer to be thinking: “really, really inexpensive, highly optimized outsourcing.”

At the end of the day, your options are not limited to the typical indie playbook of contracting, then moving to a download or SaaS product. As long as you have customers who value business outcomes, a hybrid product/service is a potentially less perilous option.

Thanks to Micheal Buckbee, Tim Cull, Andrew Culver, Aaron Francis, Jeffrey Abbott for reading drafts & providing feedback.

2013 10 15

Venture Capital Math 101: Pre-money, Post-money, Seed, Series A, B, C, D, up rounds and down rounds

Have you ever heard someone say “We raised 4mil A round at 20 pre”?

Like all trades, tech financing has its own vernacular, but there is nothing in the above quote you can’t figure out with basic math. Soon, you too can decode the crazy moon language spouted by entrepreneurs, investors & tech media.

Raising money from investors (VCs, angels, even friends & family) is always about two things:

  1. Price
  2. Control

This post will focus on the price-of-ownership bit. Also, caveat lector: like snowflakes, the process of raising funds is unique, fragile, and sometimes packed into a hard ball of ice that will give you a black eye. Every statement and example in this post has variations and exceptions.


Step 0 is to split initial ownership among the founders.

If you are a single founder, this step is quite easy. Most of our examples will use a 75/25 split between two cofounders, just to make the math interesting.

Seed round

If you raise a seed round in Silicon Valley, odds are that you are getting money from angels. Odds are also high that you’ll be using something called a convertible note.

This means that the angels are essentially loaning you money, and they expect to be paid back in the form of stock when you raise a Series A.

The thing about raising money is that you need a valuation. If you don’t know how much a company is worth, people cannot buy shares of it. In the stock market, the buyer and seller agree on a price for some shares; multiply that price by all outstanding shares and you have a simple calculation of the value of the company.

During the fundraising process, this is flipped around. The investors and the founders usually start with a ballpark number for how much money the company wants to raise. Our example here is 1 million dollars. Starting from that, the parties negotiate how much percentage ownership of the company that 1 million dollars buys. The actual valuation is then just simple math:

Founders want = 1 million
VC wants = 20% ownership

ownership calculation:
founders  |  VCs  | total
   80%    |  20%  |  100%

valuation calculation:
founders  |  VCs  | total
   4MM    |  1MM  |  5MM
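The same back-of-the-envelope valuation math, in code form (a toy sketch, not financial tooling):

```python
def seed_valuation(raised, investor_pct):
    """Back out valuations from cash raised and the % it buys."""
    post_money = raised / investor_pct   # 1MM / 20% = 5MM for the whole company
    pre_money = post_money - raised      # the founders' 4MM slice
    return pre_money, post_money

pre, post = seed_valuation(1_000_000, 0.20)
print(pre, post)  # 4000000.0 5000000.0
```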

Back to convertible notes:

The problem is that early, early seed-stage companies cannot accurately set valuations. During the seed round, convertible notes allow angels and other very early investors to give money to the company without setting a valuation. Basically, the angels are saying: “I will give you this money now, and later, during your Series A, when you do set a valuation, use that number to issue me the right number of shares.” In practice, angels get what are essentially bonus shares as a reward for taking on the higher risk that comes with investing so early. These bonus shares are usually granted through caps or discounts; for now, just be aware of the terms and their purpose.

So the typical Seed Round looks like this:

ownership calculation:
founders  |  Angels  | total
   100%   |   TBD    |  100%

valuation calculation: 
TBD, but angels have put in $X in convertible notes with a 50% discount.

Series A

Here’s where things get fun. In our story, the founders are doing well. They raised money from angel investors, launched a product and got traction.

This is a story, so we skip the part about self-doubt, the valley of sorrow, the struggle to grow, and the co-founders’ lack of communication and accusations about each other’s level of commitment.

In our story, the founders started with a 75/25 split, got 200k from angels & now want to raise 2 million dollars (2MM) in the form of a Series A.

We now have to get a little bit into the mechanics of ownership. Ownership is done through shares. In the simple case, your ownership is the number of shares you own divided by all outstanding shares. Fairly simple. Our founders originally issued 1 million shares, so Founder A got 750,000 shares and Founder B got 250,000 shares.

Initial ownership calculation:
founders  | total
   1MM    |  1MM

Series A ownership calc (in shares):
founders  |  Series A  | total
   1MM    |     TBD    |  1MM + TBD

If the Series A investors and the founders agree to give up 20% of the company for the 2MM dollars, then we can start filling in the table:

Series A ownership calc (in shares):
founders  |  Series A  | total
   1MM    |     TBD    |  1MM + TBD

Series A ownership calc (in %):
founders  |  Series A  | total
   80%    |     20%    |  100%

At this point basic algebra kicks in; just solve for TBD:

Series A ownership calc (in shares):
founders  |  Series A  |  total
   1MM    |     250k   |  1.25MM

Also at this point, we can calculate the price of a share. The VCs paid 2MM and got 250k shares: 2MM/250k works out to 8 dollars a share.

Series A ownership calc (in shares,%):
founders  |  Series A  |  total
   1MM    |     250k   |  1.25MM
   80%    |     20%    |  100%

What about the angels? Normally they would get their own column, but here the angels got rolled up into the Series A. In our example, the angels got a 50% discount on the price of their shares. That means they were able to buy shares at 4 dollars a share, and their original 200k convertible note is magically transformed into 50,000 shares. For the sake of simplifying the math, assume that 50k of the 250k Series A shares belong to the angels.

Pre, Post

We also see that there are 1.25 million shares outstanding, worth 8 dollars a share. Simple math gets us a total company post-money valuation of 10 million dollars.

Since the founders raised 2MM, the pre-money valuation is 8MM.

The simple formula works like this:

pre-money val + size of round = post-money val
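That formula, plus the share math from the Series A example, can be sketched as follows (a toy model; real rounds add option pools and other wrinkles):

```python
def price_round(existing_shares, raised, investor_pct):
    """Solve a round from the % sold: new shares, price, pre/post valuations."""
    total_shares = existing_shares / (1 - investor_pct)
    new_shares = total_shares - existing_shares
    price = raised / new_shares
    post_money = price * total_shares
    return new_shares, price, post_money - raised, post_money

# Series A: 1MM founder shares, $2MM buys 20% of the company
new, price, pre, post = price_round(1_000_000, 2_000_000, 0.20)
print(round(new), price, pre, post)  # 250000 shares at $8; pre 8MM, post 10MM
```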

Series B

The real fun comes with Series B. There are two basic ways things can go from here: better or worse. In the case of better, the founders can raise more money at a higher price (an up round). In the case of worse, the founders raise more money at a lower price (a down round).

Let’s be optimistic: The founders are doing very well. They’ve got product market fit, and good cash flow and now want to raise 20 million dollars to accelerate the business. Time for a Series B.

Series B ownership calc (in shares,%):
founders  |  Series A  |  Series B  |  total
   1MM    |     250k   |    TBD     |  1.25MM + TBD
   ???    |     ???    |    ???     |  100%

The math is similar, but now we have a reference point. In this round, the founders and Series B investors have agreed that the company is doing well and is worth much more than before.

When a startup is doing well, the Series B is usually made up of all of the Series A investors plus some new ones.

In this case, the founders and investors have agreed to a 20 million round at a pre-money valuation of 180 million dollars.

Our post money valuation is 200 million dollars. It also means that the Series B investors have 10% of the company:

Series B ownership calc (in shares,%):
founders  |  Series A  |  Series B  |  total
   1MM    |     250k   |    TBD     |  1.25MM + TBD
   ???    |     ???    |    10%     |  100%

To solve for TBD, we know that 1.25MM / TBD = 90% / 10%, so TBD resolves to ~139k shares at 144 dollars per share. At this point, we can use algebra to solve for the remaining ??? entries:

Series B ownership calc (in shares,%):
founders  |  Series A  |  Series B  |  total
   1MM    |     250k   |    139k    |  1.39MM
   72%    |     18%    |    10%     |  100%
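The same Series B math, sketched from the pre-money side (a toy model using the numbers from the example above):

```python
def price_round_from_premoney(existing_shares, raised, pre_money):
    """Price the round off an agreed pre-money valuation."""
    price = pre_money / existing_shares
    new_shares = raised / price
    investor_pct = new_shares / (existing_shares + new_shares)
    return price, new_shares, investor_pct

# Series B up round: $20MM raised at a $180MM pre-money valuation
price, new, pct = price_round_from_premoney(1_250_000, 20_000_000, 180_000_000)
print(price, round(new), round(pct * 100))  # $144/share, ~138889 shares, 10%
```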

One more quick term: dilution

See how the founders’ % of ownership drops at each round? That’s called dilution. When you join a startup and have X%, you have to expect your % to go down. This is okay, though, because the shares you own are essentially worthless the day you start the company; after every up round, they are worth more. In our fictitious case here, the price per share went from 0 to 8 to 144 dollars. Dilution is no fun, and unfortunately it’s built into the system.

Series B(ad)

In our optimistic example, we saw an up round. What about a down round?

Imagine our founders have not quite nailed product-market fit, or are trying to add a few more features to reach critical mindshare, or need an influx of funds to finally make a huge marketing push. As long as the investors believe in you, you are okay.

If the investors have lost faith, you might be in trouble. A down round looks like this:

Instead of a humongous 200MM post-money valuation, you get a much smaller one, and you are only able to raise 5MM instead of the 20 you wanted. Furthermore, that 5MM is at 2 dollars per share. That means the Series B team bought 2.5MM shares, two thirds of the company (5MM / $2 per share = 2.5MM shares):

Series B ownership calc (in shares, ~%):
founders  |  Series A  |  Series B  |  total
   1MM    |     250k   |    2.5MM   |  3.75MM
   27%    |      7%    |    66%     |  100%

At 2 dollars per share, the new post-money valuation of company is 2 * 3.75MM or just 7.5 million dollars.

Also, notice how much more severe the dilution is during a down round. Your pride and joy and blood and sweat and tears just got gobbled up. Furthermore in the event of an exit, your personal stake is cut sharply.

Nobody (not even the investors) likes down rounds.

There’s one last variation called an even (or flat) round. Those aren’t great, but they are better than down rounds.

Series C and beyond.

Series C works just like Series B. So does D-Z.

One last point. That table we made above? It’s called a cap table. It’s usually a rather unremarkable Excel spreadsheet that investors and founders have been fighting over for months.


These are fictitious examples with numbers pulled out of thin air for the purpose of education. No founders or investors were harmed in the authoring of this post.

Do you want a real education? Go buy this book: Venture Deals: Be Smarter Than Your Lawyer and Venture Capitalist

Let’s revisit our pithy lead: “We raised 4mil A round at 20 pre”

Now you know that 20MM pre-money + 4MM round = 24MM post-money valuation. The Series A investors got ~17% of the company, and the founders and seed/angels got the rest.
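Decoding that sentence in code (toy arithmetic, with values in millions):

```python
pre_money = 20          # "20 pre"
round_size = 4          # "4mil A round"
post_money = pre_money + round_size
series_a_pct = round_size / post_money
print(post_money, round(series_a_pct * 100))  # 24, 17
```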

Concepts you should have learned: pre-money and post-money valuations, convertible notes (caps and discounts), share price math, up rounds, down rounds, dilution, and cap tables.

the fine print:
© matt nunogawa 2010 - 2019 / all rights reserved