AWS + Developer Lock-in

The recent claims of ‘developer lock-in’ to AWS from one of the newer PaaS companies got me thinking. Big dev ecosystems draw developers in with services + price points that are attractive; the more developers they pull in, the bigger they get, and the more they can offer. Rinse, repeat.

Scale matters when it comes to a company’s capacity to lower, and then maintain, the pricing of its services, obviously. This creates and feeds a cycle that can fuel massive growth very rapidly. Whether or not you’d call six years ‘rapid’, there’s no denying that AWS today is massive, both in pure scale and in terms of being the infrastructure behind thousands (and thousands) of unrelated services.

I think AWS pricing is aggressive but not predatory, i.e. it’s not akin to dumping product in the marketplace to gain share. When Amazon began rolling out AWS services in 2006, starting with the now ubiquitous (and now almost entirely hidden behind the 3rd-party APIs that use it) S3, the stated mission was “…selling excess Amazon capacity”.

While that may have been true initially (I doubt it), it quickly became apparent that AWS was a viable and rapidly growing stand-alone business. It took off for three reasons:

✓  Ease of Use

Initially, using S3 was somewhat difficult because there wasn’t yet any apparent way to use the storage in the manner to which most devs were accustomed: as a mountable block-storage device, i.e. a disk. I personally recall spending hours of frustrating work connecting S3 to a local FUSE file system (yes, it worked; no, it didn’t scale well). The early interface, just a bucket plus some unique key per object, made real use pretty difficult. You could store files in S3, sure, but you couldn’t actually *interact* with them as normal files. Yet.

Quickly, though, a handful of libraries emerged that let developers use S3 through file-system abstraction layers, exposed as simple API endpoints in whatever language they were already using. This was a huge breakthrough: the prospect of unlimited, on-demand file storage was now a reality. And tons of companies, especially startups that had no legacy infrastructure to consider, began using S3 immediately.
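To make that concrete, here’s a minimal sketch of what “storage behind an API” looks like, written with today’s boto3 library (which postdates the era I’m describing). The bucket and key names are made up for illustration, and it assumes AWS credentials are already configured.

```python
# A rough sketch of using S3 as simple key/value file storage through its API.
# Assumes AWS credentials are configured (env vars or ~/.aws/credentials);
# the bucket and key names below are placeholders, not real resources.
import boto3

s3 = boto3.client("s3")

# "Write a file": upload some bytes under a bucket + key.
s3.put_object(
    Bucket="example-startup-assets",
    Key="uploads/report.txt",
    Body=b"hello from S3",
)

# "Read the file" back using the same bucket + key.
obj = s3.get_object(Bucket="example-startup-assets", Key="uploads/report.txt")
print(obj["Body"].read().decode("utf-8"))
```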

✓  Solves a Real Problem

S3 and the subsequent EC2 meant that anyone, anywhere, with just a credit card and a little bit of effort, could spin up a brand-new server, configured exactly (more or less) the way they wanted, attach an effectively unlimited disk to it and start doing stuff. This meant that classic systems administrators, the people who had the specialized knowledge of how to spec out a system, drop a disk into it, get the interfaces set up and so on, were no longer necessary.

In fact, provisioning a system could now be done with a web form in a few clicks. The hardcore sys admin was no longer required; the desired expertise moved up the stack to become what we’re all calling devops today: half developer, half operations engineer. S/he develops solutions that require deep integration with the operational infrastructure, like the AWS components, as well as higher up the stack with the functional infrastructure, such as a Hadoop cluster. Nowhere in this scenario does anyone need to edit /etc/fstab or create xinetd service files (and deal with all the shit that comes along with that).

Instead, the devops folks only need to understand how + where to use the APIs that expose these services to their application. And that solves a huge problem across the board: a startup can be up and running quickly without the separate headcount of a sys admin who does nothing else. Engineering time (read: money) can be devoted almost entirely to the development of the core application or service. This is like getting an extra utilization bump for free.
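For a rough sense of what “provisioning through an API instead of /etc/fstab” means in practice, here’s a minimal sketch, again using the modern boto3 library. The AMI ID is a placeholder, and credentials are assumed to be configured already.

```python
# Sketch of provisioning a server purely through an API call.
# Assumes configured AWS credentials; the AMI ID is a placeholder,
# not a real image you should launch.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# One API call replaces the old spec-a-box / rack-it / cable-it workflow.
response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",    # placeholder AMI for illustration
    InstanceType="t2.micro",   # small pay-as-you-go instance size
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print("Launched", instance_id)
```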

✓  Pay as You Go

This one is huge. Previously, a startup had to purchase physical servers, arrange co-location at a data center, and sign a long-term contract that locked them into the market prices *of that moment*. This could mean years of paying for services identical to what AWS was offering, at a 2x or 3x premium, complete with non-budgeted bandwidth overage charges. Being able to simply pay for what you need, as you need it, with no up-front costs, no equipment purchase and no contract, has enabled so many startups to cost-effectively get traction that its importance cannot be overstated.

As a result of these factors, thousands and thousands of companies, mostly startups, but established companies too, flocked to AWS. As these developers integrated AWS into their applications, AWS simultaneously kept rolling out additional services. It is now possible to develop and deploy an application entirely on AWS infrastructure, where the only non-AWS piece a user of the application ever touches is the application itself.

And almost unbelievably, AWS still occasionally lowers the pricing on one or more of its services.

So there’s your lock-in. But is this a bad thing?