Serverless And The DORA Research Cloud Adoption Path To Elite Performance

Abner Germanow

If you even partly believe Marc Andreessen's 2011 "software is eating the world" comment, it stands to reason that companies that are good at software will be the winners in a digital world. Given this, I find it ironic that little large-scale research has gone into what it takes to be good at software. Despite the $6B a year spent on IT research, there is only one research company with a long-term focus on developers (RedMonk) and one research team with a long-term focus on what it takes to successfully run a world-class software organization (DORA). All the other firms are playing catch-up.

If you aren't familiar with DORA's work, you should be. Stemming from the State of DevOps research originally sponsored by Puppet Labs in 2014, the annual study quickly grew in sample size and breadth, and added a connection to business outcomes by looking at the financial results of public companies. The research draws on data from both the annual public survey of software teams and a private benchmarking service. It's fair to say that Dr. Nicole Forsgren, Jez Humble, and company have collected more data on software team behaviors than anyone else in the world.

Defining proper clouding

Among the headlines of the 2018 study mangled by overly excited people on Twitter was the notion that teams using cloud are 23 times more likely to be elite performers than other software teams.

Check out this video where Nicole and Jez troll an auditorium full of software leaders on the truth of that line.

According to the NIST definition that Nicole and Jez rightly subscribe to, what does using cloud well actually mean? From an outcomes perspective, you are 23 times more likely to be an elite performing software team if you are using the cloud properly. By the NIST definition, that means your use of cloud services should follow this list of attributes:

1. On-demand self-service. Anyone in the organization can self-service the cloud resources they want on demand.

2. Broad network access. Access cloud resources on any device.

3. Resource pooling. The provider pools compute resources and manages them efficiently across many customers.

4. The appearance of infinite resources. Or as Dr. Forsgren says, "Bursty like magic."

5. Measured services. You only pay for what you use.

Serverless and clouding properly

Everyone wants to be an elite performer, so let's look at this list through the lens of serverless and Stackery. I'm going to reverse the order because #5 and #4 carry the core definitions of what I mean when I say the word serverless.

5. Measured services. You only get charged for what you use.

Check plus on this one. The number of services now available on a charge-by-use basis is skyrocketing: serverless databases, API gateways, storage, CDNs, secrets management, GraphQL, data streams, containers, functions, and more. These services represent both a focal point of cloud service provider innovation and a way for most companies to shed an undifferentiated burden. When these services are used as application building blocks, they significantly reduce the amount of code a team needs to write to deliver an application into production.

Another often overlooked aspect of these pay-for-use services is that they are configured, connected, and managed with infrastructure as code. Stackery makes the process of composing these services into an application architecture super easy, enabling teams to test and swap out the services best suited to the behaviors of their application.
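
To make that more concrete, here's a minimal sketch of what composing two pay-per-use services with infrastructure as code can look like. It uses the AWS CDK for Python rather than Stackery's own tooling, and the stack, table, and function names are illustrative assumptions, not anything from Stackery.

```python
# Illustrative sketch only: composing pay-per-use services with infrastructure
# as code using the AWS CDK for Python. Names and runtime are assumptions.
from aws_cdk import core
from aws_cdk import aws_dynamodb as dynamodb
from aws_cdk import aws_lambda as _lambda


class OrdersStack(core.Stack):
    def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Serverless database: billed per request, costs nothing while idle.
        table = dynamodb.Table(
            self, "OrdersTable",
            partition_key=dynamodb.Attribute(
                name="orderId", type=dynamodb.AttributeType.STRING
            ),
            billing_mode=dynamodb.BillingMode.PAY_PER_REQUEST,
        )

        # Function that uses the table: billed per invocation.
        handler = _lambda.Function(
            self, "OrdersApiFn",
            runtime=_lambda.Runtime.PYTHON_3_8,
            handler="app.handler",
            code=_lambda.Code.from_asset("src"),
            environment={"TABLE_NAME": table.table_name},
        )

        # Connecting the services is also code: a least-privilege access grant.
        table.grant_read_write_data(handler)
```

Because the wiring lives in code, swapping one managed service for another better suited to the application is a template change, not a procurement exercise.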

4. The appearance of infinite resources. "Bursty like magic."

Again, another check plus. Not only are all the services in the prior section evolving and innovating like mad, most of them can also automatically burst well past what most enterprise cloud ops teams can support. Most can scale right down to zero, too. This scaling behavior even shifts how developers prioritize and structure the code they write.

See James Beswick's take on saving time and money with AWS Lambda using asynchronous programming: Part 1 and Part 2.
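
As a rough illustration of that asynchronous pattern (my sketch, not code from either article): the synchronous, user-facing function hands slow work to a queue and returns immediately, so the request path is billed for milliseconds rather than the full duration of the downstream work. The queue URL variable and payload shape are assumptions.

```python
# Illustrative sketch of offloading slow work asynchronously in AWS Lambda.
import json
import os

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["QUEUE_URL"]  # assumed to be wired up by the template


def handler(event, context):
    order = json.loads(event.get("body") or "{}")

    # Hand the slow part (payments, emails, etc.) to a queue-triggered worker
    # that scales on its own and can retry independently of the request path.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(order))

    # Return right away; the caller gets an acknowledgement in milliseconds.
    return {"statusCode": 202, "body": json.dumps({"status": "accepted"})}
```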

3. Resource pooling.

Check plus plus? With serverless, resource pooling isn't even a thing anymore. When you build apps on foundational building blocks of serverless databases, storage, functions, containers, and whatever else you need, resource pooling is the cloud provider's problem.

2. Broad network access. Access cloud resources on any device.

Ok, sure. I'll admit, I think this one is intended to throw a wet blanket on private cloud-ish datacenters where resource access is limited to black Soviet-era laptops. Otherwise, the public cloud, including all the serverless offerings, checks the box on this one.

1. On-demand self-service. Anyone in the organization can self-service the cloud resources they want on demand.

With Stackery? Check plus.

Without Stackery, serverless offerings go some way toward solving this problem, but as soon as you add existing databases or multiple cloud accounts, things get tough to manage as the number of services and collaborators working on the application grows.

When building server-centric applications, developers replicate a version of the services running in production on their laptops: databases, storage, streaming, and other dependencies. They then test and iterate on the app until it works on the laptop. When developing serverless apps, that localhost foundation shifts to a cloud services foundation: the application code is still cycled on the developer's laptop, but the rest of the stack, and the app as a whole, needs to be tested and iterated on cloud-side.
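
A small, hypothetical sketch of what that cloud-side iteration can look like: the handler code is still edited on the laptop, but integration checks run against the function and services deployed in a developer's own cloud environment instead of against localhost replicas. The function name and payload below are made up for illustration.

```python
# Illustrative integration check against a deployed dev-environment function.
import json

import boto3

lambda_client = boto3.client("lambda")


def test_create_order_in_dev_environment():
    response = lambda_client.invoke(
        FunctionName="orders-service-dev-CreateOrder",  # hypothetical dev stack name
        Payload=json.dumps({"body": json.dumps({"orderId": "123"})}),
    )
    result = json.loads(response["Payload"].read())
    assert result["statusCode"] == 202
```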

This is the opposite of how many organizations operate, where access to a cloud provider account requires an official act from on high, a remnant of the days when compute resources were really expensive. It's also why developers at those same companies have personal cloud accounts. While I'm sure that's fine from a security perspective (not), even in companies that do provision developer accounts, cloud providers don't have native ways of managing dev/test/prod environments.

That's where Stackery comes in to automate the namespacing, versioning, and attributes of every environment. For example, dev and test environments should access test databases while prod should access the production database. Stackery customers embracing serverless development generally provision each developer with two AWS dev environments, and then a set of team dev, test, staging, and production environments across several AWS accounts.
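
For illustration, here's a minimal, hypothetical example of environment-aware handler code. The code is identical in every environment; only the TABLE_NAME value injected per environment changes, so dev and test stacks write to test tables while prod writes to the production table.

```python
# Illustrative sketch: the environment, not the code, decides which database
# this function talks to.
import os

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ["TABLE_NAME"])  # set per environment


def handler(event, context):
    # Same code path everywhere; dev/test hit test tables, prod hits production.
    table.put_item(Item={"orderId": event["orderId"], "status": "received"})
    return {"ok": True}
```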

Anyone can become an elite performer

As Dr. Forsgren says, being an elite performer is accessible to all; you just have to execute. With Stackery, an AWS account, your existing IDE, Git repo, and CI/CD, you too can be on your way to being an elite performer. Get started today.

And make sure you go take the 2019 survey!

Related posts

Serverless is Awesome For APIs
