
How Does Docker Fit In A Serverless World?

By Chase Douglas (@txase)

The debut of AWS Lambda in 2014 spawned a debate: serverless vs Docker. There have been countless articles comparing cost efficiency, performance, constraints, and vendor lock-in between the two technologies.

Thankfully, the second half of 2017 has shown this all to be a bit beside the point. With recent product announcements from Azure and AWS, it is clearer than ever that serverless and Docker are not opposing technologies. In this article, we’re going to take a look at how Docker fits into a serverless world.

Docker Isn’t Serverless (circa 2016)

“Serverless” has about a dozen definitions, but a few are particularly important when we talk about Docker:

  • On-demand resource provisioning
  • Pay-per-use billing (vs up-front provisioning based on peak utilization)
  • Elimination of underlying server management

Until recently, Docker-based applications deployed to the major cloud providers were not “serverless” according to these three attributes. In the beginning, Docker containers were deployed to servers directly, and Ops teams had to build custom solutions to provision and manage pools of these servers. In 2014, both Kubernetes and AWS EC2 Container Service (since renamed Elastic Container Service) enabled easier provisioning and management of these server pools.

Kubernetes and AWS ECS provided two key benefits. On the technical side, they provided templates for provisioning pools of servers to run Docker containers, making it easier for devops teams to get started and to maintain their clusters over time. On the business side, they provided proof that Docker as a technology was mature enough for production workloads. Partly because of these tools, in the past few years Docker became an increasingly common choice for hosting services.

And yet, with all that Kubernetes and AWS ECS provided, we were still left with the tasks of optimizing resource utilization and maintaining the underlying servers that make up the Docker cluster resource pools.

Docker Is Serverless (2017)

Two new services have brought Docker into the serverless realm: Azure Container Instances and AWS Fargate. These services enable running a Docker container on-demand without up-front provisioning of underlying server resources. By extension, this also means there is no management of the underlying server, either.

According to our definition above, Docker is now “serverless”. Now it starts to make sense to compare Docker and Functions-as-a-Service (FaaS), like AWS Lambda. In one sense, we’ve come full circle back to our familiar comparisons between Docker and “serverless”. Except the goal has shifted from the less useful question of which technology is “better” to the more interesting question of when you should use Docker vs when you should use FaaS.

FaaS vs Docker

Going back to the dozen definitions of “serverless”, there are a few definitions that are now clearly misplaced. They are instead definitions of FaaS:

  • Low latency scaling (on the order of a second or less to invoke computation)
  • Managed runtime (Node.js, Java, Python, etc.)
  • Short-lived executions (5 minutes or less)
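Together, these constraints shape what FaaS code looks like: a single stateless handler invoked per event. Here is a minimal sketch in that style; the `handler` name and event shape are illustrative, not tied to any one provider:

```python
# Minimal FaaS-style handler sketch. The runtime (e.g. Python) is
# managed by the provider, and the function must complete within the
# platform's execution limit, so all work happens per-invocation.
import json

def handler(event, context=None):
    # A short-lived, stateless computation: read the event, respond.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the handler holds no state between invocations, the platform can spin up or tear down instances freely, which is what enables the sub-second scaling above.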

The new Docker invocation mechanisms now show how these are not applicable to all forms of “serverless” computing. Serverless Docker has the following characteristics instead:

  • Medium latency scaling (on the order of minutes or less to invoke computation)
  • Complete control of runtime environment
  • Unlimited execution duration
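By contrast, a serverless Docker workload is free to run as a long-lived process. A sketch of a container entrypoint that drains a job queue (here `fetch_job` and `process` are hypothetical stand-ins for a real queue client and business logic):

```python
# Sketch of a long-running container entrypoint: a worker loop that
# polls for jobs indefinitely. Unlike a FaaS handler, nothing forces
# this process to exit after a few minutes.

def fetch_job(queue):
    # Stand-in for a real queue client; pops the next pending job.
    return queue.pop(0) if queue else None

def process(job):
    # Stand-in for real business logic.
    return f"done: {job}"

def run_worker(queue, max_iterations=None):
    # In a real container this loop runs until the task is stopped;
    # max_iterations exists only so the sketch can terminate cleanly.
    results, i = [], 0
    while max_iterations is None or i < max_iterations:
        job = fetch_job(queue)
        if job is None:
            break  # a real worker would sleep here and poll again
        results.append(process(job))
        i += 1
    return results
```

The container image also controls the entire runtime environment (base OS, language version, native dependencies), which is what "complete control of runtime environment" buys you over a managed FaaS runtime.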

These differences help us determine where Docker fits in the serverless world.

How Does Docker Fit In Serverless?

Now that we have seen how Docker can be serverless and also how it differs from FaaS, we can make some generalizations about where to use FaaS and Docker in serverless applications:

Use Cases For Functions-as-a-Service

  • Low-latency, highly volatile workloads (e.g. API services, database side-effect computation, generic event handling)
  • Short-lived computations (FaaS is cheaper because of faster startup, which is reflected in the per-invocation costs)
  • Where the provided runtimes work (if the runtimes work for your application, let the service provider deal with maintaining them)

Use Cases For Docker

  • Background jobs (where invocation latency is not an issue)
  • Long-lived computations (execution duration is unlimited)
  • Custom runtime requirements

This categorization papers over a few cases and leaves a lot of gray area. For example, a serverless Docker application could still back a low-latency API service by spinning up multiple containers and load balancing across them. But having these gray areas is also helpful because it means that we now have two tools we can choose from to optimize for other concerns like cost.

Taking the low-latency API service example again, a decision could be made between a FaaS and a Docker backend based on the cost difference between the two. One could even imagine a future where base load for a highly volatile service is handled by a Docker-based backend, but peak demand is handled by a FaaS backend.

2018 Will Be Exciting For Serverless In All Forms

Given that it’s the beginning of the new year, it’s hard not to look forward and be excited about what this next year will bring in the serverless space. A little over three years after AWS Lambda was announced, it has become clear that building applications without worrying about servers is empowering. With Docker joining the fold, even more exciting possibilities open up for serverless.

Try Stackery For Free

Gain control and visibility of your serverless operations from architecture design to application deployment and infrastructure monitoring.