What's your Serverless Maturity Level?

Nate Taggart

In talking with hundreds of early-adopting companies in the serverless space, we've seen a clear pattern emerge in how organizations embrace serverless architectures. Whether you're just beginning to evaluate a FaaS (Function-as-a-Service) architecture, or have already transitioned large portions of your application to AWS Lambda (or a related FaaS offering), it can be helpful to get a sense of where your organization falls on the serverless maturity scale.

Evaluating

In almost every case we've seen, serverless adoption begins with individual developers or development teams investigating and building interest in AWS Lambda or Azure Functions. At this stage, organizations commonly have no formal plans to use these services, and awareness and visibility of the offerings across the organization are low. Typically, to advance past this stage, individual use cases need to be identified for initial serverless adoption as a proof of concept.

Rogue Developer

While this may not necessarily be an ideal stage for an organization, in our research we've found that the most common initial use of FaaS within an organization is championed by an individual developer, oftentimes without official support. The projects developers pick at this stage are frequently internal, low-visibility, inherently event-driven or workflow-based, and fairly low-risk. While this independent usage may not initially have widespread awareness, advancement to future levels of adoption is almost always driven by increasing the visibility of this early success. To drive your organization toward "official" usage of serverless infrastructure offerings, it can be beneficial to document this initial use case and build retroactive support for it.

Background Tasks

Almost uniformly, the first officially supported usage of FaaS products is some version of a background task. This may be a cron-like job that runs periodically (like data cleanup), an on-demand implementation of a little-used service (maybe an executive report), or an event- or workflow-driven process (a Slack bot, for example). In any event (unintentional pun), these are inherently non-mission-critical applications: they can sustain periodic failure, and they can ultimately be used to build confidence in serverless-style application development. There are generally two (related) criteria for success at this stage: 1) success and reliability of the application, and 2) codifying best practices for this technology approach. As your organization learns how best to develop and consume FaaS services, the overall stability of these early efforts will improve. Over time, particularly with the implementation of serverless tools like Stackery and IOPipe, your team will be able to demonstrate the same reliability on Lambda implementations as on legacy server-based architectures.
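To make this concrete, here's a minimal sketch of the cron-style cleanup case: a Python Lambda function invoked on a schedule (for example, by an EventBridge rule) that deletes expired records from a DynamoDB table. The table, key, and attribute names are hypothetical, not taken from any real project.

```python
# Hypothetical scheduled cleanup task: a scheduled rule (e.g., EventBridge)
# invokes this function periodically, and it deletes expired items from a
# DynamoDB table. Table and attribute names are illustrative only.
import os
import time

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("CLEANUP_TABLE", "session-store"))


def handler(event, context):
    """Remove items whose 'expires_at' timestamp is in the past."""
    now = int(time.time())

    # Scan for expired items (fine for a small table; a real job would paginate).
    expired = table.scan(
        FilterExpression="expires_at < :now",
        ExpressionAttributeValues={":now": now},
    )["Items"]

    # Batch-delete whatever we found.
    with table.batch_writer() as batch:
        for item in expired:
            batch.delete_item(Key={"session_id": item["session_id"]})

    return {"deleted": len(expired)}
```

If a run of this job fails, nothing customer-facing breaks and the next scheduled run picks up the slack, which is exactly why workloads like this are a safe place to build confidence.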

Internal APIs

After success with some non-critical workloads, companies tend to increase their FaaS usage by pioneering their first API use cases, which are typically internal (non-customer-facing) APIs. This internal model helps drive down costs on these API services without sacrificing reliability or increasing complexity in server or cluster management. At this stage, if you haven't adopted tools for building, deploying, and monitoring, like what Stackery offers, it becomes increasingly important to adopt a solution. As the complexity of these services increases, particularly because serverless applications are frequently "decomposed" down to individual function deployments, you may end up with a process failure that risks derailing serverless adoption and all of the accompanying benefits. For example, tasks like package/dependency management, secret management, deployment automation, and roles/permissions are potential points of risk for human error; tools can help mitigate these risks and increase the chance of success. Ultimately, the success or failure of your serverless architecture at the "internal" stage will determine whether your organization comes to embrace serverless broadly, and serverless operations tools will increase your odds of success.
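As one illustration of this stage, here's a minimal sketch of an internal API handler behind API Gateway's Lambda proxy integration, serving a single lookup route. The route, table, and field names are assumptions for illustration, not a prescribed design.

```python
# A minimal sketch of an internal API endpoint: API Gateway (Lambda proxy
# integration) routes GET /items/{item_id} to this handler, which looks the
# record up in DynamoDB. All names here are hypothetical.
import json
import os

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(os.environ.get("INVENTORY_TABLE", "internal-inventory"))


def handler(event, context):
    item_id = (event.get("pathParameters") or {}).get("item_id")
    if not item_id:
        return {"statusCode": 400, "body": json.dumps({"error": "item_id is required"})}

    result = table.get_item(Key={"item_id": item_id})
    if "Item" not in result:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        # default=str handles DynamoDB's Decimal values.
        "body": json.dumps(result["Item"], default=str),
    }
```

Even an endpoint this small touches the risk areas above: it needs the right IAM read permission on the table, its dependencies packaged consistently, and a repeatable way to deploy the function and the API route together, which is where tooling earns its keep.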

Customer-facing Services

Once you're building externally-facing serverless applications, you've almost certainly developed broad support throughout the organization, realized some of the tremendous benefits of serverless, and built policies and best practices that make serverless development and management as efficient and reliable as your legacy architecture, if not more so. At this point, your major blockers to further adoption will probably be driven by your use cases and the current state of the underlying serverless offerings, like AWS Lambda. These products are still constrained in compute resources and compute types (no GPU support, for example), as well as memory and execution time. For ultra-high-performance applications, cold starts or noisy neighbors could also be problematic (although these can mostly be mitigated). Amazon and Microsoft have a proven track record of improving their offerings over time, and while it may be a waiting game, we would expect these issues to be resolved by the providers.
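For the cold-start concern specifically, one widely used mitigation is to pay initialization costs once per container rather than once per invocation. A hedged sketch, with illustrative bucket and key names:

```python
# Cold-start mitigation sketch: expensive setup (SDK clients, config fetches,
# connection handles) happens at module import time, so only the first
# invocation in a container pays for it. Bucket and key names are hypothetical.
import os

import boto3

# Created once per container and reused across warm invocations.
s3 = boto3.client("s3")
CONFIG_BUCKET = os.environ.get("CONFIG_BUCKET", "example-config-bucket")


def handler(event, context):
    # Warm invocations reuse the client and its open connections.
    obj = s3.get_object(Bucket=CONFIG_BUCKET, Key="feature-flags.json")
    return {"statusCode": 200, "body": obj["Body"].read().decode("utf-8")}
```

Providers also offer knobs such as provisioned concurrency for latency-sensitive paths, but moving initialization out of the handler is the cheapest first step.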

Eventually, we expect that many (even most) companies will be developing customer-centric services backed by serverless products. With the early success of the technology and the increasing proliferation of specialized tooling, it seems nearly inevitable that serverless will become the de facto standard as the approach gains broader awareness and as the community develops more standardized methodologies.
