Stacks on Stacks

The Serverless Ecosystem Blog by Stackery.

Posts on DevOps

Disaster Recovery in a Serverless World - Part 1

Nuatu Tseggai | July 12, 2018

This is part one of a multi-part blog series. In this post we’ll discuss Disaster Recovery planning when building serverless applications. In future posts we’ll highlight Disaster Recovery exercises and the engineering preparation necessary for success.

‘Eat Your Own Dog Food’

Nearly all of Stackery's backend microservices run on AWS Lambda compute. That's not shocking - after all, the entire purpose of our business is to build a cohesive set of tools that enable teams to build production-ready serverless applications. It's only fitting that we eat our own dog food and use serverless technologies wherever possible.

Which leads to the central question of this blog post: how should a team reason about Disaster Recovery when they build software atop serverless technologies?

(Spoiler alert) Serverless doesn't equate to a free lunch! The important bits of DR revolve around establishing a cohesive plan and exercising it regularly - all of which remains important when utilizing serverless infrastructure. But there's good news: serverless architectures free engineers from the minutiae of administering a platform, leaving them more time to focus on higher-level concerns such as Disaster Recovery, Security, and Technical Debt.

Before we get too far - let’s define Disaster Recovery (DR). In simple terms, it’s a documented plan that aims to minimize downtime and data loss in the event of a disaster. The term is most often used in the context of yearly audit-related exercises wherein organizations demonstrate compliance in order to meet regulatory requirements. It’s also very familiar to those who are charged with developing IT capabilities for mission-critical functions of the government.

Many of us at Stackery used to work at New Relic during a particularly explosive growth stage of the business. We were exposed to DR exercises that took months of work (from dozens of managers/engineers) to reach the objectives set by the business. That experience influenced us as we embarked on developing a DR plan for Stackery, but we still needed to work through a multitude of questions specific to our architecture.

What would happen to our product(s) if any of the following services running in AWS region XYZ experienced an outage? (S3, RDS, Dynamo, Cognito, Lambda, Fargate, etc.)

  • How long before we fully recover?
  • How much data loss would we incur?
  • What process would we follow to recover?
  • How would we communicate status and next steps internally?
  • How would we communicate status and next steps to customers?

These questions quickly reminded us that DR planning requires direction from the business. In our case, we looked to our CEO, CTO, and VP of Engineering to set two goals:

  1. Recovery Time Objective (RTO): the length of time it would take us to swap to a second, hot production service in a separate AWS region.
  2. Recovery Point Objective (RPO): the acceptable amount of data loss measured in time.

To determine these goals, our executives had to weigh the financial impact to the business during downtime (loss of business plus damage to our reputation). Not surprisingly, the dimensions of this decision will be unique to every business. It's important that your executive team understands why they should own the definition of the RTO and RPO, and that they stay engaged in the ongoing development and execution of the DR plan. It's a living plan and as such will require improvements as the company evolves.

Based on our experience, we developed the outline below, which you may find helpful as your team develops a DR plan.

Disaster Recovery Plan

  1. Goals
  2. Process
    • Initiating DR
    • Assigning Roles
    • Incident Commander
    • Technical Lead
  3. Communication
    • Engineering Coordination
    • Leadership Updates
  4. Recovery Steps
  5. Continuous Improvement
    • TODO
    • Lessons Learned
    • Frequency

Goals:

This section describes our RTO and RPO (see above).

Process:

This section describes the process to follow in the event that it becomes necessary to initiate Disaster Recovery. This is the same process followed during Disaster Recovery Exercises.

Initiating DR:

The Disaster Recovery procedure may be initiated in the event of a major prolonged outage upon the CEO's request. If the CEO is unavailable and cannot be reached, DR can be initiated by another member of the executive team.

Assigning Roles:

Roles will be assigned by the executive initiating the DR process.

Incident Commander (IC):

The Incident Commander is responsible for coordinating the operational response and communicating status to stakeholders. The IC is responsible for designating a Technical Lead and engaging additional employees necessary for the response. During the DR process the IC will send hourly email updates to the executive team. These updates will include: the current status of the DR process, a timeline of events since DR was initiated, and any requests for help or additional resources.

Technical Lead (TL):

The Technical Lead has primary responsibility for driving the DR process towards a successful technical resolution. The IC will solicit status information and requests for additional assistance from the TL.

Communication:

Communication is critical to an effective and well-coordinated response. The following communication channels should be used:

Engineering Coordination:

The IC, TL, and engineers directly involved with the response will communicate in the #disaster-recovery-XYZ Slack channel. In the event that Slack is unavailable, the IC will initiate a Google Hangout and communicate connection instructions via email and cell phone.

Leadership Updates:

The IC will provide hourly updates to the executive team via email. See details in separate Incident Commander doc.

Recovery Steps:

High-level steps to be performed during DR; a minimal sketch of the DNS swap in step 5 follows the list.

  1. Update Status Page
  2. Restore Datastore(s) in prodY from latest prodX
    • DB
    • Authentication
    • Authorization
    • Cache
    • Blob Storage
  3. Restore backend microservices
    • Bootstrap services with particular focus on upstream and downstream dependencies
  4. Swap CloudFront distribution(s)
  5. Swap API endpoint(s) via DNS
    • Update DNS records to point to prodY API endpoints
  6. Verify recovery is complete
    • Redeploy stack from user account to verify service level
  7. Update Status Page
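
As an illustration of step 5, here's a minimal sketch of the DNS swap using boto3 and Route 53. The hosted zone ID, record name, and prodY endpoint are hypothetical placeholders; real records, TTLs, and routing policies (weighted or failover records, for instance) will differ per architecture.

import boto3

route53 = boto3.client('route53')

def swap_api_endpoint(hosted_zone_id, record_name, prod_y_endpoint):
    # UPSERT the record so the public API name points at the prodY endpoint
    return route53.change_resource_record_sets(
        HostedZoneId=hosted_zone_id,
        ChangeBatch={
            'Comment': 'DR: swap API endpoint to standby region',
            'Changes': [{
                'Action': 'UPSERT',
                'ResourceRecordSet': {
                    'Name': record_name,  # e.g. api.example.com (hypothetical)
                    'Type': 'CNAME',
                    'TTL': 60,
                    'ResourceRecords': [{'Value': prod_y_endpoint}]
                }
            }]
        }
    )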

Continuous Improvement:

This section captures TODO action items and next steps, lessons learned, and the frequency with which we'll revisit the plan and accomplish the TODO action items.

In the next post, we’ll dig into the work it takes to prepare for and perform DR exercises. To learn how Stackery can make building microservices on Lambda manageable and efficient, contact our sales team or get a free trial today.

Self Healing Serverless Applications - Part 3

Nate Taggart | July 04, 2018

This blog post is based on a presentation I gave at Glue Conference 2018. The original slides: Self-Healing Serverless Applications – GlueCon 2018. View the rest here. Parts: 1, 2, 3

This is part three of a three-part blog series. In the first post we covered some of the common failure scenarios you may face when building serverless applications. In the second post we introduced the principles behind self-healing application design. In this post, we'll apply these principles with solutions to real-world scenarios.

Self-Healing Serverless Applications

We’ve been covering the fundamental principles of self-healing applications and the underlying errors that serverless applications commonly face. Now it’s time to apply these principles to the real-world scenarios that the errors will raise. Let’s step through five common problems together.

We’ll solve:

  • Uncaught Exceptions
  • Upstream Bottlenecks
  • Timeouts
  • Stuck stream processing
  • and, Downstream Bottlenecks

Uncaught Exceptions

Uncaught exceptions are unhandled errors in your application code. While this problem isn’t unique to Lambda, diagnosing it in a serverless application can be a little trickier because your compute instance is ultimately ephemeral and will shut down upon an uncaught exception. What we want to do is detect that an exception is about to occur, and either remediate or collect diagnostic information at runtime before the Lambda instance is gone. After we’ve handled the error, we can simply re-throw it to avoid corrupting the behavior of the application. To do that, we’ll use three of the principles we previously introduced: introducing universal instrumentation, collecting event-centric diagnostics, and giving everyone visibility.

At an abstract level, it's relatively easy to instrument a function to catch errors (we could simply wrap our code in a try/except block). While this solution works just fine for a single function, it doesn't easily scale across an entire application or organization. Do you really want to be responsible for ensuring that every single function is individually instrumented?

The better approach is to use “universal instrumentation.” We’ll create a generic handler which will invoke our real target function and always use it as the top level handler for every piece of code. Here’s an example:

def genericHandler(message, context):
	try:
		return targetHandler(message, context)
	except Exception as error:
		# Collect event diagnostics
		# Possibly re-route the event or otherwise remediate the transaction
		raise error

As you can see, we have in fact just run our function through a try/except clause, with the benefit of now being able to invoke any function with one standard piece of instrumentation code. This means that every function will now behave consistently (consistent logs, metrics, etc.) across our entire stack.

This instrumentation also allows us to collect event-centric diagnostics. Keep in mind that by default, a Lambda exception will give you a log with a stack trace, but no information on the event which led to this exception. It’s much easier to debug and improve application health with relevant event data. And now that we have centralized logs, events, and metrics, it’s much easier for everyone on the team to have visibility into the health of the entire application.

Note: you’ll want to be careful that you’re not logging any sensitive data when you capture events.

Upstream Bottleneck

An upstream bottleneck occurs when a service calling into Lambda hits a scaling limit, even though Lambda isn’t being throttled. A classic example of this is API Gateway reaching throughput limits and failing to invoke downstream Lambdas on every request.

The key principles to focus on here are: identifying service limits, using self-throttling, and notifying a human.

It’s pretty straightforward to identify service limits, and if you haven’t done this you really should. Know what your throughput limits are for each of the AWS services you’re using and set alarms on throughput metrics before you hit capacity (notify a human!).

The more sophisticated, self-healing approach comes into play when you choose to throttle yourself before you get throttled by AWS. In the case of an API Gateway limit, you (or someone in your organization) may already control the requests coming to this Gateway. If, for example, you have a front-end application backed by API Gateway and Lambda, you could introduce exponential backoff logic that kicks in whenever you have backend errors. Pay particular attention to HTTP 429 Too Many Requests responses, which is (generally) what API Gateway will return when it's being throttled. I say "generally" because in practice this is actually inconsistent and it will sometimes return 5XX error codes as well. In any event, if you are able to control the volume of requests (which may come from another service tier), you can help your application to self-heal and fail gracefully.
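
Here's a minimal sketch of that backoff idea as a Python client using the requests library; the URL, retry count, and base delay are hypothetical, and a browser front end would implement the same logic in JavaScript.

import time
import requests

def call_backend(url, max_retries=5, base_delay=0.5):
    # Back off exponentially on throttling (429) and server errors (5XX)
    for attempt in range(max_retries):
        response = requests.get(url)
        if response.status_code < 400:
            return response
        if response.status_code == 429 or response.status_code >= 500:
            time.sleep(base_delay * (2 ** attempt))
            continue
        response.raise_for_status()  # other client errors aren't worth retrying
    raise RuntimeError('Backend still failing after {} attempts'.format(max_retries))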

Timeouts

Sometimes Lambdas time out, which can be a particularly painful and expensive kind of error, since Lambdas will automatically retry multiple times in many cases, driving up the active compute time. When a timeout occurs, the Lambda will ultimately fail without capturing much in terms of diagnostics. No event data, no stack trace – just a timeout error log. Fortunately, we can handle these errors pretty similarly to uncaught exceptions. We’ll use the principles of self-throttle, universal instrumentation, and considering alternative resource types.

The instrumentation for this is a little more complex, but stick with me:

import threading

class originalHandlerThread(threading.Thread):
	# Wraps the real target handler so it can run in a background thread
	def __init__(self, message, context):
		super().__init__()
		self.message = message
		self.context = context
		self.result = None

	def run(self):
		# Run the real target function and capture its return value
		self.result = targetHandler(self.message, self.context)

def genericHandler(message, context):
	# Detect when the Lambda will time out and set a timer for 1 second sooner
	timeout_duration = context.get_remaining_time_in_millis() - 1000

	# Invoke the original handler in a separate thread and set our new stricter timeout limit
	handler_thread = originalHandlerThread(message, context)
	handler_thread.start()
	handler_thread.join(timeout_duration / 1000)

	# If the handler is still running after the join, a timeout is imminent
	if handler_thread.is_alive():
		error = TimeoutError('Function timed out')

		# Collect event diagnostics here

		raise error
	return handler_thread.result

This universal instrumentation is essentially self-throttling by forcing us to conform to a slightly stricter timeout limit. In this way, we’re able to detect an imminent timeout while the Lambda is still alive and can extract meaningful diagnostic data to retroactively diagnose the issue. This instrumentation can, of course, be mixed with our error handling logic.

If this seems a bit complex, you might like using Stackery: we automatically provide instrumentation for all of our customers without requiring *any* code modification. All of these best practices are just built in.

Finally, sometimes we should be considering other resource types. Fargate is another on-demand compute option which can run longer and with higher resource limits than Lambda. It can still be triggered by events and is a better fit for certain workloads.

Stream Processing Gets “Stuck”

When Lambda is reading off of a Kinesis stream, failing invocations can cause the stream to get stuck (more accurately: just that shard). This is because Lambda will continue to retry the failing message until it succeeds and will not get to the workload behind the stuck message until it's handled. This introduces an opportunity for some of the other self-healing principles: reroute and unblock, automate known solutions, and consider alternative resource types.

Ultimately, you’re going to need to remove the stuck message. Initially, you might be doing this manually. That will work if this is a one-off issue, but issues rarely are. The ideal solution here is to automate the process of rerouting failures and unblocking the rest of the workload.

The approach that we use is to build a simple state machine. The logic is very straightforward: is this the first time we’ve seen this message? If so, log it. If not, this is a recurring failure and we need to move it out of the way. You might simply “pass” on the message, if your workload is fairly fault tolerant. If it’s critical, though, you could move it to a dedicated “failed messages” stream for someone to investigate or possibly to route through a separate service.
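
Here's a minimal sketch of that state machine inside a Kinesis-triggered handler. The DynamoDB tracking table, the failed-messages stream, and the process() function are all assumptions; what counts as "seen before" will depend on your workload.

import base64
import boto3

dynamodb = boto3.resource('dynamodb')
kinesis = boto3.client('kinesis')
seen_table = dynamodb.Table('message-failure-attempts')  # hypothetical tracking table

def handler(event, context):
    for record in event['Records']:
        sequence_number = record['kinesis']['sequenceNumber']
        previously_seen = seen_table.get_item(Key={'sequenceNumber': sequence_number}).get('Item')
        try:
            process(record)  # your real business logic (assumed to exist)
        except Exception:
            if previously_seen:
                # Recurring failure: reroute it and move on so the shard unblocks
                kinesis.put_record(
                    StreamName='failed-messages',  # hypothetical dead-letter stream
                    Data=base64.b64decode(record['kinesis']['data']),
                    PartitionKey=record['kinesis']['partitionKey']
                )
                continue
            # First failure: remember it and re-raise so Lambda retries normally
            seen_table.put_item(Item={'sequenceNumber': sequence_number})
            raise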

This is where alternative resources come into play again. Maybe the Lambda is failing because it’s timing out (good thing you introduced universal instrumentation!). Maybe sending your “failed messages” stream to a Fargate instance solves your problem. You might also want to investigate the similar but actually different ways that Kinesis, SQS, and SNS work and make sure you’re choosing the right tool for the job.

Downstream Bottleneck

We talked about upstream bottlenecks where Lambda is failing to be invoked, but you can also hit a case where Lambda is scaling up faster than its dependencies, causing downstream bottlenecks. A classic example of this is Lambda depleting the connection pool for an RDS instance.

You might be surprised to learn that Lambda holds onto its database connections, even while the container is cached between invocations, unless you explicitly close the connections in your code. So do that. Easy enough. But you're also going to want to pay attention to some of our self-healing key principles again: identify service limits, automate known solutions, and give everyone visibility.
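
A minimal sketch of that advice, assuming a Postgres database and the psycopg2 driver (both assumptions here); the point is simply that the connection is always closed, so an idle, cached container isn't holding one open.

import os
import psycopg2

def handler(event, context):
    # Open per invocation and always close, so cached containers don't hold
    # connections and exhaust the database's connection pool
    connection = psycopg2.connect(os.environ['DATABASE_URL'])  # hypothetical env var
    try:
        with connection.cursor() as cursor:
            cursor.execute('SELECT 1')  # your real queries go here
            return cursor.fetchone()
    finally:
        connection.close()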

In theory, Lambda is (nearly, kind of, sort of) infinitely scalable. But the rest of your application (and the other resource tiers) aren’t. Know your service limits: how many connections can your database handle? Do you need to scale your database?

What makes this error class actually tricky, though, is that multiple services may have shared dependencies. You’re looking at a performance bottleneck thinking to yourself, “but I’m not putting that much load on the database…” This is an example of why it’s so important to have shared visibility. If your dependencies are shared, you need to understand not just your own load, but that of all of the other services hammering this resource. You’ll really want a monitoring solution that includes service maps and makes it clear how the various parts of your stack are related. That’s why, even though most of our customers work day-to-day from the Stackery CLI, the UI is still a meaningful part of the product.

The Case for Self-Healing

Before we conclude, I’d like to circle back and talk again about the importance of self-healing applications. Serverless is a powerful technology that outsources a lot of the undifferentiated heavy lifting of infrastructure management, but it requires a thoughtful approach to software development. As we add tools to accelerate the software lifecycle, we need to keep some focus on application health and resiliency. The “Self-Healing” philosophy is the approach that we’ve found which allows us to capture the velocity gains of serverless and unlock the upside of scalability, without sacrificing reliability or SLAs. If you’re taking serverless seriously, you should incorporate these techniques and champion them across your organization so that serverless becomes a mainstay technology in your stack.

Self Healing Serverless Applications - Part 2

Nate Taggart | June 18, 2018

This blog post is based on a presentation I gave at Glue Conference 2018. The original slides: Self-Healing Serverless Applications – GlueCon 2018. View the rest here. Parts: 1, 2, 3

This is part two of a three-part blog series. In the last post we covered some of the common failure scenarios you may face when building serverless applications. In this post we’ll introduce the principles behind self-healing application design. In the next post, we’ll apply these principles with solutions to real-world scenarios.

Learning to Fail

Before we dive into the solution-space, it’s worth introducing the defining principles for creating self-healing systems: plan for failure, standardize, and fail gracefully.

Plan for Failure

As an industry, we have a tendency to put a lot of time planning for success and relatively little time planning for failure. Think about it. How long ago did you first hear about Load Testing? Load testing is a great way to prepare for massive success! Ok, now when did you first hear about Chaos Engineering? Chaos Engineering is a great example of planning for failure. Or, at least, a great example of intentionally failing and learning from it. In any event, if you want to build resilient systems, you have to start planning to fail.

Planning for failure is not just a lofty ideal; there are easy, tangible steps that you can begin to take today:

  • Identify Service Limits: Remember the Lambda concurrency limits we covered in Part 1? You should know what yours are. You’re also going to want to know the limits on the other service tiers, like any databases, API Gateway, and event streams you have in your architecture. Where are you most likely to bottleneck first?
  • Use Self-Throttling: By the time you’re being throttled by Lambda, you’ve already ceded control of your application to AWS. You should handle capacity limits in your own application, while you still have options. I’ll show you how we do it, when we get to the solutions.
  • Consider Alternative Resource Types: I’m about as big a serverless fan as there is, but let’s be serious: it’s not the only tool in the tool chest. If you’re struggling with certain workloads, take a look at alternative AWS services. Fargate can be good for long-running tasks, and I’m always surprised how poorly understood the differences are between Kinesis vs. SQS vs. SNS. Choose wisely.

Standardize

One of the key advantages of serverless is the dramatic velocity it can enable for engineering orgs. In principle, this velocity gain comes from outsourcing the “undifferentiated heavy lifting” of infrastructure to the cloud vendor, and while that’s certainly true, a lot of the velocity in practice comes from individual teams self-serving their infrastructure needs and abandoning the central planning and controls that an expert operations team provides.

The good news is, it doesn’t need to be either-or. You can get the engineering velocity of serverless while retaining consistency across your application. To do this, you’ll need a centralized mechanism for building and delivering each release, managing multiple AWS accounts, multiple deployment targets, and all of the secrets management to do this securely. This means that your open source framework, which was an easy way to get started, just isn’t going to cut it anymore.

Whether you choose to plug in software like Stackery or roll your own internal tooling, you’re going to need to build a level of standardization across your serverless architecture. You’ll need standardized instrumentation so that you know that every function released by every engineer on every team has the same level of visibility into errors, metrics, and event diagnostics. You’re also going to want a centralized dashboard that can surface performance bottlenecks to the entire organization – which is more important than ever before since many serverless functions will distribute failures to other areas of the application. Once these basics are covered, you’ll probably want to review your IAM provisioning policies and make sure you have consistent tagging enforcement for inventory management and cost tracking.

Now, admittedly, this standardization need isn't the fun part of serverless development. That's why many enterprises are choosing to use a solution like Stackery to manage their serverless program. But even if standardization isn't exciting, it's critically important. If you want to build serverless into your company's standard tool chest, you're going to need it to be successful. To that end, you'll want to know that there's a single, consistent way to release or roll back your serverless applications. You'll want to ensure that you always have meaningful log and diagnostic data and that everyone is sending it to the same place so that in a crisis you'll know exactly what to do. Standardization will make your serverless projects reliable and resilient.

Fail Gracefully

We plan for failure and standardize serverless implementations so that when failure does happen we can handle it gracefully. This is where “self-healing” gets exciting. Our standardized instrumentation will help us identify bottlenecks automatically and we can automate our response in many instances.

One of the core ideas in failing gracefully is that small failures are preferable to large ones. With this in mind, we can oftentimes control our failures to minimize the impact, and we do this by rerouting and unblocking from failures. Say, for example, you have a Lambda reading off of a Kinesis stream and failing on a message. That failure is now holding up the entire Kinesis shard, so instead of having one failed transaction you're now failing to process a significant workload. We can, instead, allow that one blocking transaction to fail by removing it from the stream and, rather than processing it normally, simply logging it as a failed transaction and getting it out of the way. A small failure, but better than a complete system failure in most cases.

Finally, while automating solutions is always the goal with self-healing serverless development, we can’t ignore the human element. Whenever you’re taking an automated action (like moving that failing Kinesis message), you should be notifying a human. Intelligent notification and visibility is great, but it’s even better when that notification comes with all of the diagnostic details including the actual event that failed. This allows your team to quickly reproduce and debug the issue and turn around a fix as quickly as possible.

See it in action

In our final post we’ll talk through five real-world use cases and show how these principles apply to common failure patterns in serverless architectures.

We’ll solve: - Uncaught Exceptions - Upstream Bottlenecks - Timeouts - Stuck steam processing - and, Downstream Bottlenecks

If you want to start playing with self-healing serverless concepts right away, go start a free 60 day evaluation of Stackery. You’ll get all of these best practices built-in.

Self Healing Serverless Applications - Part 1

Nate Taggart | June 07, 2018

This blog post is based on a presentation I gave at Glue Conference 2018. The original slides: Self-Healing Serverless Applications – GlueCon 2018. View the rest here. Parts: 1, 2, 3

This is part one of a multi-part blog series. In this post we’ll discuss some of the common failure scenarios you may face when building serverless applications. In future posts we’ll highlight solutions based on real-world scenarios.

What to expect when you’re not expecting

If you’ve been swept up in the serverless hype, you’re not alone. Frankly, serverless applications have a lot of advantages, and my bet is that (despite the stupid name) “serverless” is the next major wave of cloud infrastructure and a technology you should be betting heavily on.

That said, like all new technologies, serverless doesn’t always live up to the hype, and separating fact from fiction is important. At first blush, serverless seems to promise infinite and immediate scaling, high availability, and little-to-no configuration. The truth is that while serverless does offload a lot of the “undifferentiated heavy lifting” of infrastructure management, there are still challenges with managing application health and architecting for scale. So, let’s take a look at some of the common failure modes for AWS Lambda-based applications.

Runtime failures

The first major category of failures is what I classify as "runtime" or application errors. I assume it's no surprise to you that if you introduce bugs in your application, Lambda doesn't solve that problem. That said, you may be very surprised to learn that when you throw an uncaught exception, Lambda will behave differently depending on how you architect your application.

Let’s briefly touch on three common runtime failures:

  • Uncaught Exceptions: any unhandled exception (error) in your application.
  • Timeouts: your code doesn’t complete within the maximum execution time.
  • Bad State: a malformed message or improperly provided state causes unexpected behavior.

Uncaught Exceptions

When Lambda is running synchronously (in a request-response loop, for example), the Lambda will return an error to the caller and will log an error message and stack trace to CloudWatch, which is probably what you would expect. It's different, though, when a Lambda is called asynchronously, as might be the case with a background task. In that event, when an error is thrown, Lambda will automatically retry the invocation (twice, by default). In fact, it will even retry indefinitely when reading off of a stream, like with Kinesis. In any case, when a Lambda fails in an asynchronous architecture, the caller is unaware of the error – although there is still a log record sent to CloudWatch with the error message and stack trace.

Timeouts

Sometimes Lambda will fail to complete within the configured maximum execution time, which by default is 3 seconds. In this case, it will behave like an uncaught exception, with the caveat that you won’t get a stack trace out in the logs and the error message will be for the timeout and not for the potentially underlying application issue, if there is one. Using only the default behavior, it can be tricky to diagnose why Lambdas are timing out unexpectedly.

Bad State

Since serverless function invocations are stateless, state must be supplied on or after invocation. This means that you may be passing state data through input messages or by connecting to a database to retrieve state when the function starts. In either scenario, it's possible to invoke a function without properly supplying the state it needs to execute correctly. The trick here is that these "bad state" situations can either fail fairly noisily (as an uncaught exception) or fail silently without raising any alarms. When these errors occur silently, they can be nearly impossible to diagnose; sometimes it's hard to even notice that you have a problem. The major risk is that the events which triggered these functions may have expired, and since the state may not have been stored correctly, you may have permanent data loss.

Scaling failures

The other class of failures worth discussing are scaling failures. If we think of runtime errors as application-layer problems, we can think of scaling failures as infrastructure-layer problems.

The three common scaling failures are:

  • Concurrency Limits: when Lambda can’t scale high enough.
  • Spawn Limits: when Lambda can’t scale fast enough.
  • Bottlenecking: when your architecture isn’t scaling as much as Lambda.

Concurrency Limits

You can understand why Amazon doesn’t exactly advertise their concurrency limits, but there are, in fact, some real advantages to having limits in place. First, concurrency limits are account limits that determine how many simultaneously running instances you can have of your Lambda functions. These limits can really save you in scenarios where you accidentally trigger an unexpected and massive workload. Sure, you may quickly run up a tidy little bill, but at least you’ll hit a cap and have time to react before things get truly out of hand. Of course, the flipside to this is that your application won’t really scale “infinitely,” and if you’re not careful you could hit your limits and throttle your real traffic. No bueno.

For synchronous architectures, these Lambdas will simply fail to be invoked without any retry. If you're invoking your Lambdas asynchronously, like reading off of a stream, the Lambdas will fail to invoke initially, but will resume once your concurrency drops below the limit. You may experience some performance bottlenecks in this case, but eventually the workload should catch up. It's worth noting that, by contacting AWS, you can usually get your limits raised if needed.
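
As a quick illustration of knowing where you stand, here's a minimal boto3 sketch that reads the account-level concurrency limit and reserves concurrency for one function; the function name and values are hypothetical. Note that reserving concurrency both guarantees capacity for that function and caps it.

import boto3

lambda_client = boto3.client('lambda')

# Account-wide concurrency limit and how much of it is still unreserved
settings = lambda_client.get_account_settings()
print(settings['AccountLimit']['ConcurrentExecutions'])
print(settings['AccountLimit']['UnreservedConcurrentExecutions'])

# Reserve (and cap) concurrency for one function so a runaway workload
# can't consume the entire account limit
lambda_client.put_function_concurrency(
    FunctionName='my-background-worker',  # hypothetical function name
    ReservedConcurrentExecutions=50
)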

Spawn Limits

While most developers with production Lambda workloads have probably heard of concurrency limits, in my experience very few know about spawn limits. Spawn limits are account limits on the rate at which new Lambda instances can be spawned. This can be tricky to identify because if you glance at your throughput metrics you may not even be close to your concurrency limit, but you could still be throttling traffic.

The default behavior for spawn limits matches concurrency limits, but again, it may be especially challenging to identify and diagnose spawn limit throttling. Spawn limits are also very poorly documented and, to my knowledge, it’s not possible to have these limits raised in any way.

Bottlenecking

The final scaling challenge involves managing your overall architecture. Even when Lambda scales perfectly (which, in fairness, is most of the time!), you must design your other service tiers to scale as well or you may experience upstream or downstream bottlenecks. In an upstream bottleneck, like when you hit throughput limits in API Gateway, your Lambdas may fail to invoke. In this case, you won't get any Lambda logs (they really didn't invoke), so you'll have to be paying attention to other metrics to detect this. It's also possible to create downstream bottlenecks. One way this can happen is when your Lambdas scale up but deplete the connection pool for a downstream database. These kinds of problems can behave like uncaught exceptions, lead to timeouts, or distribute failures to other functions and services.

Introducing Self-Healing Serverless Applications

The solution to all of this is to build resiliency with “Self-Healing” Serverless Applications. This is an approach for architecting applications for high-resiliency and for automated error resolution.

In our next post, we’ll dig into the three design principles for self-healing systems:

  • Plan for Failure
  • Standardize
  • Fail Gracefully

We’ll also learn to apply these principles to real-world scenarios that you’re likely to encounter as you embrace serverless architecture patterns. Be sure to watch for the next post!

Custom CloudFormation Resources: Real Ultimate Power

Chase Douglas | May 24, 2018

my ninja friend mark

Lately, I’ve found CloudFormation custom resources to be supremely helpful for many use cases. I actually wanted to write a post mimicking Real Ultimate Power:

Hi, this post is all about CloudFormation custom resources, REAL CUSTOM RESOURCES. This post is awesome. My name is Chase and I can’t stop thinking about custom resources. These things are cool; and by cool, I mean totally sweet.

Trust me, it would have been hilarious, but rather than spend a whole post on a meme that’s past its prime let’s take a look at the real reasons why custom resources are so powerful!

an awesome ninja

What Are Custom Resources?

Custom resources are virtual CloudFormation resources that can invoke AWS Lambda functions. Inside the Lambda function you have access to the properties of the custom resource (which can include information about other resources in the same CloudFormation stack by way of Ref and Fn::GetAtt functions). The function can then do anything in the world as long as it (or another resource it invokes) reports success or failure back to CloudFormation within one hour. In the response to CloudFormation, the custom resource can provide data that can be referenced from other resources within the same stack.
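
To make the moving parts concrete, here's a minimal sketch of a Lambda function backing a custom resource. It assumes the function is defined inline in the template, where AWS's cfnresponse helper module is available; otherwise you'd PUT the same JSON payload to the ResponseURL in the event yourself. The returned data key is hypothetical.

import cfnresponse

def handler(event, context):
    # event['RequestType'] is 'Create', 'Update', or 'Delete';
    # event['ResourceProperties'] carries the properties set on the custom resource
    try:
        if event['RequestType'] == 'Delete':
            # Clean up whatever this resource manages, then report success
            cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
            return

        # Do the real work here: provision something, run a check, etc.
        outputs = {'ExampleOutput': 'value'}  # hypothetical data for Fn::GetAtt in other resources

        cfnresponse.send(event, context, cfnresponse.SUCCESS, outputs)
    except Exception:
        # Always report failure so the stack doesn't hang waiting for a response
        cfnresponse.send(event, context, cfnresponse.FAILED, {})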

another awesome ninja

What Can I Do With Custom Resources?

Custom resources are such a fundamental building block that the full range of use cases they enable isn’t obvious at first glance. Because a custom resource can be invoked once or on every deployment, it’s a powerful mechanism for lifecycle management of many resources. Here are a few examples:

You could even use custom resources to enable post-provisioning smoke/verification testing:

  1. A custom resource is “updated” as the last resource of a deployment (this is achieved by adding every other resource in the stack to its DependsOn property)
  2. The Lambda function backing the custom resource triggers smoke tests to run, then returns success or failure to CloudFormation
  3. If a failure occurs, CloudFormation automatically rolls back the deployment

Honestly, while I have begun using custom resources for many use cases, I discover new use cases all the time. I feel like I have hardly scratched the surface of what’s possible through custom resources.

And that’s what I call REAL Ultimate Power!!!!!!!!!!!!!!!!!!

more awesome ninja

Fargate and Cucumber-js: A Review

Stephanie Baum | April 16, 2018

Lately, here at Stackery, as we’ve begun shipping features more rapidly into the product, we’ve also been shifting some of our focus towards reliability and integration testing. I decided to try out AWS Fargate for UI integration testing using BDD and Cucumber-js in a day-long experimental POC. Cucumber is a behavior-driven development (BDD) testing framework with test cases written in a language called Gherkin that focuses specifically on user features. AWS Fargate is a recently released abstraction on top of ECS services that gets rid of managing EC2 instances. These are my conclusions:

1. Fargate is awesome. Why would you not use Fargate?

If you’re configuring a Fargate task via the AWS UI, it’s somewhat confusing and clumsy. With Stackery, you can configure Fargate while avoiding the pain of the AWS UI entirely. Communication from AWS Lambda to a Fargate task is the same as it would be for a normal ECS service, so moving existing ECS clusters/services to Fargate is straightforward application logic-wise. Here’s a simplified code snippet; dockerTaskPort refers to the conveniently provided Stackery Port environment variable. See our docs for the Docker Task node for more information.

  const repoName = `cross-region-us-east`;
  const browserCiRepo = `https://${token}@github.com/sbaum1994/${repoName}.git`;

  const dockerCommands = [
    `echo 'Running node index.js'`,
    `node index.js`
  ];

  const env = {
    ENV_VAR: 'value'
  };

  let dockerCommand = ['/bin/bash', '-c', dockerCommands.join('; ')];

  const params = {
    taskDefinition: dockerTaskPort.taskDefinitionId,
    overrides: {
      containerOverrides: [
        {
          name: '0'
        }
      ]
    },
    launchType: 'FARGATE'
  };

  params.networkConfiguration = {
    awsvpcConfiguration: {
      subnets: dockerTaskPort.vpcSubnets.split(','),
      assignPublicIp: (dockerTaskPort.assignPublicIPAddress ? 'ENABLED' : 'DISABLED')
    }
  };

  params.overrides.containerOverrides[0].command = dockerCommand;

  params.overrides.containerOverrides[0].environment = Object.keys(env).map((name) => {
    return {name, value: env[name]};
  });

  const ecs = new AWS.ECS({ region: process.env.AWS_REGION });
  return ecs.runTask(params)...

It’s a nice plus that there are no EC2 configurations to worry about, and it also simplifies scaling. In the past we’ve had to use an ECS cluster and service for CI when the integration testing was too long-running for AWS Lambda. Here, my Fargate service just scales up and down nicely without my having to worry about configuration, bottlenecks, or cost.

Here’s my UI integration testing setup, triggered by an endpoint that specifies the environment to test.

With Fargate there is still technically an ECS cluster that needs configuring at setup, along with a load balancer and target group if you use them. You are still creating a task definition, containers, and a service. Stackery’s UI makes it easy to understand and configure, but if I were doing this on my own I’d still find it a PIA. Furthermore, I could see Fargate not being ideal in some use cases, since you can’t select the EC2 instance type.

Stackery UI setting up Fargate:

2. Cucumber is pretty cool too. BDD creates clear tests and transparent reporting.

I really like the abstraction Cucumber provides between the test definitions and underlying assertions/implementations. For this POC I created a simple “login.feature” file as follows:

Feature: Login
  In order to use Stackery
  As a single user
  I want to login to my Stackery account

  Background:
    Given I've navigated to the Stackery app in the browser
    And it has loaded successfully

  Scenario: Logging in as a user with a provider set up
    Given a test user account exists with a provider
    When I login with my username and password
    Then I'm taken to the "Stacks" page and see the text "Select a stack"
    And I see the "Stackery Stacks" section populated in the page
    And I see the "CloudFormation Stacks" section populated in the page

Each step maps to a function that uses Selenium WebDriver on headless Chrome under the hood to run the tests. I also pass in configuration that lets the tests know the test account username and password, which Stackery environment is being tested, and other settings such as timeouts. In my pipeline, I also added an S3 bucket to hold the latest Cucumber reporting results for visibility after a test finishes.

Report generated:

Overall, I think this can potentially be a great way to keep adding new features while maintaining existing ones and making sure everything is regression-tested on each merge. Furthermore, it’s clear, organized, and user-flow oriented, which can work well for a dashboard-style app like ours with multiple, repeatable, extensible steps (Create Environment, Deploy a Stack to Environment, etc.).

Why All The Monolithic Serverless API Hate?

Chase Douglas | March 21, 2018

A schism exists in serverless land. There are about equal numbers of people on two sides of an important architectural question: Should your APIs be backed by a monolithic function or by independent functions for each endpoint? To the serverless ecosystem’s credit, this schism hasn’t devolved into warring factions.

What fights are like in the serverless ecosystem

That said, some have rationalized splitting API functionality up into independent functions with arguments that boil down to a combination of the following:

  • We can now split functionality into nano-services like never before, so why not?
  • Justifications on how independent functions are actually easier to track, even though we all probably agree that most tools to track the explosion of serverless resources are still lacking in maturity*
  • Reasons why monolithic serverless functions are bad based on code architecture preferences or long cold start times due to inefficient implementations

(*Shameless plug for how Stackery can help with this)

Because the first two arguments are fairly weak, the main justification for APIs backed by independent functions is predicated on perceived problems with monolithic APIs more than on why APIs backed by independent functions are better. I’ve yet to see a good argument for why all APIs should be broken up into independent functions. Just because it’s possible for monolithic architectures to have problems doesn’t prove that swinging to the opposite extreme is ideal.

There are certainly limits to the monolithic approach, but these limits are more due to Conway’s law than technical architecture. Conway’s law suggests that a reasonable approach would be to split an API up into independent functions when there are separate teams managing different parts of the API.

Some worry that cold start times of a monolithic function may be worse due to needing to initialize all the components of the API at once. However, in every language there are reasonable strategies for reducing cold start time. For example, both Node.js and Python make lazy-loading of functionality easy, while Go has low cold start times simply because it’s a fully compiled executable.
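
As a tiny illustration of the lazy-loading strategy in Python, a heavy dependency can be imported only on the code path that needs it, so one endpoint's cold start doesn't pay for every endpoint's dependencies. The route and the pandas dependency are hypothetical.

def handler(event, context):
    route = event.get('path', '/')

    if route == '/reports':
        # Only the /reports endpoint pays the import cost of the heavy library
        import pandas  # hypothetical heavy dependency
        return build_report(pandas, event)

    # Other routes stay lightweight and start quickly
    return {'statusCode': 200, 'body': 'ok'}

def build_report(pandas_module, event):
    # Placeholder for the real report logic (assumed)
    return {'statusCode': 200, 'body': 'report generated'}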

This naturally leads to a broader discussion of the effect of function architecture on cold starts. Thankfully, most API use cases are forgiving of cold start latency, but it’s not something that can be ignored entirely. For example, we are all aware of the famous studies showing how latency can have an extraordinary impact on revenue for e-commerce. In general, almost everyone building a public serverless API should be monitoring for cold starts in some form.

Yan Cui (aka The Burning Monk and all around brilliant developer) recently wrote that monolithic serverless functions won’t help with cold starts. He makes a valid point that at scale, cold starts become noise. It’s true as well that we will never get rid of cold starts without help from the service providers. But the main thrust of his argument is that you will have the same number of cold starts whether you use a monolithic function or independent functions.

However, there is one incorrect assumption underlying the argument. Yan puts forward an API where every endpoint is hit with the same amount of traffic to analyze the effects of cold starts. In reality, almost no APIs have uniform traffic across all endpoints. Most API endpoint traffic follows a pattern similar to a power law. A few endpoints will have a high amount of traffic, but most will have much less. A few endpoints will have very little traffic.

When your API is backed by one monolithic function, the cold starts are spread out among all API requests in proportion to their throughput. In other words, the percentage of requests that trigger a cold start will be the same for all endpoints.

Now let’s examine the implications for APIs backed by independent functions. Imagine you have an endpoint that is hit with 1000 requests per hour, and one that is hit with 5 requests per hour. Depending on the cold start rate for your function, you may find that while you have very few cold starts for the high throughput endpoint, almost every request to the low-throughput function causes a cold start.

Maybe it is ok for your API to have different rates of cold starts per endpoint. But for many APIs this is problematic. Imagine your API has endpoints for both listing items and creating items, where listings are requested much more frequently than item creation requests are. You may have a service-level agreement on latency to be met by each endpoint. It would be better to spread cold starts across all endpoints in this scenario.

While it’s possible to use triggers to keep functions warm, if you have one monolithic function it is much easier to keep it warm than it is to keep many independent functions warm. And, contrary to popular belief, there are meaningful ways to warm functions even at high throughputs, though I’ll leave that discussion for another post.


All architectural choices come with trade-offs. But the choice between monolithic and independent API functions is a false dichotomy. There’s actually a broad spectrum between all functionality held in a single monolithic function and every three-line helper deployed as a separate microservice. Neither of these extremes is desirable in all cases, which is why the arguments against one or the other are often weak or nonsensical. What folks should be doing is considering how they determine the appropriate boundaries for their API components, and how they manage that over time as their total lines of code, architectural complexity, and number of people involved grow.

How an Under-Provisioned Database 10X'd Our AWS Lambda Costs

Sam Goldstein | February 21, 2018

This is the story of how running a too-small Postgres RDS instance caused a 10X increase in our daily AWS Lambda costs.

It was Valentine’s Day. I’d spent a good chunk of the week working on an internal business application which I’d built using serverless architecture. This application does several million function invocations per month and stores about 2 GB of data in an RDS Postgres database. That week I’d been working on adding additional data sources, which had increased the amount of data stored in Postgres by about 30%.

Shortly after I got into the office on Valentine’s Day, I was alerted that there were problems with this application. Errors on several of the Lambda functions had spiked and I was seeing function timeouts as well. I started digging into CloudWatch metrics and quickly discovered that my recently added data sources were causing growing pains in my Postgres DB. More specifically, it was running out of memory.

You can see the memory pressure clearly in this graph:

I was able to quickly diagnose that memory pressure within the RDS instance was leading to slow queries, causing function timeouts and errors, which would trigger automatic retries (AWS Lambda automatically retries failed function invocations twice). At some point this hit the DB’s connection limits, causing even more errors - a downward spiral. Fortunately I’d designed the microservices within the application to be fault tolerant and resilient to failures, but at that moment the system was limping along and needed intervention to fully recover.

It was clear I needed to increase DB resources so I initiated an upgrade to a larger RDS instance through Stackery’s Operations Console. While the upgrade was running I did some more poking around in AWS console.

This is when things started to get really interesting. I popped into the AWS Cost Explorer and immediately noticed something strange. My Lambda costs for the application had increased 10X, from about 50¢ the previous day to over $5 on Valentine’s Day. What was going on here?

I did some more digging and things started to make sense. Not only had the under-provisioned RDS instance degraded app performance, it had also dramatically increased my average function duration. Functions that ordinarily completed in a few tenths of a second were running all the way to their 30-second timeouts, or longer in some cases. Because they hit the timeout and failed, they’d be retried, which meant even more long function invocations.
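
To make the cost-duration relationship concrete, here's a back-of-the-envelope sketch. The memory size and request count are hypothetical; the per-GB-second rate is Lambda's published on-demand price at the time.

GB_SECOND_RATE = 0.00001667   # USD per GB-second of Lambda compute
MEMORY_GB = 0.5               # hypothetical 512 MB function
REQUESTS = 100000             # hypothetical daily invocations

normal_cost = REQUESTS * 0.3 * MEMORY_GB * GB_SECOND_RATE   # ~0.3s average duration -> ~$0.25
degraded_cost = REQUESTS * 30 * MEMORY_GB * GB_SECOND_RATE  # running to a 30s timeout -> ~$25

# Duration went up 100x, and compute cost scales linearly with duration,
# so the bill goes up 100x even though the request count never changed.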

You can see the dramatic increase in function runtime clearly in this graph:

Once the RDS instance upgrade had completed, things settled down. Error rates dropped and function duration returned to normal. Fortunately the additional $4.50 in Lambda costs won’t break the bank either. However, this highlights the tighter relationship between cost and performance that exists for serverless architectures. Generally this results in significantly lower hosting costs than traditional architectures, but the serverless world is not without its gotchas. Fortunately I had excellent monitoring, alerting, and observability in place for the performance, health, and cost of my system, which meant I could quickly detect and resolve the problem before it turned into a full-scale outage and a spiking AWS bill.
