Stacks on Stacks

The Serverless Ecosystem Blog by Stackery.


Disaster Recovery in a Serverless World - Part 2

Apurva Jantrania | September 17, 2018

This is part two of a multi-part blog series. In the previous post, we covered Disaster Recovery planning when building serverless applications. In this post, we’ll discuss the systems engineering needed for an automated solution in the AWS cloud.

As I started looking into implementing Stackery’s automated backup solution, my goal was simple: to support our disaster recovery plan, we needed a system that automatically copies backups of our database to a different account and a different region. This seemed like a straightforward task, but I was surprised to find no documentation on how to do it in an automated, scalable way - everything I could find covered only partial solutions, and all of it was done manually via the AWS Console. Yuck.

I hope this post helps fill that void and shows you how to implement an automated solution for your own disaster recovery plan. This post gets a bit long, so if that’s not your thing, skip ahead to the tl;dr.

The Initial Plan

AWS RDS has automated backups, which seemed like the perfect foundation to base this automation upon. Furthermore, RDS even emits events that seem ideal for kicking off a Lambda function that will then copy the snapshot to the disaster recovery account.

Discoveries

The first issue I discovered was that AWS does not allow you to share automated snapshots - AWS requires that you first make a manual copy of the snapshot before you can share it with another account. I initially thought this wouldn’t be a major issue - I could easily have my Lambda function kick off a manual copy first. According to the RDS Events documentation, there is an event, RDS-EVENT-0042, that fires when a manual snapshot is created. I could then use that event to share the newly created manual snapshot with the disaster recovery account.

This leads to the second issue - while RDS emits events for snapshots that are created manually, it does not emit events for snapshots that are copied manually. The AWS docs aren’t clear about this, and it’s an unfortunate feature gap. This means that I have to fall back to a timer-based Lambda function that searches for and shares the latest available snapshot.

Final Implementation Details

While this ended up being more complicated than initially envisioned, Stackery still made it easy to add all the pieces needed for fully automated backups. My final implementation looks like this:

The DB Event Subscription resource is a CloudFormation Resource containing a small snippet of CloudFormation that subscribes the DB Events topic to the RDS database.

Function 1 - dbBackupHandler

This function receives events from the RDS database via the DB Events topic. It then creates a copy of the snapshot with an ID that identifies the snapshot as an automated disaster recovery snapshot.

const AWS = require('aws-sdk');
const rds = new AWS.RDS();

const DR_KEY = 'dr-snapshot';
const ENV = process.env.ENV;

module.exports = async message => {
  // Only run DB Backups on Production and Staging
  if (!['production', 'staging'].includes(ENV)) {
    return {};
  }

  let records = message.Records;
  for (let i = 0; i < records.length; i++) {
    let record = records[i];

    if (record.EventSource === 'aws:sns') {
      let msg = JSON.parse(record.Sns.Message);
      if (msg['Event Source'] === 'db-snapshot' && msg['Event Message'] === 'Automated snapshot created') {
        let snapshotId = msg['Source ID'];
        let targetSnapshotId = `${snapshotId}-${DR_KEY}`.replace('rds:', '');

        let params = {
          SourceDBSnapshotIdentifier: snapshotId,
          TargetDBSnapshotIdentifier: targetSnapshotId
        };

        try {
          await rds.copyDBSnapshot(params).promise();
        } catch (error) {
          if (error.code === 'DBSnapshotAlreadyExists') {
            console.log(`Manual copy ${targetSnapshotId} already exists`);
          } else {
            throw error;
          }
        }
      }
    }
  }

  return {};
};

A couple of things to note:

  • I’m leveraging Stackery Environments in this function - I have used Stackery to define process.env.ENV based on the environment the stack is deployed to.
  • Automatic RDS snapshots have an ID that begins with ‘rds:’. However, snapshots created by the user cannot have a ‘:’ in the ID.
  • To make future steps easier, I append dr-snapshot to the ID of the snapshot that is created.

Function 2 - shareDatabaseSnapshot

This function runs every few minutes and shares any disaster recovery snapshots with the disaster recovery account.

const AWS = require('aws-sdk');
const rds = new AWS.RDS();

const DR_KEY = 'dr-snapshot';
const DR_ACCOUNT_ID = process.env.DR_ACCOUNT_ID;
const ENV = process.env.ENV;

module.exports = async message => {
  // Only run on Production and Staging
  if (!['production', 'staging'].includes(ENV)) {
    return {};
  }

  // Get latest snapshot
  let snapshot = await getLatestManualSnapshot();

  if (!snapshot) {
    return {};
  }

  // See if snapshot is already shared with the Disaster Recovery Account
  let data = await rds.describeDBSnapshotAttributes({ DBSnapshotIdentifier: snapshot.DBSnapshotIdentifier }).promise();
  let attributes = data.DBSnapshotAttributesResult.DBSnapshotAttributes;

  let isShared = attributes.find(attribute => {
    return attribute.AttributeName === 'restore' && attribute.AttributeValues.includes(DR_ACCOUNT_ID);
  });

  if (!isShared) {
    // Share Snapshot with Disaster Recovery Account
    let params = {
      DBSnapshotIdentifier: snapshot.DBSnapshotIdentifier,
      AttributeName: 'restore',
      ValuesToAdd: [DR_ACCOUNT_ID]
    };
    await rds.modifyDBSnapshotAttribute(params).promise();
  }

  return {};
};

async function getLatestManualSnapshot (latest = undefined, marker = undefined) {
  let result = await rds.describeDBSnapshots({ Marker: marker }).promise();

  result.DBSnapshots.forEach(snapshot => {
    if (snapshot.SnapshotType === 'manual' && snapshot.Status === 'available' && snapshot.DBSnapshotIdentifier.includes(DR_KEY)) {
      if (!latest || new Date(snapshot.SnapshotCreateTime) > new Date(latest.SnapshotCreateTime)) {
        latest = snapshot;
      }
    }
  });

  if (result.Marker) {
    return getLatestManualSnapshot(latest, result.Marker);
  }

  return latest;
}

  • Once again, I’m leveraging Stackery Environments to populate the ENV and DR_ACCOUNT_ID environment variables.
  • When sharing a snapshot with another AWS account, the AttributeName should be set to restore (see the AWS RDS SDK)

Function 3 - copyDatabaseSnapshot

This function will run in the Disaster Recovery account and is responsible for detecting snapshots that are shared with it and making a local copy in the correct region - in this example, it will make a copy in us-east-1.

const AWS = require('aws-sdk');
const rds = new AWS.RDS();

const sourceRDS = new AWS.RDS({ region: 'us-west-2' });
const targetRDS = new AWS.RDS({ region: 'us-east-1' });

const DR_KEY = 'dr-snapshot';
const ENV = process.env.ENV;

module.exports = async message => {
  // Only Production_DR and Staging_DR are Disaster Recovery Targets
  if (!['production_dr', 'staging_dr'].includes(ENV)) {
    return {};
  }

  let [shared, local] = await Promise.all([getSourceSnapshots(), getTargetSnapshots()]);

  for (let i = 0; i < shared.length; i++) {
    let snapshot = shared[i];
    let fullSnapshotId = snapshot.DBSnapshotIdentifier;
    let snapshotId = getCleanSnapshotId(fullSnapshotId);
    if (!snapshotExists(local, snapshotId)) {
      let targetId = snapshotId;

      let params = {
        SourceDBSnapshotIdentifier: fullSnapshotId,
        TargetDBSnapshotIdentifier: targetId
      };
      // Run the copy in the DR region so the new snapshot lands in us-east-1
      await targetRDS.copyDBSnapshot(params).promise();
    }
  }

  return {};
};

// Get snapshots that are shared to this account
async function getSourceSnapshots () {
  return getSnapshots(sourceRDS, 'shared');
}

// Get snapshots that have already been created in this account
async function getTargetSnapshots () {
  return getSnapshots(targetRDS, 'manual');
}

async function getSnapshots (rds, typeFilter, snapshots = [], marker = undefined) {
  let params = {
    IncludeShared: true,
    Marker: marker
  };

  let result = await rds.describeDBSnapshots(params).promise();

  result.DBSnapshots.forEach(snapshot => {
    if (snapshot.SnapshotType === typeFilter && snapshot.DBSnapshotIdentifier.includes(DR_KEY)) {
      snapshots.push(snapshot);
    }
  });

  if (result.Marker) {
    return getSnapshots(rds, typeFilter, snapshots, result.Marker);
  }

  return snapshots;
}

// Check to see if the snapshot `snapshotId` is in the list of `snapshots`
function snapshotExists (snapshots, snapshotId) {
  for (let i = 0; i < snapshots.length; i++) {
    let snapshot = snapshots[i];
    if (getCleanSnapshotId(snapshot.DBSnapshotIdentifier) === snapshotId) {
      return true;
    }
  }
  return false;
}

// Cleanup the IDs from automatic backups that are prepended with `rds:`
function getCleanSnapshotId (snapshotId) {
  let result = snapshotId.match(/:([a-zA-Z0-9-]+)$/);

  if (!result) {
    return snapshotId;
  } else {
    return result[1];
  }
}

  • Once again, leveraging Stackery Environments to populate ENV, I ensure this function only runs in the Disaster Recovery accounts.

TL;DR - How Automated Backups Should Be Done

  1. Have a function that manually creates an RDS snapshot on a timer (a scheduled Lambda). Use a timer interval that makes sense for your use case (see the sketch after this list).
    • Don’t bother trying to leverage the daily automated snapshot provided by AWS RDS.
  2. Have a second function that monitors for the successful creation of the snapshot from the first function and shares it with your disaster recovery account.

  3. Have a third function that operates in your disaster recovery account, monitors for snapshots shared with that account, and creates a copy of each shared snapshot so that it is owned by the disaster recovery account and lives in the correct region.
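
For step 1, here’s a minimal sketch of what that scheduled snapshot function might look like. The DB_INSTANCE_ID environment variable and the timestamp-based naming are assumptions for illustration - adapt them to your own stack:

const AWS = require('aws-sdk');
const rds = new AWS.RDS();

const DR_KEY = 'dr-snapshot';
// Assumed to be supplied via Stackery Environments or function configuration
const DB_INSTANCE_ID = process.env.DB_INSTANCE_ID;

module.exports = async () => {
  // Build a unique snapshot ID without ':' characters, e.g. mydb-dr-snapshot-2018-09-17T12-00-00-000Z
  let timestamp = new Date().toISOString().replace(/[:.]/g, '-');

  let params = {
    DBInstanceIdentifier: DB_INSTANCE_ID,
    DBSnapshotIdentifier: `${DB_INSTANCE_ID}-${DR_KEY}-${timestamp}`
  };

  await rds.createDBSnapshot(params).promise();

  return {};
};
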
How to Write 200 Lines of YAML in 1 Minute

Anna Spysz | September 11, 2018

Last month, our CTO Chase wrote about why you should stop YAML engineering. I completely agree with his thesis, though for slightly different reasons. As a new developer, I’ve grasped that it’s crucial to learn and do just what you need and nothing more - at least when you’re just getting started in your career.

Now, I’m all about learning for learning’s sake - I have two now-useless liberal arts degrees that prove that. However, when it comes to being a new developer, it’s very easy to get overwhelmed by all of the languages and frameworks out there, and get caught in paralysis as you jump from tutorial to tutorial and end up not learning anything very well. I’ve certainly been there - and then I decided to just get good at the tools I’m actually using for work, and learn everything else as I need it.

Which is what brings us to YAML - short for “YAML Ain’t Markup Language”. I started out as a Python developer. When I needed to, I learned JavaScript. When my JavaScript needed some support, I learned a couple of front-end frameworks, and as much Node.js as I needed to write and understand what my serverless functions were doing. As I got deeper into serverless architecture, it seemed like learning YAML was the next step - but if it didn’t have to be, why learn it? If I can produce 200+ lines of working YAML without actually writing a single line of it, in much less time than it would take me to write it myself (not counting the hours it would take to learn a new markup language), then that seems like the obvious solution.

So if a tool allows me to develop serverless apps without having to learn YAML, I’m all for that. Luckily, that’s exactly what Stackery does, as you can see in the video below:

Serverless for Total Beginners

Anna Spysz | August 16, 2018

As the newest member of the Stackery Engineering team and Stackery’s Resident N00b™, I have been wanting to explain what serverless is in the most beginner-friendly terms possible. This is my attempt to do so.

I recently graduated from a full-stack coding bootcamp, where I learned several ways to build and deploy a traditional (i.e. monolithic) web application and how to use containers to deploy an app, but nothing about serverless architecture. It wasn’t until I started my internship at Stackery that I even began to grasp what serverless is, and I’m still learning ten new things about it every day. While the concept of serverless functions and FaaS may seem daunting to new developers, I’ve found that it’s actually a great thing for beginners to learn; if done right, it can make the process of deployment a lot easier.

Above all, serverless is a new way of thinking about building applications. What’s exciting to me as a frontend-leaning developer is that it allows for most of the heavy lifting of your app to take place in the frontend, while cloud services handle typically backend aspects such as logging in users or writing values to a database. That means writing less code up front, and allows beginners to build powerful apps faster than the traditional monolith route.

So let’s dive in with some definitions.

What is a stack, and why are we stacking things?

A stack is essentially a collection of separate computing resources that work together as a unit to accomplish a specific task. In some applications, they can make up the entire backend of an app.

Stackery dashboard

The above example is about as simple as you can get with a stack. It consists of a function and an object store. When triggered, the function manipulates the data stored in the object store (in this case, an S3 bucket on AWS).

A simple use case would be a function that returns a specific image from the bucket when triggered - say, when a user logs into an app, their profile picture could be retrieved from the object store.
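
As a rough illustration, here’s a minimal sketch of what such a function might look like in Node.js. The bucket name, key layout, and event shape are assumptions for illustration, not details of the stack pictured above:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

// Hypothetical bucket name supplied through an environment variable
const BUCKET_NAME = process.env.BUCKET_NAME;

module.exports = async event => {
  // Look up the profile picture for the user who just logged in
  let key = `profile-pictures/${event.userId}.png`;

  let object = await s3.getObject({ Bucket: BUCKET_NAME, Key: key }).promise();

  // Return the image as a base64-encoded HTTP response
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'image/png' },
    body: object.Body.toString('base64'),
    isBase64Encoded: true
  };
};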

Here’s a somewhat more complex stack:

Stackery dashboard

This stack consists of a function (SignupHandler) that is triggered when someone submits an email address on a website’s newsletter signup form (Newsletter Signup API). The function takes the contents of that signup form, in this case a name and email address, and stores it in a table called Signup. It also has an error logger (another function called LogErrors), which records what happened should anything go wrong. If this stack were to be expanded, another function could email the contents of the Signup table to a user when requested, for example.

Under the hood, this stack is using several AWS services: Lambda for the functions, API Gateway for the API, and DynamoDB for the table.

Finally, here is a stack handling CRUD operations in a web application:

Stackery dashboard

While this looks like a complex operation, it’s actually just the GET, PUT, POST, and DELETE methods connected to a table of users. Each of the functions is handling just one operation, depending on which API endpoint is triggered, and then the results of that function are stored in a table.

This kind of CRUD stack would be very useful in a web application that requires users to sign up and sign in to use. When a user signs up, the POST API triggers the createUser function, which simply pulls up the correct DynamoDB table and writes the values sent (typically username and password) to the table. The next time the user comes back to the app and wants to log in, the getUser function is called by the GET API. Should the user change their mind and want to delete their account, the deleteUser function handles that through the DELETE API.
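
To make that concrete, here’s a minimal sketch of what a createUser function might look like. The table name, field names, and response shape are assumptions for illustration:

const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB.DocumentClient();

// Hypothetical table name supplied through an environment variable
const TABLE_NAME = process.env.USERS_TABLE_NAME;

module.exports = async event => {
  // API Gateway delivers the signup form as a JSON string in the request body
  let { username, password } = JSON.parse(event.body);

  // Write the new user to the users table
  // (a real application would hash the password before storing it)
  await dynamodb.put({
    TableName: TABLE_NAME,
    Item: { username, password }
  }).promise();

  return {
    statusCode: 201,
    body: JSON.stringify({ username })
  };
};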

Are microservices == or != serverless?

There is a lot of overlap between the concepts of microservices and serverless: both consist of small applications that do very specific things, usually as a part of a larger application. The main difference is how they are managed.

A complex web application - a storefront, for example - may consist of several microservices doing individual tasks, such as logging in users, handling a virtual shopping cart, and processing payments. In a microservice architecture, those individual apps still operate within a larger, managed application with operational overhead - usually a devOps team making it all work smoothly together.

With serverless, the operational overhead is largely taken care of by the serverless platform where your code lives. In the case of a function on AWS Lambda, just about everything but the actual code writing is handled by the platform, from launching an instance of an operating system to run the code in your function when it is triggered by an event, to then killing that OS or container when it is no longer needed.

Depending on the demand of your application, serverless can make it cheaper and easier to deploy and run, and is generally faster to get up and running than a group of microservices.

Are monoliths bad?

To understand serverless, it’s helpful to understand what came before: the so-called “monolith” application. A monolith application has a complex backend that lives on a server (or more likely, many servers), either at the company running the application or in the cloud, and is always running, regardless of demand - which can make it expensive to maintain.

The monolith is still the dominant form of application, and certainly has its strengths. But as I learned when trying to deploy my first monolith app in school, it can be quite difficult for beginners to deploy successfully, and is often overkill if you’re trying to deploy and test a simple application.

So serverless uses servers?

Stackery dashboard

Yes, there are still servers behind serverless functions, just as “the cloud” consists of a lot of individual servers.

After all, as the mug says, “There is no cloud, it’s just someone else’s computer”.

That’s true for serverless as well. We could just as well say, “There is no serverless, it’s just someone else’s problem.”

What I find great about serverless is that it gives developers, and especially beginning developers, the ability to build and deploy applications with less code, which means less of an overall learning curve. And for this (often frustrated) beginner, that’s quite the selling point.

Disaster Recovery in a Serverless World - Part 1

Nuatu Tseggai | July 12, 2018

This is part one of a multi-part blog series. In this post we’ll discuss Disaster Recovery planning when building serverless applications. In future posts we’ll highlight Disaster Recovery exercises and the engineering preparation necessary for success.

‘Eat Your Own Dog Food’

Nearly the entire mix of Stackery backend microservices runs on AWS Lambda compute. That’s not shocking - after all - the entire purpose of our business is to build a cohesive set of tools that enable teams to build production-ready serverless applications. It’s only fitting that we eat our own dog food and use serverless technologies wherever possible.

Which leads to the central question this blog post is highlighting: How should a team reason about Disaster Recovery when they build software atop serverless technologies?

(Spoiler Alert) Serverless doesn’t equate to a free lunch! The important bits of DR revolve around establishing a cohesive plan and exercising it regularly - all of which remains true when utilizing serverless infrastructure. But there’s good news! Serverless architectures free engineers from the minutiae of administering a platform, leaving them more time to focus on higher-level concerns such as Disaster Recovery, Security, and Technical Debt.

Before we get too far - let’s define Disaster Recovery (DR). In simple terms, it’s a documented plan that aims to minimize downtime and data loss in the event of a disaster. The term is most often used in the context of yearly audit-related exercises wherein organizations demonstrate compliance in order to meet regulatory requirements. It’s also very familiar to those who are charged with developing IT capabilities for mission-critical functions of the government.

Many of us at Stackery used to work at New Relic during a particularly explosive growth stage of the business. We were exposed to DR exercises that took months of work (from dozens of managers/engineers) to reach the objectives set by the business. That experience influenced us as we embarked on developing a DR plan for Stackery, but we still needed to work through a multitude of questions specific to our architecture.

What would happen to our product(s) if any of the following services running in AWS region XYZ experienced an outage? (S3, RDS, Dynamo, Cognito, Lambda, Fargate, etc.)

  • How long before we fully recover?
  • How much data loss would we incur?
  • What process would we follow to recover?
  • How would we communicate status and next steps internally?
  • How would we communicate status and next steps to customers?

These questions quickly reminded us that DR planning requires direction from the business. In our case, we looked to our CEO, CTO, and VP of Engineering to set two goals:

  1. Recovery Time Objective (RTO): the length of time it would take us to swap to a second, hot production service in a separate AWS region.
  2. Recovery Point Objective (RPO): the acceptable amount of data loss measured in time.

In order to determine these goals our executives had to consider the financial impact to the business during downtime (determined by considering loss of business and damage to our reputation). Not surprisingly, the dimensions of this business decision will be unique to every business. It’s important that your executive team takes the time to understand why it’s important for them to be in charge of defining the RTO and RPO and that they are engaged in the ongoing development and execution of the DR plan. It’s a living plan and as such will require improvements as the company evolves.

Based on our experience, we developed the below outline that you may find helpful as your team develops a DR plan.

Disaster Recovery Plan

  1. Goals
  2. Process
    • Initiating DR
    • Assigning Roles
    • Incident Commander
    • Technical Lead
  3. Communication
    • Engineering Coordination
    • Leadership Updates
  4. Recovery Steps
  5. Continuous Improvement
    • TODO
    • Lessons Learned
    • Frequency

Goals:

This section describes our RTO and RPO (see above).

Process:

This section describes the process to follow in the event that it becomes necessary to initiate Disaster Recovery. This is the same process followed during Disaster Recovery Exercises.

Initiating DR:

The Disaster Recovery procedure may be initiated in the event of a major prolonged outage upon the CEO’s request. If the CEO is unavailable and cannot be reached, DR can be initiated by another member of the executive team.

Assigning Roles:

Roles will be assigned by the executive initiating the DR process.

Incident Commander (IC):

The Incident Commander is responsible for coordinating the operational response and communicating status to stakeholders. The IC is responsible for designating a Technical Lead and engaging additional employees necessary for the response. During the DR process, the IC will send hourly email updates to the executive team. These updates will include the current status of the DR process, a timeline of events since DR was initiated, and any requests for help or additional resources.

Technical Lead (TL):

The Technical Lead has primary responsibility for driving the DR process towards a successful technical resolution. The IC will solicit status information and requests for additional assistance from the TL.

Communication:

Communication is critical to an effective and well-coordinated response. The following communication channels should be used:

Engineering Coordination:

The IC, TL, and engineers directly involved with the response will communicate in the #disaster-recovery-XYZ Slack channel. In the event that Slack is unavailable, the IC will initiate a Google Hangout and communicate instructions for connecting via email and cell phone.

Leadership Updates:

The IC will provide hourly updates to the executive team via email. See details in separate Incident Commander doc.

Recovery Steps:

High level steps to be performed during DR.

  1. Update Status Page
  2. Restore Datastore(s) in prodY from latest prodX
    • DB
    • Authentication
    • Authorization
    • Cache
    • Blob Storage
  3. Restore backend microservices
    • Bootstrap services with particular focus on upstream and downstream dependencies
  4. Swap CloudFront distribution(s)
  5. Swap API endpoint(s) via DNS
    • Update DNS records to point to prodY API endpoints
  6. Verify recovery is complete
    • Redeploy stack from user account to verify service level
  7. Update Status Page

Continuous Improvement:

This section captures TODO action items and next steps, lessons learned, and the frequency in which we’ll revisit the plan and accomplish the TODO action items.

In the next post, we’ll dig into the work it takes to prepare for and perform DR exercises. To learn how Stackery can make building microservices on Lambda manageable and efficient, contact our sales team or get a free trial today.

Self Healing Serverless Applications - Part 3

Nate Taggart | July 04, 2018

This blog post is based on a presentation I gave at Glue Conference 2018. The original slides: Self-Healing Serverless Applications – GlueCon 2018. View the rest here. Parts: 1, 2, 3

This is part three of a three-part blog series. In the first post we covered some of the common failure scenarios you may face when building serverless applications. In the second post we introduced the principles behind self-healing application design. In this next post, we’ll apply these principles with solutions to real-world scenarios.

Self-Healing Serverless Applications

We’ve been covering the fundamental principles of self-healing applications and the underlying errors that serverless applications commonly face. Now it’s time to apply these principles to the real-world scenarios that the errors will raise. Let’s step through five common problems together.

We’ll solve:

  • Uncaught Exceptions
  • Upstream Bottlenecks
  • Timeouts
  • Stuck stream processing
  • and, Downstream Bottlenecks

Uncaught Exceptions

Uncaught exceptions are unhandled errors in your application code. While this problem isn’t unique to Lambda, diagnosing it in a serverless application can be a little trickier because your compute instance is ultimately ephemeral and will shut down upon an uncaught exception. What we want to do is detect that an exception is about to occur, and either remediate or collect diagnostic information at runtime before the Lambda instance is gone. After we’ve handled the error, we can simply re-throw it to avoid corrupting the behavior of the application. To do that, we’ll use three of the principles we previously introduced: introducing universal instrumentation, collecting event-centric diagnostics, and giving everyone visibility.

At an abstract level, it’s relatively easy to instrument a function to catch errors (we could simply wrap our code in a try/except block). While this solution works just fine for a single function, it doesn’t easily scale across an entire application or organization. Do you really want to be responsible for ensuring that every single function is individually instrumented?

The better approach is to use “universal instrumentation.” We’ll create a generic handler which will invoke our real target function and always use it as the top level handler for every piece of code. Here’s an example:

def genericHandler(message, context):
	try:
		return targetHandler(message, context)
	except Exception as error:
		# Collect event diagnostics
		# Possibly re-route the event or otherwise remediate the transaction
		raise error

As you can see, we have in fact just run our function through a try/except clause, with the benefit of now being able to invoke any function with one standard piece of instrumentation code. This means that every function will now behave consistently (consistent logs, metrics, etc) across our entire stack.

This instrumentation also allows us to collect event-centric diagnostics. Keep in mind that by default, a Lambda exception will give you a log with a stack trace, but no information on the event which led to this exception. It’s much easier to debug and improve application health with relevant event data. And now that we have centralized logs, events, and metrics, it’s much easier for everyone on the team to have visibility into the health of the entire application.

Note: you’ll want to be careful that you’re not logging any sensitive data when you capture events.

Upstream Bottleneck

An upstream bottleneck occurs when a service calling into Lambda hits a scaling limit, even though Lambda isn’t being throttled. A classic example of this is API Gateway reaching throughput limits and failing to invoke downstream Lambdas on every request.

The key principles to focus on here are: identifying service limits, using self-throttling, and notifying a human.

It’s pretty straightforward to identify service limits, and if you haven’t done this you really should. Know what your throughput limits are for each of the AWS services you’re using and set alarms on throughput metrics before you hit capacity (notify a human!).

The more sophisticated, self-healing approach comes into play when you choose to throttle yourself before you get throttled by AWS. In the case of an API Gateway limit, you (or someone in your organization) may already control the requests coming to this Gateway. If, for example, you have a front-end application backed by API Gateway and Lambda, you could introduce exponential backoff logic that kicks in whenever you have backend errors. Pay particular attention to HTTP 429 Too Many Requests responses, which are (generally) what API Gateway returns when it’s being throttled. I say “generally” because in practice this is actually inconsistent and it will sometimes return 5XX error codes as well. In any event, if you are able to control the volume of requests (which may be coming from another service tier), you can help your application self-heal and fail gracefully.
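
As a rough sketch of that idea, a front-end caller might retry with exponential backoff whenever the backend signals throttling. The endpoint, retry count, and delays below are assumptions for illustration, not a prescription:

// Minimal client-side sketch: back off on 429s (and 5XXs) with exponentially increasing delays
async function callBackendWithBackoff (url, attempt = 0) {
  let response = await fetch(url);

  if ((response.status === 429 || response.status >= 500) && attempt < 5) {
    let delay = Math.pow(2, attempt) * 100; // 100ms, 200ms, 400ms, ...
    await new Promise(resolve => setTimeout(resolve, delay));
    return callBackendWithBackoff(url, attempt + 1);
  }

  return response;
}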

Timeouts

Sometimes Lambdas time out, which can be a particularly painful and expensive kind of error, since Lambdas will automatically retry multiple times in many cases, driving up the active compute time. When a timeout occurs, the Lambda will ultimately fail without capturing much in terms of diagnostics. No event data, no stack trace – just a timeout error log. Fortunately, we can handle these errors pretty similarly to uncaught exceptions. We’ll use the principles of self-throttle, universal instrumentation, and considering alternative resource types.

The instrumentation for this is a little more complex, but stick with me:

# originalHandlerThread is assumed to be a threading.Thread subclass that runs
# the real handler in a separate thread and exposes its return value as .result
def genericHandler(message, context):
	# Detect when the Lambda will time out and set a timer for 1 second sooner
	timeout_duration = context.get_remaining_time_in_millis() - 1000

	# Invoke the original handler in a separate thread and set our new stricter timeout limit
	handler_thread = originalHandlerThread(message, context)
	handler_thread.start()
	handler_thread.join(timeout_duration / 1000)

	# If timeout occurs
	if handler_thread.is_alive():
		error = TimeoutError('Function timed out')

		# Collect event diagnostics here

		raise error
	return handler_thread.result

This universal instrumentation is essentially self-throttling by forcing us to conform to a slightly stricter timeout limit. In this way, we’re able to detect an imminent timeout while the Lambda is still alive and can extract meaningful diagnostic data to retroactively diagnose the issue. This instrumentation can, of course, be mixed with our error handling logic.

If this seems a bit complex, you might like using Stackery: we automatically provide instrumentation for all of our customers without requiring *any* code modification. All of these best practices are just built in.

Finally, sometimes we should be considering other resource types. Fargate is another on-demand compute option which can run longer and with higher resource limits than Lambda. It can still be triggered by events and is a better fit for certain workloads.

Stream Processing Gets “Stuck”

When Lambda is reading off of a Kinesis stream, failing invocations can cause the stream to get stuck (more accurately: just that shard). This is because Lambda will continue to retry the failing message until it’s successful and will not get to the workload behind the stuck message until it’s handled. This introduces an opportunity for some of the other self-healing principles: reroute and unblock, automate known solutions, and consider alternative resource types.

Ultimately, you’re going to need to remove the stuck message. Initially, you might be doing this manually. That will work if this is a one-off issue, but issues rarely are. The ideal solution here is to automate the process of rerouting failures and unblocking the rest of the workload.

The approach that we use is to build a simple state machine. The logic is very straightforward: is this the first time we’ve seen this message? If so, log it. If not, this is a recurring failure and we need to move it out of the way. You might simply “pass” on the message, if your workload is fairly fault tolerant. If it’s critical, though, you could move it to a dedicated “failed messages” stream for someone to investigate or possibly to route through a separate service.
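
Here’s a minimal sketch of that state machine, assuming a DynamoDB table to remember failures and an SQS queue standing in for the dedicated “failed messages” stream - both resources and their names are illustrative:

const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB.DocumentClient();
const sqs = new AWS.SQS();

// Hypothetical resources used to track and reroute failing records
const FAILURE_TABLE = process.env.FAILURE_TABLE_NAME;
const FAILED_MESSAGES_QUEUE_URL = process.env.FAILED_MESSAGES_QUEUE_URL;

async function handleFailedRecord (record, error) {
  let recordId = record.eventID;

  // Is this the first time we've seen this record fail?
  let { Item } = await dynamodb.get({
    TableName: FAILURE_TABLE,
    Key: { recordId }
  }).promise();

  if (!Item) {
    // First failure: remember it, then re-throw so Lambda retries normally
    await dynamodb.put({
      TableName: FAILURE_TABLE,
      Item: { recordId, error: error.message }
    }).promise();
    throw error;
  }

  // Recurring failure: move the record out of the way so the shard can drain
  await sqs.sendMessage({
    QueueUrl: FAILED_MESSAGES_QUEUE_URL,
    MessageBody: JSON.stringify(record)
  }).promise();
}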

This is where alternative resources come into play again. Maybe the Lambda is failing because it’s timing out (good thing you introduced universal instrumentation!). Maybe sending your “failed messages” stream to a Fargate instance solves your problem. You might also want to investigate the similar but actually different ways that Kinesis, SQS, and SNS work and make sure you’re choosing the right tool for the job.

Downstream Bottleneck

We talked about upstream bottlenecks where Lambda is failing to be invoked, but you can also hit a case where Lambda is scaling up faster than its dependencies and causing downstream bottlenecks. A classic example of this is Lambda depleting the connection pool for an RDS instance.

You might be surprised to learn that Lambda holds onto its database connections, even while the container is cached between invocations, unless you explicitly close the connection in your code. So do that. Easy enough. But you’re also going to want to pay attention to some of our self-healing key principles again: identify service limits, automate known solutions, and give everyone visibility.
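
As a minimal sketch of the “close your connections” advice, assuming a MySQL-backed RDS instance and the mysql2 client library (both assumptions for illustration):

const mysql = require('mysql2/promise');

module.exports = async event => {
  // Open a connection for this invocation
  let connection = await mysql.createConnection({
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME
  });

  try {
    let [rows] = await connection.query('SELECT * FROM users WHERE id = ?', [event.userId]);
    return rows;
  } finally {
    // Explicitly close the connection so it isn't held open by the frozen container
    await connection.end();
  }
};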

In theory, Lambda is (nearly, kind of, sort of) infinitely scalable. But the rest of your application (and the other resource tiers) aren’t. Know your service limits: how many connections can your database handle? Do you need to scale your database?

What makes this error class actually tricky, though, is that multiple services may have shared dependencies. You’re looking at a performance bottleneck thinking to yourself, “but I’m not putting that much load on the database…” This is an example of why it’s so important to have shared visibility. If your dependencies are shared, you need to understand not just your own load, but that of all of the other services hammering this resource. You’ll really want a monitoring solution that includes service maps and makes it clear how the various parts of your stack are related. That’s why, even though most of our customers work day-to-day from the Stackery CLI, the UI is still a meaningful part of the product.

The Case for Self-Healing

Before we conclude, I’d like to circle back and talk again about the importance of self-healing applications. Serverless is a powerful technology that outsources a lot of the undifferentiated heavy lifting of infrastructure management, but it requires a thoughtful approach to software development. As we add tools to accelerate the software lifecycle, we need to keep some focus on application health and resiliency. The “Self-Healing” philosophy is the approach that we’ve found which allows us to capture the velocity gains of serverless and unlock the upside of scalability, without sacrificing reliability or SLAs. If you’re taking serverless seriously, you should incorporate these techniques and champion them across your organization so that serverless becomes a mainstay technology in your stack.

Self Healing Serverless Applications - Part 2

Nate Taggart | June 18, 2018

This blog post is based on a presentation I gave at Glue Conference 2018. The original slides: Self-Healing Serverless Applications – GlueCon 2018. View the rest here. Parts: 1, 2, 3

This is part two of a three-part blog series. In the last post we covered some of the common failure scenarios you may face when building serverless applications. In this post we’ll introduce the principles behind self-healing application design. In the next post, we’ll apply these principles with solutions to real-world scenarios.

Learning to Fail

Before we dive into the solution-space, it’s worth introducing the defining principles for creating self-healing systems: plan for failure, standardize, and fail gracefully.

Plan for Failure

As an industry, we have a tendency to put a lot of time planning for success and relatively little time planning for failure. Think about it. How long ago did you first hear about Load Testing? Load testing is a great way to prepare for massive success! Ok, now when did you first hear about Chaos Engineering? Chaos Engineering is a great example of planning for failure. Or, at least, a great example of intentionally failing and learning from it. In any event, if you want to build resilient systems, you have to start planning to fail.

Planning for failure is not just a lofty ideal, there are easy, tangible steps that you can begin to take today:

  • Identify Service Limits: Remember the Lambda concurrency limits we covered in Part 1? You should know what yours are. You’re also going to want to know the limits on the other service tiers, like any databases, API Gateway, and event streams you have in your architecture. Where are you most likely to bottleneck first?
  • Use Self-Throttling: By the time you’re being throttled by Lambda, you’ve already ceded control of your application to AWS. You should handle capacity limits in your own application, while you still have options. I’ll show you how we do it, when we get to the solutions.
  • Consider Alternative Resource Types: I’m about as big a serverless fan as there is, but let’s be serious: it’s not the only tool in the tool chest. If you’re struggling with certain workloads, take a look at alternative AWS services. Fargate can be good for long-running tasks, and I’m always surprised how poorly understood the differences are between Kinesis vs. SQS vs. SNS. Choose wisely.

Standardize

One of the key advantages of serverless is the dramatic velocity it can enable for engineering orgs. In principle, this velocity gain comes from outsourcing the “undifferentiated heavy lifting” of infrastructure to the cloud vendor, and while that’s certainly true, a lot of the velocity in practice comes from individual teams self-serving their infrastructure needs and abandoning the central planning and controls that an expert operations team provides.

The good news is, it doesn’t need to be either-or. You can get the engineering velocity of serverless while retaining consistency across your application. To do this, you’ll need a centralized mechanism for building and delivering each release, managing multiple AWS accounts, multiple deployment targets, and all of the secrets management to do this securely. This means that your open source framework, which was an easy way to get started, just isn’t going to cut it anymore.

Whether you choose to plug in software like Stackery or roll your own internal tooling, you’re going to need to build a level of standardization across your serverless architecture. You’ll need standardized instrumentation so that you know that every function released by every engineer on every team has the same level of visibility into errors, metrics, and event diagnostics. You’re also going to want a centralized dashboard that can surface performance bottlenecks to the entire organization – which is more important than ever before since many serverless functions will distribute failures to other areas of the application. Once these basics are covered, you’ll probably want to review your IAM provisioning policies and make sure you have consistent tagging enforcement for inventory management and cost tracking.

Now, admittedly, this standardization need isn’t the fun part of serverless development. That’s why many enterprises are choosing to use a solution like Stackery to manage their serverless program. But even if standardization isn’t exciting, it’s critically important. If you want to build serverless into your company’s standard tool chest, you’re going to need it to be successful. To that end, you’ll want to know that there’s a single, consistent way to release or roll back your serverless applications. You’ll want to ensure that you always have meaningful log and diagnostic data and that everyone is sending it to the same place so that in a crisis you’ll know exactly what to do. Standardization will make your serverless projects reliable and resilient.

Fail Gracefully

We plan for failure and standardize serverless implementations so that when failure does happen we can handle it gracefully. This is where “self-healing” gets exciting. Our standardized instrumentation will help us identify bottlenecks automatically and we can automate our response in many instances.

One of the core ideas in failing gracefully is that small failures are preferable to large ones. With this in mind, we can often control our failures to minimize the impact, and we do this by rerouting and unblocking from failures. Say, for example, you have a Lambda reading off of a Kinesis stream and failing on a message. That failure is now holding up the entire Kinesis shard, so instead of having one failed transaction you’re now failing to process a significant workload. We can instead allow that one blocking transaction to fail by removing it from the stream and, instead of processing it normally, simply logging it out as a failed transaction and getting it out of the way. That’s a small failure, but better than a complete system failure in most cases.

Finally, while automating solutions is always the goal with self-healing serverless development, we can’t ignore the human element. Whenever you’re taking an automated action (like moving that failing Kinesis message), you should be notifying a human. Intelligent notification and visibility are great, but they’re even better when the notification comes with all of the diagnostic details, including the actual event that failed. This allows your team to quickly reproduce and debug the issue and turn around a fix as quickly as possible.

See it in action

In our final post we’ll talk through five real-world use cases and show how these principles apply to common failure patterns in serverless architectures.

We’ll solve:

  • Uncaught Exceptions
  • Upstream Bottlenecks
  • Timeouts
  • Stuck stream processing
  • and, Downstream Bottlenecks

If you want to start playing with self-healing serverless concepts right away, go start a free 60 day evaluation of Stackery. You’ll get all of these best practices built-in.

Self Healing Serverless Applications - Part 1

Nate Taggart | June 07, 2018

This blog post is based on a presentation I gave at Glue Conference 2018. The original slides: Self-Healing Serverless Applications – GlueCon 2018. View the rest here. Parts: 1, 2, 3

This is part one of a multi-part blog series. In this post we’ll discuss some of the common failure scenarios you may face when building serverless applications. In future posts we’ll highlight solutions based on real-world scenarios.

What to expect when you’re not expecting

If you’ve been swept up in the serverless hype, you’re not alone. Frankly, serverless applications have a lot of advantages, and my bet is that (despite the stupid name) “serverless” is the next major wave of cloud infrastructure and a technology you should be betting heavily on.

That said, like all new technologies, serverless doesn’t always live up to the hype, and separating fact from fiction is important. At first blush, serverless seems to promise infinite and immediate scaling, high availability, and little-to-no configuration. The truth is that while serverless does offload a lot of the “undifferentiated heavy lifting” of infrastructure management, there are still challenges with managing application health and architecting for scale. So, let’s take a look at some of the common failure modes for AWS Lambda-based applications.

Runtime failures

The first major category of failures is what I classify as “runtime” or application errors. I assume it’s no surprise to you that if you introduce bugs in your application, Lambda doesn’t solve that problem. That said, you may be very surprised to learn that when you throw an uncaught exception, Lambda will behave differently depending on how you architect your application.

Let’s briefly touch on three common runtime failures:

  • Uncaught Exceptions: any unhandled exception (error) in your application.
  • Timeouts: your code doesn’t complete within the maximum execution time.
  • Bad State: a malformed message or improperly provided state causes unexpected behavior.

Uncaught Exceptions

When Lambda is running synchronously (like in a request-response loop, for example), the Lambda will return an error to the caller and will log an error message and stack trace to CloudWatch, which is probably what you would expect. It’s different though when a Lambda is called asynchronously, as might be the case with a background task. In that event, when throwing an error, Lambda will retry up to three times. In fact, it will even try indefinitely when reading off of a stream, like with Kinesis. In any case, when a Lambda fails in an asynchronous architecture, the caller is unaware of the error – although there is still a log record sent to CloudWatch with the error message and stack trace.

Timeouts

Sometimes Lambda will fail to complete within the configured maximum execution time, which by default is 3 seconds. In this case, it will behave like an uncaught exception, with the caveat that you won’t get a stack trace out in the logs and the error message will be for the timeout and not for the potentially underlying application issue, if there is one. Using only the default behavior, it can be tricky to diagnose why Lambdas are timing out unexpectedly.

Bad State

Since serverless function invocations are stateless, state must be supplied on or after invocation. This means that you may be passing state data through input messages or by connecting to a database to retrieve state when the function starts. In either scenario, it’s possible to invoke a function but not properly supply the state which the function needs to execute properly. The trick here is that these “bad state” situations can either fail fairly noisily (as an uncaught exception) or fail silently without raising any alarms. When these errors occur silently it can be nearly impossible to diagnose them, or sometimes to even notice that you have a problem. The major risk is that the events which trigger these functions have expired and, since state may not be being stored correctly, you may have permanent data loss.
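
One simple defense is to validate the required state as soon as the function is invoked, so that bad state fails noisily instead of silently. A minimal sketch, with the event shape and field names assumed for illustration:

module.exports = async message => {
  let order = message.detail;

  // Fail loudly (and early) if the event doesn't carry the state we need,
  // rather than letting a half-formed order slip through silently
  if (!order || !order.orderId || !order.customerId) {
    throw new Error(`Malformed event, missing order state: ${JSON.stringify(message)}`);
  }

  // ...continue processing with known-good state...
  return { orderId: order.orderId };
};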

Scaling failures

The other class of failures worth discussing are scaling failures. If we think of runtime errors as application-layer problems, we can think of scaling failures as infrastructure-layer problems.

The three common scaling failures are:

  • Concurrency Limits: when Lambda can’t scale high enough.
  • Spawn Limits: when Lambda can’t scale fast enough.
  • Bottlenecking: when your architecture isn’t scaling as much as Lambda.

Concurrency Limits

You can understand why Amazon doesn’t exactly advertise their concurrency limits, but there are, in fact, some real advantages to having limits in place. First, concurrency limits are account limits that determine how many simultaneously running instances you can have of your Lambda functions. These limits can really save you in scenarios where you accidentally trigger an unexpected and massive workload. Sure, you may quickly run up a tidy little bill, but at least you’ll hit a cap and have time to react before things get truly out of hand. Of course, the flipside to this is that your application won’t really scale “infinitely,” and if you’re not careful you could hit your limits and throttle your real traffic. No bueno.

For synchronous architectures, these Lambdas will simply fail to be invoked without any retry. If you’re invoking your Lambdas asynchronously, like reading off of a stream, the Lambdas will fail to invoke initially, but will resume once your concurrency drops below the limit. You may experience some performance bottlenecks in this case, but eventually the workload should catch up. It’s worth noting that by contacting AWS you can usually get them to raise your limits if needed.

Spawn Limits

While most developers with production Lambda workloads have probably heard of concurrency limits, in my experience very few know about spawn limits. Spawn limits are account limits on the rate at which new Lambda instances can be invoked. This can be tricky to identify because if you glance at your throughput metrics you may not even be close to your concurrency limit, but could still be throttling traffic.

The default behavior for spawn limits matches concurrency limits, but again, it may be especially challenging to identify and diagnose spawn limit throttling. Spawn limits are also very poorly documented and, to my knowledge, it’s not possible to have these limits raised in any way.

Bottlenecking

The final scaling challenge involves managing your overall architecture. Even when Lambda scales perfectly (which, in fairness, is most of the time!), you must design your other service tiers to scale as well or you may experience upstream or downstream bottlenecks. In an upstream bottleneck, like when you hit throughput limits in API Gateway, your Lambdas may fail to invoke. In this case, you won’t get any Lambda logs (they really weren’t invoked), so you’ll have to be paying attention to other metrics to detect this. It’s also possible to create downstream bottlenecks. One way this can happen is when your Lambdas scale up but deplete the connection pool for a downstream database. These kinds of problems can behave like an uncaught exception, lead to timeouts, or distribute failures to other functions and services.

Introducing Self-Healing Serverless Applications

The solution to all of this is to build resiliency with “Self-Healing” Serverless Applications. This is an approach for architecting applications for high-resiliency and for automated error resolution.

In our next post, we’ll dig into the three design principles for self-healing systems:

  • Plan for Failure
  • Standardize
  • Fail Gracefully

We’ll also learn to apply these principles to real-world scenarios that you’re likely to encounter as you embrace serverless architecture patterns. Be sure to watch for the next post!

Stackery 2018 Product Updates

Sam Goldstein | May 16, 2018

Our product engineering team ships every single day.

That means Stackery’s product gets better every single day. Stackery engineers commit code into git which marches into our continuous delivery pipeline. We promote each version of our microservices, frontend, and CLI through multiple testing environments, rolling shiny new features into production or notifying the team of failures. This is the best way we know to develop modern software and explains why our team is able to ship so much functionality so rapidly.

However, because we’re constantly shipping, it means we need to pause periodically to take note of new features and improvements. In this post I’ll summarize some of the most significant features and changes from our product team over the past few months. For a more detailed list of changes, you can read and/or follow Stackery’s Release Notes.

Referenced Resource

One of the best things about microservice architecture is the degree which you can encapsulate and reuse functionality. For example, if you need to check if a user is authorized to perform a certain action, there’s no need to scatter permissioning code throughout your services. Put it all in one place (an AuthorizationService perhaps), and call out to that in each service that needs to check permissions.

Stackery’s Referenced Resource nodes let you reference existing infrastructure resources (be they Lambda functions, S3 buckets, VPCs, you name it) by their AWS ARN and seamlessly integrate them into your other services.

One of the best uses I’ve seen for Referenced Resources is as the mechanism for implementing centralized error reporting in serverless architectures. Write one central Lambda function that forwards exceptions into your primary error reporting and alerting tool. Configure every other stack to send error events to this central handler. Voila! Complete visibility into all serverless application errors.
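
Here’s a rough sketch of that pattern from the sending side. The ERROR_HANDLER_ARN environment variable and the doWork business logic are assumptions for illustration:

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda();

// ARN of the central error-handling function, wired in as a Referenced Resource
const ERROR_HANDLER_ARN = process.env.ERROR_HANDLER_ARN;

module.exports = async message => {
  try {
    return await doWork(message); // hypothetical business logic
  } catch (error) {
    // Forward the error and the triggering event to the central handler
    await lambda.invoke({
      FunctionName: ERROR_HANDLER_ARN,
      InvocationType: 'Event', // fire-and-forget
      Payload: JSON.stringify({ error: error.message, stack: error.stack, event: message })
    }).promise();

    throw error;
  }
};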

Support for Multiple AWS Accounts

Every company we work with uses multiple AWS accounts. Sometimes there’s one for production and one for everything else. In Stackery’s engineering team each engineer has multiple accounts for development and testing, as well as shared access to accounts for integration testing, staging, and production. Splitting your infrastructure across multiple accounts has major benefits. You can isolate permissions and account-wide limits, minimizing risk to critical accounts (e.g. production).

However, managing deployment of serverless architectures across multiple accounts is often a major PITA. This is why working across multiple accounts is now treated as a first-class concern across all of Stackery’s functionality. Multiple AWS accounts can be registered within a Stackery account using our CLI tool. Stackery environments are tied to AWS accounts, which maps flexibly onto the vast majority of AWS account usage patterns.

Managing multiple AWS accounts is a key part of most organizations’ cloud security strategy. Stackery supports this by relying on your existing AWS IAM policies and roles when executing changes. If the individual executing the change doesn’t have permission in that AWS account, the action will fail. This makes it straightforward to set up workflows where engineers have full control to make changes in development and testing environments, but can only propose changes in the production account, which are then reviewed and executed by an authorized individual or automation tool.

You can read more in our knowledge base article about Working with multiple AWS accounts in Stackery

CloudFormation Resource Nodes

Sometimes you need to do something a little different, which is why we built custom CloudFormation Resource nodes. You can use these to provision any AWS resource and take advantage of the full power and flexibility of CloudFormation, for situations when that’s required or desirable.

What’s been coolest about rolling this feature out is the variety of creative ways we’ve seen it used. For example, you can use CloudFormation Resource nodes to automatically configure and seed a database the first time you deploy to a new environment. You can also use them to automatically deploy an HTML front end to CloudFront each time you deploy your backend serverless app. The possibilities are endless.

AWS Resource Tagging

Resource Tagging may not be the most glamorous of features, but it’s a critical part of most organizations’ strategies for tracking cost, compliance, and ownership across their infrastructure. Stackery now boasts first-class support for tagging provisioned resources. We also provide the ability to require specific tags prior to deployment, making it orders of magnitude easier to get everyone on the same page on how to correctly tag resources.

Always Shipping

Our goal is to always be shipping. We aim to push out valuable changes every day. Customers gain more control and visibility over their serverless applications each day, so they can ship faster and more frequently too. Look out for more great changes rolling out each day in the product, and watch this blog for regular announcements summarizing our progress. We also love to hear what you think, so if you have wants or needs for managing your serverless infrastructure, don’t hesitate to let us know.

Get the Serverless Development Toolkit for Teams

Sign up now for a 60-day free trial. Contact one of our product experts to get started building amazing serverless applications today.
