Stacks on Stacks

The Serverless Ecosystem Blog by Stackery.

Posts on Engineering

To Do Serverless Right, You Need A New IAM Approach

Nate Taggart | April 12, 2018

Identity and Access Management (IAM) is an important tool for cloud infrastructure and user management. It governs access control for both cloud services and users, and can incorporate features around auditing, authentication policies, and governance.

Use of IAM involves a multiple-step process of creating roles and permissions and then assigning those roles and permissions to users, groups, and resources. In static (or relatively stable) environments, like those on legacy infrastructure, this is a task that can be configured once and only periodically updated. A critical, once-and-done task like this has historically belonged to a highly-privileged operations team, which could own it and develop IAM permissioning as a core competency. In serverless environments, however, manual provisioning and assignment of IAM roles and permissions can have a dramatically negative impact on team velocity – one of the key advantages of serverless infrastructure.

Serverless Velocity and IAM

Serverless infrastructure is highly dynamic and prone to frequent change. As developers develop functions for deployment into a FaaS-style architecture, they’re fundamentally creating new infrastructure resources which must be governed. Since these changes can occur several times per day, waiting for an operations team to create and assign IAM policies is an unnecessary and highly impactful bottleneck to the application delivery cycle.

As a further challenge, FaaS architectures are difficult (if not impossible) to recreate in local environments. This means that the development cycle is likely to involve iterating and frequently deploying into a development account or environment. Having an operations team manually creating IAM policies in the course of this development cycle is prohibitively challenging.

These bottlenecks notwithstanding, IAM policies continue to play a critical role in security, governance, and access control. Organizations must find a way to create and assign IAM policies without blocking the product development team from their high-velocity serverless application lifecycle.

The New Serverless IAM Strategy

There are generally two approaches to IAM policy-making for serverless. The first is to extend the responsibility from your specialized operations team to your entire development group. This approach has a number of drawbacks, including the need for extensive training, a human-error risk, a reduction in development velocity, and a broad extension of access which dramatically reduces control.

The second, and preferred, solution is to automatically provision IAM policies based on a rule-set of best-practices and governance standards. In this scenario, a company would either develop their own release tooling or purchase a pre-built solution like Stackery’s Serverless Operations Console. This software would then be responsible for encapsulating principles of “Least Privilege,” environment management, policy creation, and policy application for all serverless stacks.
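To make this concrete, a rule-based policy generator might look like the following sketch. The resource-type-to-action mapping and function shape here are hypothetical illustrations of the general approach, not Stackery's actual implementation:

```javascript
// Hypothetical sketch of rule-based policy generation: given the resources a
// function declares it uses, emit a least-privilege IAM policy document.
// The resource-type-to-action mapping below is illustrative only.
const ACTIONS_BY_RESOURCE_TYPE = {
  dynamoTable: ['dynamodb:GetItem', 'dynamodb:PutItem', 'dynamodb:Query'],
  s3Bucket: ['s3:GetObject', 's3:PutObject'],
  sqsQueue: ['sqs:SendMessage', 'sqs:ReceiveMessage', 'sqs:DeleteMessage']
};

function buildPolicyDocument (resources) {
  return {
    Version: '2012-10-17',
    // One Allow statement per declared resource, scoped to that resource's
    // ARN and the minimal action set for its type.
    Statement: resources.map(({ type, arn }) => ({
      Effect: 'Allow',
      Action: ACTIONS_BY_RESOURCE_TYPE[type],
      Resource: arn
    }))
  };
}

module.exports = buildPolicyDocument;
```

A release tool would run something like this at deploy time for every function in the stack, so developers never hand-author policies and every environment gets consistently scoped permissions.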

In this way, your product engineering team can focus on developing code and can have permissions to provision their services into development environments which are automatically sandboxed and isolated. Once development has been satisfied, this software can promote the new service into a new sandboxed environment for integration testing and QA. Your CI/CD pipeline can continue to promote the service all the way to production, using appropriate roles and permissions at each step, thereby ensuring both IAM policy compliance and high-velocity through automation.

This automatic creation and assignment of IAM policies reduces the risk for human error, ensures that resources are appropriately locked down in all stages of release, and encapsulates DevOps best practices for both high velocity and consistent control.

If you’re still manually creating and assigning IAM policies in your serverless development cycle, I encourage you to consider the advantages of modernizing this workflow with specialized serverless operations software.

Improved Git Management

Anna Yovandich | January 25, 2018

This week we’re releasing a Git integration feature that introduces a significant improvement to editing and remote collaboration. Our goal was to mitigate conflicts (or “hosing”) that occur when changes are committed to the same stack branch from multiple workspaces and to enable harmonious collaboration.

For instance, one person may switch from editing functions in a code editor and push changes through the command line, then switch to the Stackery web app to make changes and push them from the UI. It’s also possible that multiple instances of the web app are open at once (in separate windows), causing changes in one instance to be lost by a push from another. Now, anytime changes are pushed, the UI will have awareness that a refresh or new branch is needed.

On a team, different members might be editing from the same branch without realizing their changes are at risk of being lost or trampling others. Instead of introducing conflicts or encountering rejected updates, changes can be isolated on a new branch without fear of collision.

Stacks created from this point forward will be provisioned with a webhook that enables detection of remote changes. When a remote change triggers the webhook, an IoT socket channel sends a message to the UI. The message will appear with instructions to refresh the stack (reset to remote HEAD) or seamlessly branch the current changes.

In addition, the Stack Editor UI now provides the option to create a new branch by selecting “Create Branch…” from the list of stack branches. This is especially helpful when someone makes changes before realizing they’ve been working on the wrong branch, and wants to easily continue editing without switching to the command line. Creating a new branch from the UI will preserve current changes and move them to a new branch. It is equivalent to git checkout -b.

Our primary goal was not to replicate a comprehensive Git workflow but to provide safeguards when conflicting changes arise and peace-of-mind to collaborate comfortably and without hesitation.

Rate Limiting Serverless Apps - Two Patterns

Sam Goldstein | November 21, 2017

Diagram for an automatic brake patented by Luther Adams in 1873.

Many applications require rate-limiting and concurrency-control mechanisms and this is especially true when building serverless applications using FaaS components such as AWS Lambda. The scalability of serverless applications can lead them to quickly overwhelm other systems they interact with. For example, a serverless application that fetches data from an external API may overload that system or hit API usage limits enforced by the API provider. Steps must be taken to control the rate at which requests are made. AWS Lambda doesn’t provide any built-in mechanism for rate-limiting Lambda functions, so it’s necessary for engineers to design and implement these controls within their serverless application stack.

In this post I’ll discuss two patterns I’ve found particularly useful when building applications with rate-limiting requirements in a serverless architecture. I’ll walk through how each pattern can be used to control the load serverless applications generate against external resources, and show how to combine them in an example application to limit the rate at which it performs work.

Pattern 1: The Queue Filler

The Queue Filler moves work items into a worker queue at a configurable rate. For example, a serverless application that fetches data through an external HTTPS API may be limited to making 600 requests per minute, but needs to make a total of 6000 requests to access all necessary data. This can be achieved by triggering a Queue Filler function each minute which moves 600 items from a list of unfetched API URLs into a queue for Worker functions. By controlling the rate at which items are placed into the Worker Queue, we limit the rate at which our application makes requests against the external API. There’s a wide variety of technologies, such as Kinesis streams, Redis, SQS, or DynamoDB, that can be used as the queue backend.

Pattern 2: Spawn N Workers

The Spawn N Workers pattern is used to control how many Worker functions are triggered each minute. If, for example, the Worker function makes 1 request against an external API, we can invoke 600 Worker functions per minute to achieve a throughput of 600 requests per minute. This pattern is also useful for controlling the concurrency of Workers, which may have a performance impact on shared resources such as databases.

Example Rate Limited Application

This diagram shows an example serverless architecture that implements the Queue Filler and Spawn N Workers patterns to control the load generated by Worker functions. The architecture consists of 2 Timers, 3 Functions, and a Redis ElastiCache cluster which is used to store rate limit state. Each minute a timer triggers the Fill Worker Queue function, which implements the Queue Filler pattern by moving items from one Redis Set to another. Another timer triggers the Spawn N Workers function, which triggers Worker function invocations based on a config value. This allows us to control the concurrency of Worker requests and provides a convenient operational safety valve: we can turn the workers off by setting the spawn count to 0.

Fill Worker Queue Function

This function gets invoked once per minute and moves 600 queued items from the unqueuedItems Redis Set into the workerQueue Set. In some cases it’s desirable to fetch the numberToQueue value from Redis (or another data store) so it can be configured dynamically at runtime.

const stackery = require('stackery');
const Redis = require('ioredis');
// Setup a new redis client and connect.
const redis = new Redis(process.env['REDIS_PORT'], process.env['REDIS_DOMAIN']);

const unqueuedItemsKey = 'queue:unqueuedItems';
const queuedItemsKey = 'queue:workerQueue';

// How many items should be queued for Workers each minute.
// Usually it's better to fetch this number from Redis so you can
// control rate limits dynamically at runtime.
const numberToQueue = 600;

module.exports = function handler (message) {
  // Move 600 items from the unqueued set to the worker queue.
  return redis.spop(unqueuedItemsKey, numberToQueue).then((items) => {
    return redis.sadd(queuedItemsKey, items);
  }).then(() => {
    console.log(`Successfully queued ${numberToQueue} items`);
  });
};

Spawn N Workers Function

This function gets invoked once per minute. It fetches a configuration value from Redis, which controls how many Worker functions it invokes. By storing this value in Redis we’re able to configure the Worker spawn rate dynamically, without redeploying. This function uses the stackery.output API call to trigger the Worker functions. The {waitFor: 'TRANSMISSION'} option means this function will exit after all Workers have been successfully triggered, rather than waiting until they’ve completed their work and returned.

const stackery = require('stackery');
const Redis = require('ioredis');
// Setup a new redis client and connect.
const redis = new Redis(process.env['REDIS_PORT'], process.env['REDIS_DOMAIN']);

// Redis key that stores how many worker functions should be spawned each
// minute.
const numberOfChildrenRedisKey = 'config:Concurrency';

module.exports = function handler (message) {
  // Fetch the configuration value of how many worker functions to spawn.
  return redis.get(numberOfChildrenRedisKey).then((numberOfChildren) => {
    console.log(`Triggering ${numberOfChildren} functions.`);

    const promises = [];
    for (let i = 0; i < numberOfChildren; i++) {
      // Here we use stackery's output function to trigger downstream worker
      // functions.
      promises.push(stackery.output({}, { waitFor: 'TRANSMISSION' }));
    }
    return Promise.all(promises);
  }).then(() => {
    console.log('Triggered successfully.');
  });
};

Worker Function

Worker functions pull a work item off the worker queue and process it. It’s necessary to use an atomic locking mechanism to avoid multiple workers locking the same work item. In this example the Redis SPOP command is used for this purpose. It removes and returns a random item from the queue, or null if the queue is empty. If you want to track Worker failures for retries or analysis, you can use Redis SMOVE to move locked items to a locked Set and remove them on success.

const Redis = require('ioredis');
const redis = new Redis(process.env['REDIS_PORT'], process.env['REDIS_DOMAIN']);
const queuedItemsKey = 'queue:workerQueue';

module.exports = function handler (message) {
  // Use SPOP to atomically pull an item off the worker queue.
  return redis.spop(queuedItemsKey).then((item) => {
    // If we don't get an item, fail.
    if (!item) {
      throw new Error("Couldn't acquire lock.");
    }
    console.log(`got lock for ${item}`);
    return item;
  }).then((item) => {
    // Now that we've pulled an item from the worker queue we do our work.
    return doWork(item);
  });
};
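The SMOVE-based variant mentioned above could look like the following sketch. SMOVE atomically moves a member between sets and returns 1 only for the caller that actually moved it, so two workers can never lock the same item. The Redis client is injected so the locking logic stays testable; the key names and doWork callback are illustrative placeholders, not part of the application above.

```javascript
// Sketch of the SMOVE-based locking variant: atomically claim an item by
// moving it into a "locked" set, and only clear it once the work succeeds.
// On failure the item stays in the locked set for retry or analysis.
const workerQueueKey = 'queue:workerQueue';
const lockedItemsKey = 'queue:lockedItems';

function processNextItem (redis, doWork) {
  // Pick a candidate item without removing it.
  return redis.srandmember(workerQueueKey).then((item) => {
    if (!item) {
      throw new Error('Queue is empty.');
    }
    // SMOVE is atomic: it returns 1 only for the single worker that moves
    // the item, so concurrent workers cannot lock the same item.
    return redis.smove(workerQueueKey, lockedItemsKey, item).then((moved) => {
      if (!moved) {
        throw new Error("Couldn't acquire lock.");
      }
      return doWork(item)
        // On success, remove the item from the locked set.
        .then(() => redis.srem(lockedItemsKey, item))
        .then(() => item);
    });
  });
}

module.exports = processNextItem;
```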

Building Ergonomic Controls

As you incorporate rate-limiting mechanisms into your application it’s important to consider how you expose the rate limit configuration values to yourself and other engineers on your team. These configuration values can provide a powerful operational tool, giving you the ability to scale your serverless application up and down on demand. Make sure you think through the ergonomics of your control points in various operational scenarios.

I’ve found that keeping configuration in a datastore which your application accesses is ideal. This allows you to dynamically configure the system, without redeploying, which is useful in a wide variety of situations. It provides DevOps engineers with the ability to quickly turn off functionality if problems are discovered or to gradually scale up load to avoid overwhelming shared resources. In advanced cases, teams can configure an application to dynamically respond to external indicators, for example decreasing load if an external API slows down or becomes unavailable. Patterns like Queue Filler and Spawn N Workers are useful when determining how to build these core operational capabilities into serverless applications.

Flexible Deployment Workflow

Anna Yovandich | November 02, 2017

We’re excited to release an improved Dashboard workflow that provides more flexible editing and deployment capabilities. It’s crucial that developers have the ability to deploy one branch’s changes to one or more environments (e.g. feature-branch to development and testing) or, conversely, to share common environment configuration among many branches.

When a branch is intrinsically tied to an environment, developers are forced to merge feature branch changes into an environment branch, or to instantiate a new stack, in order to deploy. Beyond creating a brittle workflow, this makes it increasingly unwieldy to manage and track multiple features - creating conflicts in the development branch that are tough to resolve, and features that are complicated to isolate and test. We aimed to solve this.

By removing the hard link between branch and environment, developers are able to deploy a branch’s changeset to any environment. Along with decoupling the branch/environment relationship, Stackery’s Dashboard Sidebar functionality is now separated into two routed tabs, Edit and Deployments, for streamlined editing and decluttered deployment management.

The Edit tab defaults to showing the current stack - the latest commit of the selected branch - in the stack workspace. Choosing a different branch will update the stack nodes with that changeset. Selecting an environment won’t affect the workspace, but will determine which environment to use for deploying the selected branch’s changeset. Once a deployment is prepared, clicking ‘Deploy’ will open CloudFormation where ‘Execute’ can be triggered. When the deployment is executed, the prepared deployment will show feedback as the deployment proceeds. When the deployment completes, it becomes viewable in the Deployments tab as ‘Current Deployment’.

The Deployments tab displays current and previous deployments for the selected environment, showing the current deployment by default. Choosing a different environment will update the Sidebar with the deployments specific to it - which is useful for managing multiple deployments across environments. Clicking a deployment will update the nodes (in a read-only state) - showing a visual snapshot of the stack, event log, Git SHA, and timestamp for the deployment. Clicking individual nodes in this state will provide settings properties, links to logs, and monitoring metrics.

We have more iterations in store for the Sidebar workspace - aiming to improve editing capability and deployment management. In the meantime, we hope your Stackery experience improves with the separation of branches and environments in the new and improved Sidebar.

Why You Should Use API Gateway Proxy Integration With Lambda

Chase Douglas | October 18, 2017

Ben Kehoe recently wrote a post about AWS API Gateway to Lambda integration: How you should — and should not — use API Gateway proxy integration with Lambda. In his post, Ben gave a few reasons why he believes using API Gateway Proxy Integration is an anti-pattern.

Ben does a great job summarizing how the integration works. He writes:

The pattern that I am recommending against is the “API Gateway proxy integration” as shown in the API Gateway documentation here. It involves three separate features:

  • Greedy path variables: route multiple paths to a single integration
  • The ANY method: route any HTTP method to the integration
  • The Lambda proxy integration itself: instead of defining a transform from the web request to the Lambda input, and from the Lambda output to the web response, use a standard way of passing the request and response to and from the Lambda.

When these three features are put together, it can turn API Gateway into a passthrough, letting you use your favorite web framework in a single Lambda to do all your routing and processing.

Ben is a leader among serverless users and has great insights on the best approaches to building serverless apps. But I’m not sure I agree with the notion that API Gateway Lambda Proxy Integration is an anti-pattern. First, let’s look at his concerns about using the integration, then we’ll look at its benefits.

What’s “Wrong” With API Gateway Lambda Proxy Integration?

Ben lays out the following issues with API Gateway Proxy Integration:

  • It bloats your Lambda with web logic
  • It vastly increases your attack surface
  • Your API is less self-documenting

Let’s look at each of these concerns.

It Bloats Your Lambda With Web Logic

Your Lambda gets bloated with all the code for multiple logical paths.


You’re paying for your Lambda to run logic that API Gateway will do for free.

Both are true, but only to a point.

Putting many code paths into one Lambda function may seem like an anti-pattern. It’s not common in serverless examples. But it’s not necessarily “bad”. Ben’s point here is that separating code paths into different Lambda functions can provide for separation of concerns. Separation of concerns stresses modularization to achieve units of code that are easily understood as independent pieces of functionality. But modern programming languages have advanced facilities for providing separation of concerns. For example, Node.js has its module system, and Java and C# have packages. It is easy to architect, develop, and test individual code paths, whether or not they share the same physical compute unit (i.e. Lambda Function).

As for paying for the run time for logical code paths, there is some amount of efficiency to be gained by offloading the logic to API Gateway. However, the cost of an HTTP framework layer can be fairly minimal, especially at high loads where Lambda Function runtimes are cached. When using JITted languages (e.g. Node.js, Java, or C#), the cost can be broken down into cold start and general execution costs. Execution costs are likely to be on the order of single digit milliseconds for a typical routing layer. Cold start costs are higher because code must be compiled before its first execution. But there are ways to mitigate the overhead of cold starts, and as we’ve seen with Auth0 Webtasks, cold starts can be eliminated in theory and likely will become less of an issue over time as AWS and other providers improve their FaaS offerings. In addition, one of the core benefits of serverless is it allows more focus on business logic instead of figuring out how to build a scalable system. It’s reasonable for serverless engineers to accept some drawbacks (e.g. cold starts and slightly higher average latency) in order to achieve greater development velocity and faster time to market.

It Vastly Increases Your Attack Surface

You’ve increased your attack surface by allowing more requests through API Gateway into your own code and by relying on additional 3rd party libraries — whose security you are responsible for.

There are two issues Ben is pointing out. The first is that API Gateway can block improper requests without invoking the backing Lambda functions. API Gateway can save Lambda invocation costs this way and can also offload request validation from your Lambda function. However, cost savings are dampened by the fact that improper requests are most often due to a bug in the requesting client, which can usually be fixed quickly either in the client or in the API. And while offloading request validation to API Gateway is nice, you have to do it in an API Gateway-specific fashion. Most HTTP frameworks have validation mechanisms built-in, many of which have more flexibility than API Gateway.
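As an illustration of framework-level validation, here is a minimal Express-style middleware sketch. Middleware in Express is just a (req, res, next) function, so this validator runs in any connect-compatible framework without extra dependencies; the field list and error shape are hypothetical:

```javascript
// Sketch of request validation done at the framework layer rather than in
// API Gateway. Rejects the request before any handler logic runs if required
// body fields are missing. Field names and error format are illustrative.
function requireFields (fields) {
  return function validate (req, res, next) {
    const body = req.body || {};
    const missing = fields.filter((field) => body[field] === undefined);
    if (missing.length > 0) {
      // Short-circuit with a 400 before the route handler is reached.
      res.statusCode = 400;
      return res.end(JSON.stringify({ error: `Missing fields: ${missing.join(', ')}` }));
    }
    next();
  };
}

module.exports = requireFields;
```

In Express this would be mounted per-route, e.g. app.post('/items', requireFields(['name', 'price']), createItem), keeping validation logic portable across frameworks and out of API Gateway-specific configuration.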

The second issue Ben points out is the security of third-party libraries. This is a serious issue. You are only as strong as your weakest dependency.

However, your weakest dependency is usually the code that is least reviewed and least used across projects. Usually this means your own code, then uncommon modules your code uses. In contrast, common HTTP frameworks are some of the most highly consumed and analyzed pieces of code. As an example, the Express framework is the 5th-most depended-upon Node.js package. While Amazon employs super smart security engineers to analyze their code, I’ll wager that virtually no one outside of Amazon has analyzed their API Gateway code. I don’t mean to suggest that Express is more or less secure than API Gateway, rather that they are both fairly secure solutions via different code analysis methods.

To Ben’s point, if you are deciding whether to use API Gateway’s routing mechanism or rolling your own HTTP routing layer, you are probably better off using API Gateway.

Your API Is Less Self-Documenting

You’re missing out on the API documentation features that API Gateway provides [if you use Lambda Proxy Integration].

API Gateway does have a commendable amount of functionality for API documentation. They make it possible to import and export Swagger API definitions. But they’re not the only game in town. For example, if you choose to write your Lambda integration using one of the many common Node.js HTTP frameworks, then you can also use the swagger module to do the same thing.

Benefits Of API Gateway Lambda Proxy Integration

There are many benefits of consolidating code into fewer functions. I’ve written about these benefits before in my Serverless Function Architecture Principles post. At a high level, there are performance benefits of consolidating functions (fewer cold starts), and there are development benefits when it comes to integration testing and managing systemic complexity. Using one function with an HTTP framework facilitates the goal of functional consolidation.

But let’s focus on the key power of serverless technologies, as noted above: they enable engineers to focus on the business logic instead of scalability logic. Most web app engineers are familiar with HTTP frameworks they have already used in the past. In this sense, API Gateway is another framework that can be learned and used effectively. But is it objectively better than all the pre-existing frameworks? Aside from splitting hairs on single-digit milliseconds of latency difference, there are many reasons why one framework may be better than another for a given project. This is the very reason we have so many different frameworks already!

In short summary, use a common HTTP framework that best fits your project needs, which may or may not be API Gateway’s built-in routing mechanism.

Serverless Function Architecture Principles

Chase Douglas | October 13, 2017

Serverless services are often called “Functions-as-a-Service”, or FaaS. The use of the term “function” arose because we needed a name for a compute unit that was smaller than a “server” or “container”. But how should you break your implementation down when building a serverless architecture? Unfortunately, thinking in terms of “functions” tends to create problems.

Let’s start by talking about what serverless “functions” actually are.

What is a serverless “function”?

A serverless “function” is simply code that runs in response to an event. That’s it.

The problem with calling these constructs “functions” comes when we start to design a service. Software engineers are trained to break complex tasks down into a series of small, testable components of code we also call “functions”. Should we apply the same heuristics to serverless “functions” as we do source code “functions”?

Applying source code “function” architecture is the approach many new serverless users take when building their services. I was reminded of this recently when someone asked me how to implement a service that did a few things in sequential order. Their thinking was to architect the service like they would a program by encapsulating each step of functionality in a separate serverless “function”. While conceptually it makes sense to break down the steps into independent units, in practice this is an anti-pattern.

Many Functions Leads To Many Problems

Let’s first tip-toe around the issue of complexity and separation of concerns when talking about how to architect a serverless service. We’ll come back to address these issues. Instead, let’s focus for now on concerns that can be objectively measured.

Imagine you have a pipeline of serverless “functions” to handle a web request. In its simplest form, this may look like an API endpoint that passes a request onto function A, which then proxies the request onto one of two different functions, B and C, depending on some criteria. When either function responds, the response is proxied back through function A to be transmitted back to the client through the API gateway.

Function Pipeline

There are two objective cost increases associated with this architecture. First, for every request you are running twice as many functions instead of one, because function A will need to sit idle and wait until either function B or C handles the request and sends a response back. This doubles the compute cost compared to having one function do all the work. The second cost is latency. There is latency when invoking a function, which is even worse when the invocation causes a “cold start”. Calling two functions doubles this latency.

Most people do not attempt to build serverless services using the above architecture, but it’s not unheard of. Let’s take a closer look at a more prevalent paradigm: decomposing functionality into independent functions.

Independent Functions

Let’s say you have an API service with a mapping of endpoints to independent serverless “functions”. Arguments can be made on whether decomposing endpoint functionality helps or hinders abstractions and composition of code. However, at an objective level we have added a cost to the system: more frequent “cold starts”. Cold starts occur when a function runtime isn’t cached and must be initialized. If all of an API service’s endpoints are handled by one “function”, then that “function” can be initialized, cached, and reused no matter which endpoint of the API is invoked. In contrast, when using independent functions each one will incur its own cold start penalty.

Resolving The Functions-as-a-Service Impedance Mismatch

Impedance mismatch is a term borrowed from an analogous concept in electrical engineering. It stands for a set of conceptual and technical difficulties that are often encountered when applying one set of concepts and implementations to another, like mapping a database schema to an object-oriented API.

The architectural issues outlined above follow from attempting to apply a “function” programming model to what is actually an event handling service. FaaS services are really event handling services that the term “function” has been grafted onto. Instead of trying to map our ingrained understanding of the term “function” onto event handling services, let’s instead architect our code around building a unit of functionality.

Let’s imagine the following service built with many black-box “functions” hooked together:

Decomposed Architecture

Let’s now transform it into a simpler architecture:

Consolidated Architecture

Now we can dive into a discussion of which approach is better. The better approach will facilitate the following:

  • Lower costs (both in money and in performance)
  • Ease of integration testing
  • Ease of other people understanding the architecture

The first two points are self-evident, but the last one is particularly important for teams of developers who must share a common understanding of the service architecture.

We discussed above how reducing the number of “functions” reduces the hard costs of serverless computing. Let’s focus now on the other two concerns.

Integration testing is difficult in general because you are attempting to test one unit of functionality you do control with other units you don’t control. While it’s tempting to believe you do control all the serverless “functions” in your service, you don’t really control the boundaries between them. By consolidating functionality into fewer “functions” you minimize the integration boundaries, which makes it easier to build more meaningful integration tests that cover larger amounts of functionality.

Lastly, when it comes to understanding service architectures it is almost always easier when there are fewer parts. This is why most services start out as monoliths. Obviously, there are times when services do need to be decomposed, but this generally occurs when one service has become too large for a single team of engineers to manage. At that point the decomposition occurs because the amount of code that makes up the service is orders of magnitude larger than the amount of code we are concerned with when we consider serverless “function” composition. But this does lead us to a question about how to architect a larger amount of code so it is still easy to comprehend.

Separation Of Concerns

Separation of concerns is a design principle that helps guide effective code structure. It stresses modularization to achieve units of code that are easily understood as independent pieces of functionality. If we consolidate functionality from modular serverless “functions” into more monolithic “functions”, aren’t we breaking this design principle?

It’s true that on one level consolidation of functionality means there may be less separation of concerns at the infrastructure level. However, that does not mean we have to forsake the principle altogether. One of the tenets of the Node.js ecosystem is that all functionality should be packaged into modules. Serverless functions don’t need to be written in Node.js, but the Node.js ecosystem provides a good example of the power of modularity in source code implementations. We can use this source code modularity to provide separation of concerns in our service.

Let’s look at one example where separation of concerns can be moved from infrastructure design to the source code. Let’s say we want to add A/B testing functionality to an API, where randomly some requests are executed with one of two different implementations. We want to have a separation of concerns where one part of the service handles the A/B test independently from each of the two implementations. This could be achieved with the following architecture:

A/B Testing Architecture

However, as we saw above this would add significant cost to the solution in terms of compute resources and latency. Instead, we could achieve the same result by putting each of the two code paths into separate modules and invoking them from within a single function. The two code paths could even be just different revisions of the same code, enabling a slow rollout of new functionality. While the mechanism differs between programming languages, all modern runtimes provide a means of referencing specific versions of modules. For example, a Node.js module may be kept in GitHub and referenced at two different versions by adding the following to package.json:

  "dependencies": {
    "a": "org/repo#v1.2",
    "b": "org/repo#v1.3"
  }

Then the top-level source code for the function would be something like:

  module.exports = function handler(message) {
    if (Math.random() < 0.9) {
      return require('a')(message);
    } else {
      return require('b')(message);
    }
  };

This approach is an example of how we can use source code architecture instead of infrastructure architecture to implement separation of concerns. Our top level function source is concerned only with the A/B test, and we leave the implementation up to the different module implementations.

Serverless Function Architecture Principles

The concerns above lead to a few key serverless “function” architecture principles:

  • Build each “function” so it performs as much work as possible independently of other services
  • Evaluate objective, hard costs of choices above subjective costs related to “function” composition
  • Investigate functional composition via code implementation mechanisms when faced with separation of concerns issues

These principles will guide you to more efficient and manageable serverless implementations.

Function To Table Integration Guide

Chase Douglas | September 25, 2017


Recently we needed to build a simple stack that takes form submissions and stores them in a database for later use. This seemed like a perfect time to create a new guide showing how easy this is with Stackery! In the guide you’ll learn how to build a Rest Api endpoint that sends data to a Function, which then stores the data in a Table.

You can find the new guide here.

As usual, using Stackery to provision with serverless techniques and horizontally scalable resources makes building production-ready services a snap!

Using Virtual Networks To Secure Your Resources

Chase Douglas | July 21, 2017


In this post, I’m going to highlight one of Stackery’s more interesting nodes. The Virtual Network node provides a powerful mechanism for securely deploying resources inside a private network.

Why Do We Need Private Networks?

As an example of what a private network enables, let’s take a look at how to secure a database. When you connect to most databases, you provide a username and password to gain access. But some databases can be set up without requiring any credentials at all. In one recent incident, hundreds of millions of passwords were leaked via an unprotected database accessible on the internet. It is unfortunately all too easy to misconfigure database security settings when initially setting up a database or when later updating or changing settings.

But let’s say you have set up a database with proper credential-based access controls. This sounds like a good amount of security by itself. If you don’t have the proper credentials, you won’t be able to access the database. What could go wrong?

Unfortunately, relying only on credentials for database security presents many problems. If your database is accessible on the internet, attackers will find ways to break into it or cause other nasty problems. Many databases do not have effective countermeasures against brute force password attacks, and you can easily find articles, like this one, demonstrating how to use common tools to mount such attacks against databases.

But even if you use a strong password with a database that properly hashes and salts stored passwords to blunt brute force attacks, you can still be overwhelmed by a denial of service attack, where many malicious clients attempt to connect to your database simultaneously and exhaust its available resources.

The solution to these problems is simple: put your databases inside private networks that only your services can access.

Stackery Virtual Network Node

Stackery’s Virtual Network node makes it easy to place your databases and services inside private networks. When you add a Virtual Network node, Stackery creates a Virtual Network with private and public subnets. Resources placed in public subnets can be accessed from the internet, while resources placed in private subnets can only be accessed by other resources within the same Virtual Network.

As an example, when you place a Database node in the Virtual Network, the Database is provisioned inside a private subnet. The same is true of Docker Cluster nodes. But when you place a Load Balancer in the Virtual Network, the Load Balancer is provisioned inside a public subnet. This allows internet traffic to reach the Load Balancer, which then routes traffic to Docker Services running in private subnets of the same Virtual Network. For serverless use cases, Function nodes can also be placed inside Virtual Network nodes to ensure they execute within a private subnet of the Virtual Network.
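To give a rough sense of what public and private subnets look like at the infrastructure-template level, here is a simplified, hypothetical CloudFormation fragment. This is not Stackery's actual generated template, which contains many more resources (route tables, gateways, and so on); the logical names and CIDR ranges are illustrative only:

```json
{
  "Resources": {
    "VirtualNetwork": {
      "Type": "AWS::EC2::VPC",
      "Properties": { "CidrBlock": "10.0.0.0/16" }
    },
    "PublicSubnet": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": { "Ref": "VirtualNetwork" },
        "CidrBlock": "10.0.0.0/24",
        "MapPublicIpOnLaunch": true
      }
    },
    "PrivateSubnet": {
      "Type": "AWS::EC2::Subnet",
      "Properties": {
        "VpcId": { "Ref": "VirtualNetwork" },
        "CidrBlock": "10.0.1.0/24",
        "MapPublicIpOnLaunch": false
      }
    }
  }
}
```

Note that a public subnet is only reachable from the internet once it is routed through an internet gateway, and private subnets typically reach outbound services through a NAT gateway; wiring up those additional resources correctly is part of what the Virtual Network node handles automatically.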

Stackery Best Practices

The Virtual Network node is another example of how Stackery helps engineers go from concept to implementation using industry best practices. Under the covers, a Virtual Network node is implemented using over a dozen AWS resources. But the magic of Stackery ensures the Virtual Network and all the resources placed inside it are properly networked to provide the right level of security.

Ready to Get Started?

Contact one of our product experts to get started building amazing serverless applications quickly with Stackery.
