Stacks on Stacks

The Serverless Ecosystem Blog by Stackery.

Posts on Product Updates

Lambda Layers & Runtime API: More Modular, Flexible Functions

Sam Goldstein | November 29, 2018

Lambda layers and the Runtime API are two new features of AWS Lambda which open up fun possibilities for customizing the Lambda runtime and decrease duplication of code across Lambda functions. Layers let you package up a set of files and include them in multiple functions. The Runtime API provides an HTTP interface to the Lambda service’s function lifecycle events, which lets you be much more flexible about what you run in your Lambda.

Layers is aimed at a common pain point teams hit as the number of Lambdas in their application grows. Today, we see customers performing gymnastics in order to compile binaries or package reusable libraries inside functions. One downside of this approach is that it is difficult to ensure all functions have the latest version of the dependency, leading to inconsistencies across environments or over-complicated, error-prone packaging processes. For example, at Stackery we compile git and package it into some of our functions to enable integration with GitHub, GitLab, and CodeCommit. Prior to layers, upgrading that dependency meant each developer responsible for a function had to repackage those files in every related function. With layers it’s much easier to standardize those technical and human dependencies, and the combination of layers and the Runtime API enables a cleaner separation of concerns between business-logic function code and cross-cutting runtime concerns. In fact, in Stackery, adding a layer to a function is just a dropdown box. That feels like a little thing, but it opens up several interesting use cases:

1. Bring Your Own Runtime

AWS Lambda provides six different language runtimes (Python, Node, Java, C#, Go, and Ruby). Along with layers comes the ability to customize the specific files that are hooked into the Lambda runtime. This means you can (gasp!) run any language you want in AWS Lambda. We’ve been aware that there is no serverless “lock in” for some time now, but with these new capabilities you can fully customize the Lambda runtime.

To implement your own runtime you create a file called bootstrap in either a layer or directly in your function. It must have executable permissions (chmod +x).

Your bootstrap custom runtime implementation must perform these steps:

  1. Load the function handler using the Lambda handler configuration. This is passed to bootstrap through the _HANDLER environment variable.

  2. Request the next event over http: curl "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next"

  3. Invoke the function handler and capture the result

  4. Send the response to the Lambda service over http:

curl -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$INVOCATION_ID/response" -d "$RESPONSE"
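
Putting the four steps together, a minimal bootstrap might look like the following sketch. It’s written in Python 3 here purely for readability (bootstrap can be any executable with a shebang, including a shell script built around the curl calls above); the endpoint paths and the Lambda-Runtime-Aws-Request-Id header come from the Runtime API, and error handling is omitted.

```python
#!/usr/bin/env python3
# Minimal custom runtime bootstrap sketch (illustrative; no error handling).
import importlib
import json
import os
import urllib.request

def load_handler(spec):
    # Step 1: _HANDLER has the form "module.function", e.g. "handler.main"
    module_name, func_name = spec.rsplit(".", 1)
    return getattr(importlib.import_module(module_name), func_name)

def main():
    api = os.environ["AWS_LAMBDA_RUNTIME_API"]
    handler = load_handler(os.environ["_HANDLER"])
    while True:
        # Step 2: request the next invocation event
        with urllib.request.urlopen(
            "http://%s/2018-06-01/runtime/invocation/next" % api
        ) as resp:
            invocation_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
            event = json.loads(resp.read())
        # Step 3: invoke the function handler and capture the result
        result = handler(event)
        # Step 4: send the response back to the Lambda service
        post = urllib.request.Request(
            "http://%s/2018-06-01/runtime/invocation/%s/response"
            % (api, invocation_id),
            data=json.dumps(result).encode("utf-8"),
            method="POST",
        )
        urllib.request.urlopen(post).close()

if __name__ == "__main__":
    main()
```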

It’s pretty much guaranteed that a bunch of new languages will be available to deploy through layers any minute now. At Stackery we’re debating whether a PHP or Haskell layer would be of greater benefit.

2. Shared Binaries and Libraries

Serverless apps often rely on reusable libraries and commands which the business-logic code calls into. For example, our engineering team runs git inside some of our functions, packaged alongside our node.js function code. Scientific libraries, shell scripts, and compiled binaries are a few other common examples. While it’s nice to be able to package any files along with our code, when these dependencies are used across many functions, need to be compiled, or are updated frequently, you end up with increasing function build complexity and team distractions.

With layers you can extract these shared dependencies and register that package within the account. In Stackery’s function editor you’ll see a list of all the layers in your account and can apply them to that function. This simplifies the management and versioning of reusable libraries used by your functions.

The layers approach has the added benefit that it’s easier to keep dependencies in sync across all your functions and to upgrade them across your microservices. Layers reduce duplication in your function code, and shared libraries in layers are counted only once against AWS storage limits regardless of how many functions use the layer. Layers can also be made public, so it’s likely we’ll see open source communities and companies publish Lambda layers to make it easier for developers to run software in Lambda.
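
In a SAM template, that wiring looks roughly like the sketch below (the logical names, paths, and runtime are hypothetical):

```yaml
Resources:
  GitLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: git-binaries          # hypothetical layer packaging a compiled git
      ContentUri: layers/git/
      CompatibleRuntimes:
        - nodejs8.10

  SyncRepoFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      CodeUri: src/sync-repo/
      Layers:
        - !Ref GitLayer                # the layer's files appear under /opt at runtime
```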

Serverless Cross-Cutting Concerns

By now it should be clear that layers unlock some exciting possibilities. Let’s take a step back and note that this is one aspect of a broader set of good operational hygiene. Microservices have major benefits over monolithic architecture. The pieces of your system get simpler. They can be developed, deployed, and scaled independently. On the other hand, your system consists of many pieces, making it more challenging to keep the things that need to be consistent in sync. These cross-cutting concerns, such as security, quality, change management, error reporting, observability, configuration management, continuous delivery, and environment management (to name a few), are critical to success, but addressing them often feels at odds with a serverless team’s desire to focus on core business value and avoid undifferentiated infrastructure work.

Addressing cross-cutting concerns for engineering teams is something I’m passionate about, since throughout my career I’ve seen the huge impact (both positive and negative) they have on an engineering org’s ability to deliver. Stackery accelerates serverless teams by addressing the cross-cutting concerns that are inherent in serverless development. This drives technical consistency, increases engineering focus, and multiplies velocity. It’s the reason I’m excited to integrate Lambda layers into Stackery: now improving the consistency of your Lambda runtime environments is as easy as selecting the right layers from a dropdown. It’s the same reason we’re regularly adding new cross-cutting capabilities, such as Secrets Management, GraphQL API definition, and visual editing of existing serverless projects.

There’s a saying in software that if something hurts you should do it more often, and typically this applies to cross-cutting problems. Best practices such as automated testing, continuous integration, and continuous delivery all spring from this line of thought. Solving these “hard” cross-cutting problems is the key to unlocking high velocity engineering - moving with greater confidence towards your goals.

Serverless Secrets: The Three Things Teams Get Wrong

Sam Goldstein | November 07, 2018

Database passwords, account passwords, API keys, private keys, other confidential data… A modern cloud application with multiple microservices is filled with confidential data that needs to be separated and managed. In the process of researching how we would improve and automate secrets management for Stackery customers, I found that much of the advice online is bad. For example, there are quite a few popular tutorials which suggest storing passwords in environment variables or AWS Parameter Store. These are bad ideas which make your serverless apps less secure and introduce scalability problems.

Here are the top 3 bad ideas for handling serverless secrets:

1. Storing Secrets in Environment Variables

Using environment variables to pass configuration into your serverless functions is a common best practice for separating config from your source code. However, environment variables should never be used to pass secrets, such as passwords, API keys, credentials, and other confidential information.

Never store secrets in environment variables; the risk of accidental exposure is exceedingly high. For example:

  • Many app frameworks print all environment variables for debugging or error reporting.
  • Application crashes usually result in environment variables getting logged in plain text.
  • Environment variables are passed down to child processes and can be used in unintended ways.
  • There have been many malicious packages found in popular package repositories which intentionally send environment variables to attackers.

At Stackery we never put secrets in environment variables. Instead we fetch secrets from AWS Secrets Manager at runtime and store them in local variables while they’re in use. This makes it very difficult for secrets to be logged or otherwise exfiltrated from the runtime environment.
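
A minimal Python sketch of that pattern is below. The secret name is hypothetical, and the client is passed in as a parameter so the pattern is easy to test; in a real Lambda you’d create a boto3 Secrets Manager client once, outside the handler.

```python
import json

_cache = {}  # module scope: survives across warm invocations

def get_secret(client, secret_id):
    """Fetch a secret value at runtime, keeping it out of environment variables."""
    if secret_id not in _cache:
        resp = client.get_secret_value(SecretId=secret_id)
        _cache[secret_id] = json.loads(resp["SecretString"])
    return _cache[secret_id]

# In a real Lambda (sketch; "prod/db-credentials" is a hypothetical secret name):
#   import boto3
#   secrets = boto3.client("secretsmanager")
#   def handler(event, context):
#       password = get_secret(secrets, "prod/db-credentials")["password"]
#       ...
```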

2. Storing Secrets in the Wrong Places

If you’re dealing with secrets, they should always be encrypted at rest and in transit. By now we all know that keeping secrets in source code is a bad idea. So if secrets can’t live in git with your code, where should you keep them? There’s a lot of bad advice online suggesting AWS Systems Manager Parameter Store (aka SSM) is a good place to store your secrets. Like environment variables, Parameter Store is good for configuration but terrible for secrets.

AWS Systems Manager Parameter Store falls short as a secrets backend in a few key areas:

  1. Parameters aren’t generally encrypted at rest and are often displayed in the AWS Console UI. Encryption only occurs for entries using the recently added SecureString type.
  2. Parameter Store is free but heavily rate limited, so you can’t rely on fetching secrets at runtime during traffic spikes. To avoid throttling your Lambdas you end up passing Parameter Store values in through environment variables, which brings us to the next point:
  3. You should never store secrets in environment variables.

At Stackery we use AWS Secrets Manager which stores secrets securely with fine grained access policies, auto-scales to handle traffic spikes, and is straightforward to query at runtime.

3. Bad IAM Permissions

Each function in your application should only have access to the secrets it needs to do its work. However, it’s very common for teams to run configurations (often unintentionally) where every function is granted access by default to all secrets from all environments. These “/*” permissions mean a compromised function in a test environment can be used to fetch all production secrets from the secrets store. This is a bad idea for obvious reasons. Permissions should be tightly scoped by environment and usage, with functions defaulting to no secrets access.

At Stackery we automatically scope an IAM role per function and Fargate container task, which limits AWS Secrets Manager access to the environment the function is running in and the set of secrets required by that specific function.
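
In a SAM template, that kind of scoping looks roughly like this sketch (the environment prefix and secret naming convention are assumptions; adapt them to how your secrets are actually named):

```yaml
MyFunction:
  Type: AWS::Serverless::Function
  Properties:
    # ...
    Policies:
      - Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action: secretsmanager:GetSecretValue
            # Only this environment's secrets for this function, never "/*"
            Resource: !Sub arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:staging/my-function/*
```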

Managing Serverless Environment Secrets with Stackery

Our team has learned a lot about managing serverless secrets by running production serverless applications and working with many serverless teams and pioneers. We’ve integrated these best practices back into Stackery so serverless teams can easily layer secure secrets management onto their existing projects. If you’re curious to read more about how Stackery handles secrets, check out Environment Secrets in the Stackery Docs.

Product Update: Accelerating Existing Serverless Projects

Sam Goldstein | October 25, 2018

We love helping serverless teams accelerate and manage projects and environments regardless of whether they started with Stackery or not. We’ve recently made improvements to importing projects and code for AWS SAM, Serverless Framework, and Gitlab. Here are the details:

Extending Stackery Capabilities to Serverless Framework

We’ve extended Stackery’s Infrastructure as Code (IaC) visual editing and serverless environment management capabilities to Serverless Framework (serverless.yml) projects. You can now visualize serverless.yml stacks in Stackery’s visual editor and quickly configure advanced cloud resources such as VPCs, GraphQL/AppSync, and Kinesis Streams. Serverless.yml apps can be deployed into AWS accounts using Stackery’s environment management and deployment automation. This provides a unified experience for teams managing both serverless.yml and AWS SAM apps throughout the software development process.

CLI-Based AWS Account Management

The Stackery Role, which acts as an extension to your AWS account, can now be managed through the stackery aws setup, stackery aws unlink, stackery aws accounts, and stackery aws update-role CLI commands. Read more about the Stackery Role in our docs.

Private GitLab Integration

Stackery now integrates with private GitLab instances in addition to private GitHub Enterprise instances. If you’re interested in connecting Stackery to your private GitLab instance just contact us.

React Single Page App Tutorial

There’s a new guide which walks through the process of deploying and hosting a single-page React application using the serverless approach. Check out the React Single Page App Tutorial in our docs.

Stackery’s Quickstart Just Got Quicker—and More Useful

Anna Spysz | October 15, 2018

If you’ve been over to our documentation site lately, you may have noticed some changes. We’ve got a new look and some new tutorials, but the latest upgrade is our new Quickstart tutorial.

While the first version of our Quickstart just got you up and running with Stackery, version 2.0 also has you deploying a static HTML portfolio page to an API endpoint:

Oooh, fancy!

Once you’ve followed the tutorial and deployed your static site, you can customize the HTML with your own information and links to your projects. You can then follow our serverless contact form tutorial to give the contact form on your site functionality as well.

Want a preview? This YouTube video walks you through the entire Quickstart tutorial:

And be sure to visit our docs site regularly, as we have several new tutorials in the works. Stay tuned for a React application with a serverless backend - coming soon!

Deploy GraphQL APIs with Stackery

Sam Goldstein | October 03, 2018

It’s been a busy month in Stackery engineering. Here’s a quick recap of what’s new in the product this week.

You can now use Stackery to configure and provision AWS AppSync GraphQL APIs. AppSync is a serverless pay-per-invocation service similar to API Gateway, but for GraphQL! GraphQL resolvers can be connected to backend data sources like DynamoDB tables, Lambda functions, or HTTP proxies. You can read more about using Stackery with GraphQL in the Stackery docs.

Trigger Lambda Function on Deploy

Does your deployment process involve multiple commands that need to be run in a certain order? Stackery now provides the ability to mark any function as “Trigger on First Deploy” or “Trigger on Every Deploy”, which provides a clean mechanism to handle database migrations, ship single-page apps, and run custom deploy logic across all your environments. To make this work, Stackery sets up a CloudFormation Custom Resource in your project’s SAM template which is used to invoke the function when the stack is deployed. Read more in the Stackery Function Docs.
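
The underlying mechanism, in rough SAM/CloudFormation terms, looks like this sketch (logical and property names are hypothetical, and Stackery’s generated resource may differ; note that the invoked function must send a success or failure response back to CloudFormation, e.g. via the cfn-response module, or the deploy will hang):

```yaml
Resources:
  MigrateDatabase:
    Type: AWS::Serverless::Function
    Properties:
      Handler: migrate.handler       # must respond to CloudFormation when done
      Runtime: nodejs8.10
      CodeUri: src/migrate/

  MigrateOnDeploy:
    Type: Custom::DeployTrigger      # hypothetical custom resource type name
    Properties:
      ServiceToken: !GetAtt MigrateDatabase.Arn
      # For "Trigger on Every Deploy", change this value on each deploy so
      # CloudFormation sees an update; omit it to trigger on first deploy only.
      DeploymentTimestamp: "2018-10-25T00:00:00Z"
```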

Reference Existing Cloud Resources

Teams are often deploying serverless stacks into existing cloud infrastructure. What happens when your function needs to subscribe to an existing DynamoDB stream or be placed in an existing VPC? Stackery provides the ability to replace resources in a stack with a pointer to an already-provisioned resource. This can be specified per environment, which enables you to provision mock resources in dev/test environments but reference central infrastructure in production. Check out the “Use Existing” flag on resources like DynamoDB Tables or Virtual Networks.
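
In plain SAM/CloudFormation, that per-environment swap can be expressed with a parameter and a condition. A rough sketch (resource and parameter names are hypothetical; Stackery’s generated template may differ):

```yaml
Parameters:
  ExistingTableArn:
    Type: String
    Default: ""                      # empty means "provision a new table"

Conditions:
  CreateTable: !Equals [!Ref ExistingTableArn, ""]

Resources:
  ItemsTable:
    Type: AWS::DynamoDB::Table
    Condition: CreateTable           # only created when no existing table is given
    Properties:
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      ProvisionedThroughput:
        ReadCapacityUnits: 1
        WriteCapacityUnits: 1

  ReadItemsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      CodeUri: src/read-items/
      Environment:
        Variables:
          TABLE_ARN: !If [CreateTable, !GetAtt ItemsTable.Arn, !Ref ExistingTableArn]
```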

GitHub and GitLab bulk project import

No one wants to set up a bunch of AWS Serverless Application Model (SAM) projects with Stackery one by one, so we built a one-click importer which locates all your projects with a valid SAM template file (template.yaml) and sets them up to deploy and edit with Stackery. It works for both GitHub and GitLab, and you can find it on the Stackery Dashboard homepage.

GitLab + Stackery = Serverless CI/CD <3

Sam Goldstein | August 28, 2018

GitLab is a git hosting solution which features a built-in CI/CD pipeline that automates the delivery process. We’ve seen more and more serverless development teams asking how they can integrate their GitLab with Stackery. I am happy to announce that today Stackery features full support for GitLab source code hosting and serverless CI/CD deployments.

By linking your Stackery account with GitLab you can quickly develop and deploy serverless apps. Stackery helps generate AWS SAM YAML infrastructure-as-code templates and manage Lambda source code, integrating directly with GitLab’s source code hosting. However, the bigger payoff is taking full advantage of Stackery’s serverless deployment automation, which is intentionally simple to integrate into GitLab’s CI/CD release automation. Stackery’s CLI deployment tool is a cross-compiled Go binary with no external dependencies. It takes just one step to download and bundle it in your repo, and then it’s simple to invoke from your GitLab project’s .gitlab-ci.yml.

Here’s a basic example showing how to integrate Stackery into your .gitlab-ci.yml:

  stages:
    - test
    - build
    - deploy

  test:
    stage: test
    script: echo "Running tests"

  build:
    stage: build
    script: echo "Building the app"

  deploy:
    stage: deploy
    script:
      - stackery deploy --stack-name "myStack" --env-name "staging" --git-ref "$CI_COMMIT_SHA"
    environment:
      name: staging
    only:
      - master

By integrating Stackery and GitLab you can take advantage of a number of interesting features to take your serverless deployment automation to the next level. For example:

  • GitLab pipeline security can be used to provide automated production change control for serverless applications.
  • GitLab’s environments and deployments are straightforward to integrate with stackery deploy and can be used to orchestrate sophisticated CI/CD pipelines across multiple AWS accounts and environments.
  • Serverless Deploy from Chat is great. You know you’re doing it right when you’re deploying serverless SAM applications by chat. 💬🦊λ 🙌

We hope you enjoy this new GitLab integration.

Stackery is Now Running on SAM (Serverless Application Model) from AWS

Garrett Gillas | July 31, 2018

Amazon Web Services’ SAM is a developer-centric, cloud-native, open source framework for defining serverless applications faster and with better consistency. SAM provides a standard for defining the architecture of serverless projects. Stackery now supports this framework natively and provides the tools for maintaining best practices, streamlining workflows, and enforcing consistency.

Amazon continues to move the industry forward from static, monolithic technology to a much more distributed landscape. Because it’s an open source standard heavily supported by and aligned with AWS, SAM benefits from Amazon’s prominence in the industry and the ecosystem established around it. As part of Amazon’s toolbox, SAM comes with the resources and support of SAR, the Serverless Application Repository.

Velocity and Efficiency

So what changes are expected from Stackery now that it deploys to AWS using SAM? On the surface it might not look like much of anything has changed. Stackery customers still have full control and everything will continue to be stored and accessible in the same AWS account.

SAM is an open source, community-supported standard, heavily backed by AWS, the clear leader for hosting serverless apps. Opting for SAM means that you’re building your apps on the leading open source standard with a vibrant community to turn to when you need support, and it gives you access to all the tools and advantages of such an ecosystem, like SAM local development mode.

Stackery gives you the power to generate and manage SAM templates directly. Maintain full control while increasing consistency and efficiency by automating the most repetitive infrastructure configuration tasks. Just a few clicks in the Stackery Operations Console can generate new infrastructure-as-code configurations, and you can also manually edit existing SAM templates and alter them to fit your needs. Doing that any other way would require a significantly higher investment of time and work.

Consistency and Scalability

Move fast without the fear that you’ll make mistakes. Engineering time is better spent working on core problems than small, monotonous tasks but the nature of writing software means consistency and attention to detail are vital. Every organization wants to move fast — leadership and engineering teams can always agree on that much — and Stackery with SAM enables higher velocity without sacrificing consistency. By building with Stackery on SAM, developers can enjoy the benefits of Lambda’s scalability while consistently shipping stable code and minimizing the serverless learning curve.

At Stackery, we want people to be able to use their own tools and write their applications the way they want. For developers who want to move fast, SAM makes that possible without creating new abstractions that cause trouble in the long run. Move fast with stable infrastructure.

Stackery 2018 Product Updates

Sam Goldstein | May 16, 2018

Our product engineering team ships every single day.

That means Stackery’s product gets better every single day. Stackery engineers commit code into git which marches into our continuous delivery pipeline. We promote each version of our microservices, frontend, and CLI through multiple testing environments, rolling shiny new features into production or notifying the team of failures. This is the best way we know to develop modern software and explains why our team is able to ship so much functionality so rapidly.

However, because we’re constantly shipping, it means we need to pause periodically to take note of new features and improvements. In this post I’ll summarize some of the most significant features and changes from our product team over the past few months. For a more detailed list of changes, you can read and/or follow Stackery’s Release Notes.

Referenced Resources

One of the best things about microservice architecture is the degree to which you can encapsulate and reuse functionality. For example, if you need to check whether a user is authorized to perform a certain action, there’s no need to scatter permissioning code throughout your services. Put it all in one place (an AuthorizationService, perhaps), and call out to that in each service that needs to check permissions.

Stackery’s Referenced Resource nodes let you reference existing infrastructure resources (be they Lambda functions, S3 buckets, VPCs, you name it) by their AWS ARN and seamlessly integrate them into your other services.

One of the best uses I’ve seen for Referenced Resources is as the mechanism to implement centralized error reporting for serverless architectures. Write one central Lambda function that forwards exceptions into your primary error reporting and alerting tool. Configure every other stack to send error events to this central handler. Voila! Complete visibility into all serverless application errors.
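
A sketch of what that central handler might look like (the event shape and the forwarding step are assumptions for illustration; wire the normalized alert into whichever alerting tool you actually use):

```python
import json

def format_alert(error_event):
    """Normalize an error event reported by any stack into one alert payload."""
    return {
        "service": error_event.get("service", "unknown"),
        "environment": error_event.get("environment", "unknown"),
        "message": error_event.get("message", ""),
        "stack_trace": error_event.get("stackTrace", []),
    }

def handler(event, context):
    """Central error-reporting Lambda: other stacks invoke this with error events."""
    alert = format_alert(event)
    # Forward to your primary error reporting / alerting tool here (omitted).
    print(json.dumps(alert))
    return alert
```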

Support for Multiple AWS Accounts

Every company we work with uses multiple AWS accounts. Sometimes there’s one for production and one for everything else. In Stackery’s engineering team each engineer has multiple accounts for development and testing, as well as shared access to accounts for integration testing, staging, and production. Splitting your infrastructure across multiple accounts has major benefits. You can isolate permissions and account-wide limits, minimizing risk to critical accounts (e.g. production).

However, managing deployment of serverless architectures across multiple accounts is often a major PITA. This is why working across multiple accounts is now treated as a first-class concern across all of Stackery’s functionality. Multiple AWS accounts can be registered within a Stackery account using our CLI tool. Each Stackery environment is tied to an AWS account, which maps flexibly onto the vast majority of AWS account usage patterns.

Managing multiple AWS accounts is a key part of most organizations’ cloud security strategy. Stackery supports this by relying on your existing AWS IAM policies and roles when executing changes. If the individual executing the change doesn’t have permission in that AWS account, the action will fail. This makes it straightforward to set up workflows where engineers have full control to make changes in development and testing environments, but can only propose changes in the production account, which are then reviewed and executed by an authorized individual or automation tool.

You can read more in our knowledge base article about Working with multiple AWS accounts in Stackery.

CloudFormation Resource Nodes

Sometimes you need to do something a little different, which is why we built custom CloudFormation Resource nodes. You can use these to provision any AWS resource and take advantage of the full power and flexibility of CloudFormation, for situations when that’s required or desirable.

What’s been coolest about rolling this feature out is the variety of creative ways we’ve seen it used. For example, you can use CloudFormation Resource nodes to automatically configure and seed a database the first time you deploy to a new environment. You can also use them to automatically deploy an HTML front end to CloudFront each time you deploy your backend serverless app. The possibilities are endless.

AWS Resource Tagging

Resource tagging may not be the most glamorous of features, but it’s a critical part of most organizations’ strategies for tracking cost, compliance, and ownership across their infrastructure. Stackery now boasts first-class support for tagging provisioned resources. We also provide the ability to require specific tags prior to deployment, making it orders of magnitude easier to get everyone on the same page about how to correctly tag resources.

Always Shipping

Our goal is to always be shipping. We aim to push out valuable changes every day. Customers gain more control and visibility over their serverless applications each day, so they can ship faster and more frequently too. Look out for more great changes rolling out each day in the product, and watch this blog for regular announcements summarizing our progress. We also love to hear what you think, so if you have wants or needs managing your serverless infrastructure, don’t hesitate to let us know.

Get the Serverless Development Toolkit for Teams now and get started for free. Contact one of our product experts to get started building amazing serverless applications today.
