Stacks on Stacks

The Serverless Ecosystem Blog by Stackery.

Posts on Product Updates

Serverless Secrets: The Three Things Teams Get Wrong

Sam Goldstein | November 07, 2018

Database passwords, account passwords, API keys, private keys, other confidential data… A modern cloud application with multiple microservices is filled with confidential data that needs to be separated and managed. While researching how we would improve and automate secrets management for Stackery customers, I found that much of the advice online is bad. For example, quite a few popular tutorials suggest storing passwords in environment variables or AWS Parameter Store. These are bad ideas that make your serverless apps less secure and introduce scalability problems.

Here are the top 3 bad ideas for handling serverless secrets:

1. Storing Secrets in Environment Variables

Using environment variables to pass environment configuration information into your serverless functions is a common best practice for separating config from your source code. However, environment variables should never be used to pass secrets, such as passwords, API keys, credentials, and other confidential information.

Never store secrets in environment variables. The risk of accidental exposure is exceedingly high, which is why (just to be clear) you should never pass secrets to Lambda functions through environment variables. For example:

  • Many app frameworks print all environment variables for debugging or error reporting.
  • Application crashes usually result in environment variables getting logged in plain text.
  • Environment variables are passed down to child processes and can be used in unintended ways.
  • There have been many malicious packages found in popular package repositories which intentionally send environment variables to attackers.

At Stackery we never put secrets in environment variables. Instead we fetch secrets from AWS Secrets Manager at runtime and store them in local variables while they’re in use. This makes it very difficult for secrets to be logged or otherwise exfiltrated from the runtime environment.
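A minimal sketch of that pattern in Python. The boto3 client, the "prod/db-credentials" secret name, and the JSON secret shape are illustrative assumptions, not Stackery’s actual implementation:

```python
# Fetch secrets from AWS Secrets Manager at runtime and keep them only in
# process memory -- never in environment variables.
import json

_cache = {}  # cached per Lambda container, never written to the environment

def get_secret(secret_id, client=None):
    """Fetch a secret from AWS Secrets Manager and cache it for the
    lifetime of the Lambda container."""
    if secret_id not in _cache:
        if client is None:
            import boto3  # real AWS client; injectable here for testing
            client = boto3.client("secretsmanager")
        response = client.get_secret_value(SecretId=secret_id)
        _cache[secret_id] = json.loads(response["SecretString"])
    return _cache[secret_id]

def handler(event, context):
    # Keep the secret in a local variable only while it's in use.
    db = get_secret("prod/db-credentials")
    # ... connect using db["username"] / db["password"] ...
    return {"ok": True}
```

Caching in a module-level variable avoids re-fetching on every invocation while still keeping the secret out of logs, crash dumps of the environment, and child processes.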

2. Storing Secrets in the Wrong Places

If you’re dealing with secrets, they should always be encrypted at rest and in transit. By now we all know that keeping secrets in source code is a bad idea. So if secrets can’t live in git with your code, where should you keep them? There’s a lot of bad advice online suggesting AWS Systems Manager Parameter Store (aka SSM) is a good place to store your secrets. Like environment variables, Parameter Store is good for configuration but terrible for secrets.

AWS Systems Manager Parameter Store falls short as a secrets backend in a few key areas:

  1. Parameters aren’t generally encrypted at rest and are often displayed in the AWS Console UI. Encryption only occurs for entries using the recently added SecureString type.
  2. Parameter Store is free but heavily rate limited. It doesn’t accommodate traffic spikes, so you can’t rely on fetching secrets at runtime under load. To avoid throttling your Lambdas you end up passing Parameter Store values in through environment variables.
  3. You should never store secrets in environment variables.

At Stackery we use AWS Secrets Manager, which stores secrets securely with fine-grained access policies, auto-scales to handle traffic spikes, and is straightforward to query at runtime.

3. Bad IAM Permissions

Each function in your application should only have access to the secrets it needs to do its work. However it’s very common for teams to run configurations (often unintentionally) where every function is granted access by default to all secrets from all environments. These “/*” permissions mean a compromised function in a test environment can be used to fetch all production secrets from the secrets store. This is a bad idea for obvious reasons. Permissions should be tightly scoped by environment and usage, with functions defaulting to no secrets access.
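To illustrate, a tightly scoped policy might grant one function read access only to the secrets under its own environment and service prefix. The resource names and ARN below are hypothetical, shown in SAM-style YAML:

```yaml
# Hypothetical SAM fragment: this function can read only secrets whose
# names start with "production/checkout/". Everything else is denied
# by default.
CheckoutFunction:
  Type: AWS::Serverless::Function
  Properties:
    Policies:
      - Statement:
          - Effect: Allow
            Action: secretsmanager:GetSecretValue
            Resource: arn:aws:secretsmanager:us-west-2:123456789012:secret:production/checkout/*
```

A compromised test-environment function with a policy like this can’t touch production secrets, because its Resource pattern only matches its own environment’s prefix.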

At Stackery we automatically scope an IAM role per function and Fargate container task, limiting AWS Secrets Manager access to the environment the function is running in and to the set of secrets required by that specific function.

Managing Serverless Environment Secrets with Stackery

Our team has learned a lot about managing serverless secrets from running production serverless applications and working with many serverless teams and pioneers. We’ve integrated these best practices back into Stackery so serverless teams can easily layer secure secrets management onto their existing projects. If you’re curious to read more about how Stackery handles secrets, check out Environment Secrets in the Stackery Docs.

Product Update: Accelerating Existing Serverless Projects

Sam Goldstein | October 25, 2018

We love helping serverless teams accelerate and manage projects and environments regardless of whether they started with Stackery or not. We’ve recently made improvements to importing projects and code for AWS SAM, Serverless Framework, and Gitlab. Here are the details:

Extending Stackery Capabilities to Serverless Framework

We’ve extended Stackery’s Infrastructure as Code (IaC) visual editing and serverless environment management capabilities to Serverless Framework (serverless.yml) projects. You can now visualize serverless.yml stacks in Stackery’s visual editor and quickly configure advanced cloud resources such as VPCs, GraphQL/AppSync, and Kinesis Streams. Serverless.yml apps can be deployed into AWS accounts using Stackery’s environment management and deployment automation. This provides a unified experience for teams managing both serverless.yml and AWS SAM apps throughout the software development process.

CLI-Based AWS Account Management

The Stackery Role, which acts as an extension to your AWS account, can now be managed through the stackery aws setup, stackery aws unlink, stackery aws accounts, and stackery aws update-role commands. Read more about the Stackery Role in our docs.

Private GitLab Integration

Stackery now integrates with private GitLab instances in addition to private GitHub Enterprise instances. If you’re interested in connecting Stackery to your private GitLab instance just contact us.

React Single Page App Tutorial

There’s a new guide which walks through the process of deploying and hosting a single-page React application using the serverless approach. Check out the React Single Page App Tutorial in our docs.

Stackery’s Quickstart Just Got Quicker—and More Useful

Anna Spysz | October 15, 2018

If you’ve been over to our documentation site lately, you may have noticed some changes. We’ve got a new look and some new tutorials, but the latest upgrade is our new Quickstart tutorial.

While the first version of our Quickstart just got you up and running with Stackery, version 2.0 also has you deploying a static HTML portfolio page to an API endpoint:

Oooh, fancy!

Once you’ve followed the tutorial and deployed your static site, you can customize the HTML with your own information and links to your projects. You can then follow our serverless contact form tutorial to give the contact form on your site functionality as well.

Want a preview? This YouTube video walks you through the entire Quickstart tutorial:

And be sure to visit our docs site regularly, as we have several new tutorials in the works. Stay tuned for a React application with a serverless backend - coming soon!

Deploy GraphQL APIs with Stackery

Sam Goldstein | October 03, 2018

It’s been a busy month in Stackery engineering. Here’s a quick recap of what’s new in the product this week.

You can now use Stackery to configure and provision AWS AppSync GraphQL APIs. AppSync is a serverless pay-per-invocation service similar to API Gateway, but for GraphQL! GraphQL resolvers can be connected to backend data sources like DynamoDB tables, Lambda functions, or HTTP proxies. You can read more about using Stackery with GraphQL in the Stackery docs.

Trigger Lambda Function on Deploy

Does your deployment process involve multiple commands that need to be run in a certain order? Stackery now provides the ability to mark any function as “Trigger on First Deploy” or “Trigger on Every Deploy”, which provides a clean mechanism to handle database migrations, ship single-page apps, and run custom deploy logic across all your environments. To make this work, Stackery sets up a CloudFormation Custom Resource in your project’s SAM template which is used to invoke the function when the stack is deployed. Read more in the Stackery Function Docs.
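For context, a deploy-triggered function ultimately behaves like any CloudFormation custom resource handler: it does its work, then reports success or failure back to CloudFormation via the ResponseURL in the event. A rough Python sketch, where run_migrations is a hypothetical placeholder and the wiring is illustrative rather than Stackery’s actual implementation:

```python
# Sketch of a Lambda handler invoked by a CloudFormation Custom Resource
# when a stack is deployed.
import json
import urllib.request

def build_response(event, status, reason=""):
    """Build the JSON body CloudFormation expects back from a custom
    resource ("SUCCESS" or "FAILED")."""
    return json.dumps({
        "Status": status,
        "Reason": reason,
        "PhysicalResourceId": event.get("LogicalResourceId", "deploy-hook"),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
    })

def handler(event, context):
    try:
        if event["RequestType"] in ("Create", "Update"):
            run_migrations()  # hypothetical deploy-time work
        body = build_response(event, "SUCCESS")
    except Exception as exc:
        body = build_response(event, "FAILED", reason=str(exc))
    # CloudFormation blocks the deploy until this callback arrives.
    req = urllib.request.Request(event["ResponseURL"], data=body.encode(),
                                 method="PUT", headers={"Content-Type": ""})
    urllib.request.urlopen(req)
```

The callback is the important part: if the function never PUTs a response to the ResponseURL, the stack deploy hangs until CloudFormation times out.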

Reference Existing Cloud Resources

Teams are often deploying serverless stacks into existing cloud infrastructures. What happens when your function needs to subscribe to an existing DynamoDB stream or be placed in an existing VPC? Stackery provides the ability to replace resources in a stack with a pointer to an already provisioned resource. This can be specified per environment, which enables you to provision mock resources in dev/test environments but reference central infrastructure in production. Check out the “Use Existing” flag on resources like DynamoDB Tables or Virtual Networks.

GitHub and GitLab bulk project import

No one wants to set up a bunch of AWS Serverless Application Model (SAM) projects with Stackery one by one, so we built a one-click importer which locates all your projects with a valid SAM template file (template.yaml) and sets them up to deploy and edit with Stackery. It works for both GitHub and GitLab, and you can find it on the Stackery Dashboard homepage at app.stackery.io.

GitLab + Stackery = Serverless CI/CD <3

Sam Goldstein | August 28, 2018

GitLab is a git hosting solution which features a built-in CI/CD pipeline that automates the delivery process. We’ve seen more and more serverless development teams asking how they can integrate their GitLab with Stackery. I am happy to announce that today Stackery features full support for GitLab source code hosting and serverless CI/CD deployments.

By linking your Stackery account with GitLab you can quickly develop and deploy serverless apps. Stackery helps generate AWS SAM YAML infrastructure-as-code templates and manage Lambda source code, integrating directly with GitLab’s source code hosting. However, the bigger payoff is taking full advantage of Stackery’s serverless deployment automation, which is intentionally simple to integrate into GitLab’s CI/CD release automation. Stackery’s CLI deployment tool is a cross-compiled Go binary with no external dependencies. It takes just one step to download and bundle it in your repo, and then it’s simple to invoke from your GitLab project’s .gitlab-ci.yml.

Here’s a basic example showing how to integrate Stackery into your .gitlab-ci.yml:

stages:
  - test
  - build
  - deploy

test:
  stage: test
  script: echo "Running tests"

build:
  stage: build
  script: echo "Building the app"

deploy_staging:
  stage: deploy
  script:
    - stackery deploy --stack-name "myStack" --env-name "staging" --git-ref "$CI_COMMIT_SHA"
  environment:
    name: staging
    url: https://staging.example.com
  only:
  - master

By integrating Stackery and GitLab you can take advantage of a number of interesting features to take your serverless deployment automation to the next level. For example:

  • GitLab pipeline security can be used to provide automated production change control for serverless applications.
  • GitLab’s environments and deployments are straightforward to integrate with stackery deploy and can be used to orchestrate sophisticated CI/CD pipelines across multiple AWS accounts and environments.
  • Serverless Deploy from Chat is great. You know you’re doing it right when you’re deploying serverless SAM applications by chat. 💬🦊λ 🙌

We hope you enjoy this new GitLab integration.

Stackery is Now Running on SAM (Serverless Application Model) from AWS

Garrett Gillas | July 31, 2018

Amazon Web Services SAM is a developer-centric, cloud native, open source framework to define serverless applications faster and with better consistency. SAM provides a standard for defining the architecture of serverless projects. Stackery now supports this framework natively and provides the tools for maintaining best practices, streamlining workflows, and enforcing consistency.

Amazon continues to move the industry forward from static, monolithic technology to a much more distributed landscape. Because it’s an open source standard heavily supported by and aligned with AWS, SAM benefits from Amazon’s prominence in the industry and the ecosystem established around it. As a part of Amazon’s toolbox, SAM comes with the resources and support of SAR, the Serverless Application Repository.

Velocity and Efficiency

So what changes are expected from Stackery now that it deploys to AWS using SAM? On the surface it might not look like much of anything has changed. Stackery customers still have full control and everything will continue to be stored and accessible in the same AWS account.

SAM is an open source, community-supported standard, heavily backed by AWS, the clear leader in hosting serverless apps. Opting for SAM means you’re building your apps on the leading open source standard with a vibrant community to turn to when you need support, and it gives you access to all the tools and advantages of such an ecosystem, like SAM local development mode.

Stackery endows you with the power to generate and manage SAM templates directly. Maintain full control while increasing consistency and efficiency by automating the most repetitive infrastructure configuration tasks. Just a few clicks in the Stackery Operations Console can generate new infrastructure-as-code configurations. You can also manually edit existing SAM templates and alter them to fit your needs. Doing that any other way would require significantly more time and work.

Consistency and Scalability

Move fast without the fear that you’ll make mistakes. Engineering time is better spent working on core problems than small, monotonous tasks but the nature of writing software means consistency and attention to detail are vital. Every organization wants to move fast — leadership and engineering teams can always agree on that much — and Stackery with SAM enables higher velocity without sacrificing consistency. By building with Stackery on SAM, developers can enjoy the benefits of Lambda’s scalability while consistently shipping stable code and minimizing the serverless learning curve.

At Stackery, we want people to be able to use their own tools and write their applications the way they want. For developers who want to move fast, SAM makes that possible without creating new abstractions that cause trouble in the long run. Move fast with stable infrastructure.

Stackery 2018 Product Updates

Sam Goldstein | May 16, 2018

Our product engineering team ships every single day.

That means Stackery’s product gets better every single day. Stackery engineers commit code into git which marches into our continuous delivery pipeline. We promote each version of our microservices, frontend, and CLI through multiple testing environments, rolling shiny new features into production or notifying the team of failures. This is the best way we know to develop modern software and explains why our team is able to ship so much functionality so rapidly.

However, because we’re constantly shipping, we need to pause periodically to take note of new features and improvements. In this post I’ll summarize some of the most significant features and changes from our product team over the past few months. For a more detailed list of changes, you can read and/or follow Stackery’s Release Notes.

Referenced Resource

One of the best things about microservice architecture is the degree to which you can encapsulate and reuse functionality. For example, if you need to check if a user is authorized to perform a certain action, there’s no need to scatter permissioning code throughout your services. Put it all in one place (an AuthorizationService, perhaps), and call out to that from each service that needs to check permissions.

Stackery’s Referenced Resource nodes let you reference existing infrastructure resources (be they Lambda functions, S3 buckets, VPCs, you name it) by their AWS ARN and seamlessly integrate them into your other services.

One of the best uses I’ve seen for Referenced Resources is as the mechanism to implement centralized error reporting for serverless architectures. Write one central Lambda function that forwards exceptions into your primary error reporting and alerting tool. Configure every other stack to send error events to this central handler. Voila! Complete visibility into all serverless application errors.
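A minimal sketch of such a central handler in Python. The event shape is an assumption, and the print is a stand-in for a real alerting API call:

```python
# Central error-forwarding Lambda: every other stack sends its error
# events here, and this function fans them into one alerting tool.
import json

def format_alert(error_event):
    """Normalize an error event from any stack into a single alert shape."""
    return {
        "service": error_event.get("service", "unknown"),
        "message": error_event.get("message", ""),
        "stack_trace": error_event.get("stackTrace", []),
    }

def handler(event, context):
    alerts = [format_alert(record) for record in event.get("Records", [])]
    for alert in alerts:
        # Stand-in for the call to your error reporting / alerting tool.
        print(json.dumps(alert))
    return {"forwarded": len(alerts)}
```

Because every stack references the same handler by ARN, adding error visibility to a new service is just a matter of pointing its error events at this one function.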

Support for Multiple AWS Accounts

Every company we work with uses multiple AWS accounts. Sometimes there’s one for production and one for everything else. In Stackery’s engineering team each engineer has multiple accounts for development and testing, as well as shared access to accounts for integration testing, staging, and production. Splitting your infrastructure across multiple accounts has major benefits. You can isolate permissions and account-wide limits, minimizing risk to critical accounts (e.g. production).

However, managing deployment of serverless architectures across multiple accounts is often a major PITA. This is why working across multiple accounts is now treated as a first-class concern across all of Stackery’s functionality. Multiple AWS accounts can be registered within a Stackery account using our CLI tool. Each Stackery environment is tied to an AWS account, which maps flexibly onto the vast majority of AWS account usage patterns.

Managing multiple AWS accounts is a key part of most organizations’ cloud security strategy. Stackery supports this by relying on your existing AWS IAM policies and roles when executing changes. If the individual executing the change doesn’t have permission in that AWS account, the action will fail. This makes it straightforward to set up workflows where engineers have full control to make changes in development and testing environments, but can only propose changes in the production account, which are then reviewed and executed by an authorized individual or automation tool.

You can read more in our knowledge base article about Working with multiple AWS accounts in Stackery.

CloudFormation Resource Nodes

Sometimes you need to do something a little different, which is why we built custom CloudFormation Resource nodes. You can use these to provision any AWS resource and take advantage of the full power and flexibility of CloudFormation, for situations when that’s required or desirable.

What’s been coolest about rolling this feature out is the variety of creative ways we’ve seen it used. For example, you can use CloudFormation Resource nodes to automatically configure and seed a database the first time you deploy to a new environment. You can also use them to automatically deploy an HTML front end to CloudFront each time you deploy your backend serverless app. The possibilities are endless.

AWS Resource Tagging

Resource tagging may not be the most glamorous of features, but it’s a critical part of most organizations’ strategies for tracking cost, compliance, and ownership across their infrastructure. Stackery now boasts first-class support for tagging provisioned resources. We also provide the ability to require specific tags prior to deployment, making it orders of magnitude easier to get everyone on the same page on how to correctly tag resources.
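For reference, tags on a SAM-defined function are just a key-value map in the template, which CloudFormation propagates to the provisioned Lambda. The resource name and tag values below are illustrative:

```yaml
# Illustrative SAM fragment showing resource tags for cost and
# ownership tracking.
Resources:
  CheckoutFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      CodeUri: src/checkout
      Tags:
        Team: payments
        CostCenter: ecommerce
        Environment: production
```

Requiring tags like Team or CostCenter at deploy time is what turns tagging from a convention into something you can actually rely on for billing and compliance reports.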

Always Shipping

Our goal is to always be shipping. We aim to push out valuable changes every day. Customers gain more control and visibility over their serverless applications each day, so they can ship faster and more frequently too. Look out for more great changes rolling out each day in the product, and watch this blog for regular announcements summarizing our progress. We also love to hear what you think, so if you have wants or needs managing your serverless infrastructure, don’t hesitate to let us know.

Serverless Health Status Dashboard

Sam Goldstein | February 08, 2018

Stackery’s Operations Console is the place DevOps teams go to manage their serverless infrastructure and applications. This week we’re announcing the general availability of Serverless Health Dashboards, which surface realtime health status data for deployed serverless applications. As early adopters of microservice and serverless architectures, we’ve experienced firsthand how complexity shifts away from monolithic codebases toward integrating (and reasoning about) many distributed components. That’s why we designed Serverless Health Dashboards to provide visibility into the realtime status of serverless applications, surfacing the key data needed to identify production problems and understand the health of serverless applications.

Once you’ve set up a Stackery account you’ll see a list of all the CloudFormation stacks you’ve deployed within your AWS account. When you drill into a stack we display a visual representation that shows the stack’s provisioned resources and architectural relationships. I personally love this aspect of the console, since it’s challenging to track the many moving parts of a microservices architecture. Having an always-up-to-date visualization of how all the pieces fit together is incredibly valuable for keeping a team coordinated and up to speed on the systems they manage.

Within the stack visualization we surface key health metrics related to each node. This enables you to assess the operational health of the stack at a glance, and quickly drill down on the parts of the stack experiencing errors or other problems. When you need to dig deeper to understand complex interactions between different stack components, you can access detailed logs, historical metrics, and X-Ray transaction traces through the node’s properties panel.

Getting access to Stackery’s Serverless Health Dashboards requires creating a free Stackery account. You’ll immediately be able to see health status for any application that’s been deployed via AWS CloudFormation, Serverless Framework, or Stackery Deployment Pipeline. We hope you’ll try it out and enjoy the increased visibility into the health and status of your serverless infrastructure.

Get the Serverless Development Toolkit for Teams

Sign up now for a 60-day free trial. Contact one of our product experts to get started building amazing serverless applications today.
