Stacks on Stacks

The Serverless Ecosystem Blog by Stackery.


What Successful Serverless Teams Know

Nate Taggart | October 10, 2018

Shipping serverless applications feels good. And it should! Serverless lets us focus on our software and ignore the tedium of managing servers. You download a framework, write a little code, and deploy your first Lambda function. Congrats! You’re a serverless developer!

But, as you run through that first “Hello, world” serverless tutorial, you might notice that you’re cutting a few corners that you can’t really cut in a professional setting. Security? Permissions? Secrets management? Dev environments? Testing? CI/CD? Version control? And the other two hundred little details that matter when you’re doing professional software development with a team.

On the one hand, these are solvable problems. On the other hand, though, if you have to re-invent the wheel for the development and operations cycle, maybe you won’t get to focus on the code as much as you thought…

Successful Serverless Teams

Successful serverless teams use software tools to solve these challenges. They deliver projects on time and reliably by automating the manual, error-prone parts of serverless development. While we could write a book on all of the best team ergonomics for serverless, let’s focus on the big three areas where you’ll want a tool: configuration, release automation, and visibility.


Serverless Configuration

Regardless of which framework you choose, once you get past your first “Hello, World” function, you’re going to have to start writing configuration code. Congrats! You’re a serverless YAML developer!

You (and everyone else on your team) will need to learn to configure every single cloud resource you want to use, down to the smallest detail. Event streams, VPCs, API gateways, datastores, etc., etc. And I mean really down in the weeds here – like, be-ready-to-map-your-IP-routing-tables kind-of-in-the-weeds…

The right tooling can automate this configuration for you and let you pull pre-configured resources off the shelf and into your framework automatically. That’s trickier than it sounds! Most “resources” are actually a collection of services. It’s not enough just to say “I need an API” – unless you have professional tooling, you’ll be configuring IAM roles as part of the assembly process.

Oh, and um, this is awkward… everyone on your team is going to have their own configuration file. Each developer will need to sandbox their own resource instances with scoped IAM roles and namespace their resources so you don’t overwrite each other with collisions. Even with master-level git-fu, this is really hard. That’s coming from me, and I came to Stackery from GitHub.

Release Automation

Once you’ve got your application built and your infrastructure configured, you’re ready to deploy. For your first app, that probably meant giving your framework God-like privileges in your personal AWS account. Yeah, ok, no, we’re not going to do that at work, in production. Right?

For serverless release automation, we’re going to need to figure out how to solve a few specific challenges: defining deployment stages, managing permissions, and integrating into a central CI/CD pipeline.

Managing deployment stages is a very similar problem to juggling your multiple configuration files. In fact, you could just define each stage in that one file… except that now when you make a configuration change, you have to remember to make it in every environment. I’m not pointing fingers here, but it’ll probably get messed up by someone at some point. And that will suck. Plus, these environments each have their own secrets and environment parameters which you’ll want to keep out of version control (and out of your config file) but available to the newly provisioned resources.
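One hedged sketch of that pattern (the variable names are illustrative, not Stackery’s actual ones): read secrets and parameters from environment variables that your deploy tooling provisions per environment, and fail fast when one is missing.

```javascript
// Illustrative only: read per-environment secrets from environment variables
// provisioned by your deploy tooling, rather than committing them to the
// config file or version control. Variable names here are hypothetical.
const REQUIRED = ['DB_HOST', 'DB_PASSWORD', 'API_KEY'];

function loadConfig (env = process.env) {
  let missing = REQUIRED.filter(name => !env[name]);
  if (missing.length > 0) {
    // Fail at cold start, not deep inside a request handler
    throw new Error(`Missing environment parameters: ${missing.join(', ')}`);
  }
  return {
    dbHost: env.DB_HOST,
    dbPassword: env.DB_PASSWORD,
    apiKey: env.API_KEY
  };
}
```

The same function body then runs unchanged in every stage; only the provisioned variables differ.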

We’ll also want to create limited access roles for provisioning which, unfortunately, some frameworks just don’t support. This is why Stackery’s CLI leverages your existing user roles to enforce your access policy, rather than requiring admin rights to your AWS account like other tools.

Finally, while you could write your own scripts, scripting up serverless deployments can be tricky and brittle. With the right CLI tool, you can simply drop it into your CI/CD pipeline and have it automatically support your deployment stages and environment parameters.

Serverless Visibility

When you’re developing an application to run on static infrastructure (you know, the old way with servers), it’s pretty easy to visualize the architecture in your head. There’s an app; it’s on a server. If someone makes a change, the architecture remains stable. If there’s an error, it’ll show up in the server logs. Need metrics? Dropping a library or agent in one place will do the trick. Pretty straightforward.

With serverless, visibility suddenly becomes way more important. The architecture is dynamic, changing as your team builds more functions. Errors and performance bottlenecks can cascade across distributed services. Logs and metrics collection need to be in place in advance – once a function instance dies, it and its data are gone forever.
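A minimal sketch of what “in place in advance” can mean at the function level – wrapping every handler so each invocation emits a structured log line before the instance disappears (the field names are illustrative, not a specific vendor’s format):

```javascript
// Wrap a Lambda-style handler so every invocation logs a structured record,
// success or failure, before the short-lived instance is gone.
function withLogging (name, handler) {
  return async (event, context) => {
    let start = Date.now();
    try {
      let result = await handler(event, context);
      console.log(JSON.stringify({ fn: name, ms: Date.now() - start, ok: true }));
      return result;
    } catch (error) {
      console.log(JSON.stringify({ fn: name, ms: Date.now() - start, ok: false, error: error.message }));
      throw error;
    }
  };
}

// Usage: exports.handler = withLogging('checkout', async event => { /* ... */ });
```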

It may not be obvious in advance, but the day will come when having a place to quickly glance and see a real-time view of your application architecture and performance will save you. Plan accordingly.

Get Back to Development

While we focused on three big challenges, the truth is that there are many more: centralized build processes, dependency management, standardized instrumentation, error monitoring, and on and on. Pioneering teams have solved most of the above; for the rest of you, we’re making sure you can do all of it without having to build it yourself.

The leading serverless teams today spent the last two or three years solving these challenges. Again, they are solvable. But if you’re trying to deliver your application and meet your deadlines (and not create a bunch of extra risk for your organization in the process), you have three choices:

  1. Give up the velocity advantages of serverless and go back to legacy software development.
  2. Delay the velocity advantages of serverless and spend the next several sprints trying to invent your own patterns (and then the subsequent ones refining them and training everyone on how to do it your way) and roll your own tooling scripts.
  3. Embrace the velocity advantages of serverless and plug in a software tool to manage these challenges and get back to development.

And really, that’s a pretty easy choice. Smart companies will always stand on the shoulders of giants and focus their efforts on solving problems unique to their business. Try Stackery today and get back to development.

Deploy GraphQL APIs with Stackery

Sam Goldstein | October 03, 2018

It’s been a busy month in Stackery engineering. Here’s a quick recap of what’s new in the product this week.

You can now use Stackery to configure and provision AWS AppSync GraphQL APIs. AppSync is a serverless, pay-per-invocation service similar to API Gateway, but for GraphQL! GraphQL resolvers can be connected to backend data sources like DynamoDB tables, Lambda functions, or HTTP proxies. You can read more about using Stackery with GraphQL in the Stackery docs.

Trigger Lambda Function on Deploy

Does your deployment process involve multiple commands that need to be run in a certain order? Stackery now provides the ability to mark any function as “Trigger on First Deploy” or “Trigger on Every Deploy”, which provides a clean mechanism to handle database migrations, ship single-page apps, and run custom deploy logic across all your environments. To make this work, Stackery sets up a CloudFormation Custom Resource in your project’s SAM template which is used to invoke the function when the stack is deployed. Read more in the Stackery Function Docs.

Reference Existing Cloud Resources

Teams are often deploying serverless stacks into existing cloud infrastructure. What happens when your function needs to subscribe to an existing DynamoDB stream or be placed in an existing VPC? Stackery provides the ability to replace resources in a stack with a pointer to an already provisioned resource. This can be specified per environment, which enables you to provision mock resources in dev/test environments but reference central infrastructure in production. Check out the “Use Existing” flag on resources like DynamoDB Tables or Virtual Networks.

GitHub and GitLab bulk project import

No one wants to set up a bunch of AWS Serverless Application Model (SAM) projects with Stackery one by one, so we built a one-click importer which locates all your projects with a valid SAM template file (template.yaml) and sets them up to deploy and edit with Stackery. It works for both GitHub and GitLab, and you can find it on the Stackery Dashboard homepage.

Disaster Recovery in a Serverless World - Part 2

Apurva Jantrania | September 17, 2018

This is part two of a multi-part blog series. In the previous post, we covered Disaster Recovery planning when building serverless applications. In this post, we’ll discuss the systems engineering needed for an automated solution in the AWS cloud.

As I started looking into implementing Stackery’s automated backup solution, my goal was simple: in order to support a disaster recovery plan, we needed a system that automatically creates backups of our database to a different account and a different region. This seemed like a straightforward task, but I was surprised to find that there was no documentation on how to do it in an automated, scalable way - all the existing documentation I could find discussed only partial solutions, all performed manually via the AWS Console. Yuck.

I hope that this post will help fill that void and help you understand how to implement an automated solution for your own disaster recovery plan. This post gets a bit long, so if that’s not your thing, see the tl;dr.

The Initial Plan

AWS RDS has automated backups, which seemed like the perfect platform to base this automation on. Furthermore, RDS even emits events that seem ideal for kicking off a Lambda function that will then copy the snapshot to the disaster recovery account.


The first issue I discovered was that AWS does not allow you to share automated snapshots - AWS requires that you first make a manual copy of the snapshot before you can share it with another account. I initially thought this wouldn’t be a major issue - I could easily make my Lambda function kick off a manual copy first. According to the RDS Events documentation, there is an event, RDS-EVENT-0042, that fires when a manual snapshot is created. I could then use that event to share the newly created manual snapshot with the disaster recovery account.

This led to the second issue - while RDS emits events for snapshots that are created manually, it does not emit events for snapshots that are copied manually. The AWS docs aren’t clear about this, and it’s an unfortunate feature gap. It means I had to fall back to a timer-based Lambda function that searches for and shares the latest available snapshot.

Final Implementation Details

While this ended up being more complicated than initially envisioned, Stackery still made it easy to add all the needed pieces for fully automated backups. My implementation ended up looking like this:

The DB Event Subscription resource is a CloudFormation Resource which contains a small snippet of CloudFormation that subscribes the DB Events topic to the RDS database.

Function 1 - dbBackupHandler

This function receives events from the RDS database via the DB Events topic. It then creates a copy of the snapshot with an ID that identifies it as an automated disaster recovery snapshot:

const AWS = require('aws-sdk');
const rds = new AWS.RDS();

const DR_KEY = 'dr-snapshot';
const ENV = process.env.ENV;

module.exports = async message => {
  // Only run DB Backups on Production and Staging
  if (!['production', 'staging'].includes(ENV)) {
    return {};
  }

  let records = message.Records;
  for (let i = 0; i < records.length; i++) {
    let record = records[i];

    if (record.EventSource === 'aws:sns') {
      let msg = JSON.parse(record.Sns.Message);
      if (msg['Event Source'] === 'db-snapshot' && msg['Event Message'] === 'Automated snapshot created') {
        let snapshotId = msg['Source ID'];
        let targetSnapshotId = `${snapshotId}-${DR_KEY}`.replace('rds:', '');

        let params = {
          SourceDBSnapshotIdentifier: snapshotId,
          TargetDBSnapshotIdentifier: targetSnapshotId
        };

        try {
          await rds.copyDBSnapshot(params).promise();
        } catch (error) {
          if (error.code === 'DBSnapshotAlreadyExists') {
            console.log(`Manual copy ${targetSnapshotId} already exists`);
          } else {
            throw error;
          }
        }
      }
    }
  }

  return {};
};

A couple of things to note:

  • I’m leveraging Stackery Environments in this function - I have used Stackery to define process.env.ENV based on the environment the stack is deployed to.
  • Automatic RDS snapshots have an ID that begins with ‘rds:’. However, snapshots created by the user cannot have a ‘:’ in the ID.
  • To make future steps easier, I append dr-snapshot to the ID of the snapshot that is created.

Function 2 - shareDatabaseSnapshot

This function runs every few minutes and shares any disaster recovery snapshots with the disaster recovery account:

const AWS = require('aws-sdk');
const rds = new AWS.RDS();

const DR_KEY = 'dr-snapshot';
const DR_ACCOUNT_ID = process.env.DR_ACCOUNT_ID;
const ENV = process.env.ENV;

module.exports = async message => {
  // Only run on Production and Staging
  if (!['production', 'staging'].includes(ENV)) {
    return {};
  }

  // Get latest snapshot
  let snapshot = await getLatestManualSnapshot();

  if (!snapshot) {
    return {};
  }

  // See if snapshot is already shared with the Disaster Recovery Account
  let data = await rds.describeDBSnapshotAttributes({ DBSnapshotIdentifier: snapshot.DBSnapshotIdentifier }).promise();
  let attributes = data.DBSnapshotAttributesResult.DBSnapshotAttributes;

  let isShared = attributes.find(attribute => {
    return attribute.AttributeName === 'restore' && attribute.AttributeValues.includes(DR_ACCOUNT_ID);
  });

  if (!isShared) {
    // Share Snapshot with Disaster Recovery Account
    let params = {
      DBSnapshotIdentifier: snapshot.DBSnapshotIdentifier,
      AttributeName: 'restore',
      ValuesToAdd: [DR_ACCOUNT_ID]
    };
    await rds.modifyDBSnapshotAttribute(params).promise();
  }

  return {};
};

async function getLatestManualSnapshot (latest = undefined, marker = undefined) {
  let result = await rds.describeDBSnapshots({ Marker: marker }).promise();

  result.DBSnapshots.forEach(snapshot => {
    if (snapshot.SnapshotType === 'manual' && snapshot.Status === 'available' && snapshot.DBSnapshotIdentifier.includes(DR_KEY)) {
      if (!latest || new Date(snapshot.SnapshotCreateTime) > new Date(latest.SnapshotCreateTime)) {
        latest = snapshot;
      }
    }
  });

  if (result.Marker) {
    return getLatestManualSnapshot(latest, result.Marker);
  }

  return latest;
}

  • Once again, I’m leveraging Stackery Environments to populate the ENV and DR_ACCOUNT_ID environment variables.
  • When sharing a snapshot with another AWS account, the AttributeName should be set to restore (see the AWS RDS SDK)

Function 3 - copyDatabaseSnapshot

This function runs in the Disaster Recovery account and is responsible for detecting snapshots shared with it and making a local copy in the correct region - in this example, us-east-1.

const AWS = require('aws-sdk');
const rds = new AWS.RDS();

const sourceRDS = new AWS.RDS({ region: 'us-west-2' });
const targetRDS = new AWS.RDS({ region: 'us-east-1' });

const DR_KEY = 'dr-snapshot';
const ENV = process.env.ENV;

module.exports = async message => {
  // Only Production_DR and Staging_DR are Disaster Recovery Targets
  if (!['production_dr', 'staging_dr'].includes(ENV)) {
    return {};
  }

  let [shared, local] = await Promise.all([getSourceSnapshots(), getTargetSnapshots()]);

  for (let i = 0; i < shared.length; i++) {
    let snapshot = shared[i];
    let fullSnapshotId = snapshot.DBSnapshotIdentifier;
    let snapshotId = getCleanSnapshotId(fullSnapshotId);
    if (!snapshotExists(local, snapshotId)) {
      let targetId = snapshotId;

      let params = {
        SourceDBSnapshotIdentifier: fullSnapshotId,
        TargetDBSnapshotIdentifier: targetId
      };
      await rds.copyDBSnapshot(params).promise();
    }
  }

  return {};
};

// Get snapshots that are shared to this account
async function getSourceSnapshots () {
  return getSnapshots(sourceRDS, 'shared');
}

// Get snapshots that have already been created in this account
async function getTargetSnapshots () {
  return getSnapshots(targetRDS, 'manual');
}

async function getSnapshots (rds, typeFilter, snapshots = [], marker = undefined) {
  let params = {
    IncludeShared: true,
    Marker: marker
  };

  let result = await rds.describeDBSnapshots(params).promise();

  result.DBSnapshots.forEach(snapshot => {
    if (snapshot.SnapshotType === typeFilter && snapshot.DBSnapshotIdentifier.includes(DR_KEY)) {
      snapshots.push(snapshot);
    }
  });

  if (result.Marker) {
    return getSnapshots(rds, typeFilter, snapshots, result.Marker);
  }

  return snapshots;
}

// Check to see if the snapshot `snapshotId` is in the list of `snapshots`
function snapshotExists (snapshots, snapshotId) {
  for (let i = 0; i < snapshots.length; i++) {
    let snapshot = snapshots[i];
    if (getCleanSnapshotId(snapshot.DBSnapshotIdentifier) === snapshotId) {
      return true;
    }
  }
  return false;
}

// Cleanup the IDs from automatic backups that are prepended with `rds:`
function getCleanSnapshotId (snapshotId) {
  let result = snapshotId.match(/:([a-zA-Z0-9-]+)$/);

  if (!result) {
    return snapshotId;
  } else {
    return result[1];
  }
}

  • Once again, leveraging Stackery Environments to populate ENV, I ensure this function only runs in the Disaster Recovery accounts.

TL;DR - How Automated Backups Should Be Done

  1. Have a function, triggered on a timer, that creates a manual RDS snapshot. Use a timer interval that makes sense for your use case.
    • Don’t bother trying to leverage the daily automated snapshot provided by AWS RDS.
  2. Have a second function that monitors for the successful creation of the snapshot from the first function and shares it with your disaster recovery account.
  3. Have a third function, operating in your disaster recovery account, that monitors for snapshots shared with the account and then creates a copy of the snapshot, owned by the disaster recovery account, in the correct region.
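A sketch of what step 1 might look like, in the same style as the functions above (the date-stamped ID scheme and names are my illustration, not code from the post; `rdsClient` would be `new AWS.RDS()` from the aws-sdk in a real deployment):

```javascript
// Timer-triggered backup: create a manual RDS snapshot with a date-stamped,
// DR-tagged ID. The RDS client is injected so the logic stands alone.
const DR_KEY = 'dr-snapshot';

function snapshotIdFor (dbInstanceId, date) {
  let day = date.toISOString().slice(0, 10); // e.g. '2018-09-17'
  return `${dbInstanceId}-${DR_KEY}-${day}`;
}

async function createBackup (rdsClient, dbInstanceId, now = new Date()) {
  let params = {
    DBInstanceIdentifier: dbInstanceId,
    DBSnapshotIdentifier: snapshotIdFor(dbInstanceId, now)
  };
  await rdsClient.createDBSnapshot(params).promise();
  return params.DBSnapshotIdentifier;
}
```

Because the ID embeds both the DR key and the date, the second function can find these snapshots by substring match, just like the `includes(DR_KEY)` filters above.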

Empathy Through Observation: A User Testing Reality Check

Anna Yovandich | September 13, 2018

Engineers, by the nature of their work, cannot objectively experience their product like a legitimate user. While busy cranking out new features and pushing the product forward, it’s common to accrue some technical debt in the codebase. However, sprints to the finish line are likely increasing debt in an arguably more critical area: usability.

Usability is the measure of ease-of-use and learnability. In product design, it’s the glue that keeps people coming back and compels them to tell others about their indispensable discovery. Without it, they’ll give up, move on, and may very well feel insulted on the way out - all in the blink of an eye, pound of a key, or in the quiet defeat of not knowing what the hell to do next. It’s easy, on a product team, to build and pave our workflows with blinders on (makes sense on my machine!). We know how the thing works so well that our empathy becomes lost in the assumption that everyone will find their way through our tried and not-so-tested paths.

Empathy - the ability to understand and share feelings - is increasingly obscured when we don’t check in with the people for whom we’re building these tools. Examining new perspectives and understanding different personas are key to detaching from our own well worn assumptions. By observing user behavior in a few test sessions, interaction patterns begin to surface that expose design flaws varying from simple fix to total redo: words that go to waste, workflows that frustrate, and features that intimidate. These are a few common challenges facing product usability that engineers may fail to notice, or inadvertently train themselves to ignore.

Words That Go to Waste

Words often go unread. The more words there are, the less likely someone is to read them. When features and functionality are explained left and right, users feel fatigued. Verbose dialog causes people to turn the other way, assume it’s complicated, and disconnect. Maybe they’ll just ignore that feature and carry on with vague dissatisfaction, or maybe they’re one step closer to signing out forever. Describing a feature isn’t inherently bad, but if it comes with an instruction manual it’s likely to be ignored. If the message isn’t short and sweet, save it for the docs.

Workflows That Frustrate

Many times, users are forced through a set of instructions that might not make sense to them in order to serve the needs of the app (e.g. data gathering and 3rd party integration). Forcing users into a prerequisite workflow - requiring them to perform actions before they can explore - can make frustration the first impression or, ultimately, cause a rage quit. Instead, maximize functionality by minimizing restriction. How far can a person get in the experience just by signing in? Reducing barriers enables experimentation and learning. Isolating constraints to an atomic level - the moment of need - liberates the user experience and showcases what the product can do.

Features That Intimidate

Features require discovery and learning. Ideally, that occurs intuitively and without cognitive awareness. One of the toughest pills to swallow during usability testing is finding that robust functionality is cryptic, intimidating, or elusive. A test participant waves their mouse over an entire area: “I don’t know what thiiisss does.” It’s likely doing too much or lacks useful context. The single responsibility principle that shapes sturdy application development can provide a solid construct for product design as well. When a feature is overloaded with functionality, consider ways to split it into smaller parts, provide clear context, and support intuitive discovery.

These are some common pitfalls we found through our own user testing that will help inform design decisions going forward. Usability testing isn’t an end-all to holistic product design but it is a necessary practice for gaining insights into pain points, drop offs, and blind spots. In addition to revealing design flaws, these tests are imperative in reminding us that people will use an app much differently than those who build it.

How to Write 200 Lines of YAML in 1 Minute

Anna Spysz | September 11, 2018

Last month, our CTO Chase wrote about why you should stop YAML engineering. I completely agree with his thesis, though for slightly different reasons. As a new developer, I’ve grasped that it’s crucial to learn and do just what you need and nothing more - at least when you’re just getting started in your career.

Now, I’m all about learning for learning’s sake - I have two now-useless liberal arts degrees that prove that. However, when it comes to being a new developer, it’s very easy to get overwhelmed by all of the languages and frameworks out there, and get caught in paralysis as you jump from tutorial to tutorial and end up not learning anything very well. I’ve certainly been there - and then I decided to just get good at the tools I’m actually using for work, and learn everything else as I need it.

Which is what brings us to YAML - short for “YAML Ain’t Markup Language”. I started out as a Python developer. When I needed to, I learned JavaScript. When my JavaScript needed some support, I learned a couple of front-end frameworks, and as much Node.js as I needed to write and understand what my serverless functions were doing. As I got deeper into serverless architecture, it seemed like learning YAML was the next step - but if it didn’t have to be, why learn it? If I can produce 200+ lines of working YAML without actually writing a single line of it, in much less time than it would take me to write it myself (not counting the hours it would take to learn a new markup language), then that seems like the obvious solution.

So if a tool allows me to develop serverless apps without having to learn YAML, I’m all for that. Luckily, that’s exactly what Stackery does, as you can see in the video below:

Why You Should Stop YAML Engineering

Chase Douglas | August 30, 2018

Here’s some sample JavaScript code:

let foo = 5;

Do you know which register held the value 5 when that code executed? Do you even know what a register is, or what a plausible answer to the question would be?

The correct answer to this question is: No, and I don’t care. Registers are locations in a CPU that can hold numerical values. Your computing device of choice calculates values using registers millions of times each second.

Way back when computers were young, people programmed them using assembly languages. These languages directly tell CPUs what to do: load values into registers from memory, calculate other values in the registers, save values back to memory, and so on.

Between 1943 and 1945, Konrad Zuse developed the first (or at least one of the first) high-level programming languages: Plankalkül. It wasn’t pretty by modern standards, but its thesis could be boiled down to: you will program more efficiently and effectively using logical formulas. A more blunt corollary might be: Stop programming CPU registers!

A Brief History Of Computing Abstractions

Software engineering is built upon the practice of developing higher-order abstractions for common tasks. Allow me to illustrate the pattern:

1837 — Charles Babbage: Stop computing by pencil and paper! (Analytical Engine)

1938 — Konrad Zuse: Stop computing by hand! (Z1, the first programmable computer)

1950 — UNIVAC: Stop repeating yourself! (UNIVAC 1101, among the first computers to store programs in memory)

1985 — Bjarne Stroustrup: Stop treating everything like primitives! (The C++ language introduced object-oriented programming to C)

1995 — James Gosling: Stop tracking memory! (Java language makes garbage collection mainstream)

2006 — Amazon: Stop managing data centers! (AWS EC2 makes virtual servers easy)

2009 — Ryan Dahl: Stop futzing with threads! (Node.js introduces event/callback-based semantics)

2011 — Amazon: Stop provisioning resources manually! (AWS CloudFormation makes Infrastructure-As-Code easier)

2014 — Amazon: Stop managing servers! (AWS Lambda makes services “functional”)

Software engineers are always finding ways to make whatever “annoying thing” they have to deal with go away. They do this by building abstractions. It’s now time to build another abstraction.

2018: Stop YAML Engineering!

The combination of infrastructure-as-code and serverless apps means developers are inundated with large, complex infrastructure templates, mostly written in YAML.

It’s time we learn from the past. We are humans who work at a logical, higher-order abstraction level. Infrastructure-as-code is meant to be consumed by computers. We need to abstract away the compiling of our logic into infrastructure code.

That’s what Stackery does. It turns a visual architecture diagram into Serverless Application Model (SAM) YAML without asking the user to do anything more than drag and wire resources in a canvas. Stackery’s own backend is over 2,000 lines of YAML. We wouldn’t be able to manage it all without a tool that helps us both maintain and diagram the relationships between resources. It even performs this feat in both directions: you can take your existing SAM applications, import them into Stackery, and instantly visualize and extend the infrastructure architecture.

You may be a principal architect leading adoption of serverless in your organization. You may be perfectly capable of holding thousands of lines of infrastructure configuration in your head. But can everyone on your team do the same? Would your organization benefit from the automatic visualization and relationship/dependency management of a higher-order infrastructure-as-code abstraction?

As can be seen throughout the history of software engineering, the industry will move (and already is moving) to abstract away this lower level of engineering. Check out Stackery if you want to stay ahead of the curve.

GitLab + Stackery = Serverless CI/CD <3

Sam Goldstein | August 28, 2018

GitLab is a git hosting solution which features a built-in CI/CD pipeline that automates the delivery process. We’ve seen more and more serverless development teams asking how they can integrate their GitLab with Stackery. I am happy to announce that today Stackery features full support for GitLab source code hosting and serverless CI/CD deployments.

By linking your Stackery account with GitLab, you can quickly develop and deploy serverless apps. Stackery generates AWS SAM YAML infrastructure-as-code templates and manages Lambda source code, integrating directly with GitLab’s source code hosting. The bigger payoff, however, is taking full advantage of Stackery’s serverless deployment automation, which is intentionally simple to integrate into GitLab’s CI/CD release automation. Stackery’s CLI deployment tool is a cross-compiled Go binary with no external dependencies. It takes just one step to download and bundle it in your repo, and then it’s simple to invoke from your GitLab project’s .gitlab-ci.yml.

Here’s a basic example showing how to integrate Stackery into your .gitlab-ci.yml:

stages:
  - test
  - build
  - deploy

test:
  stage: test
  script: echo "Running tests"

build:
  stage: build
  script: echo "Building the app"

deploy:
  stage: deploy
  script:
    - stackery deploy --stack-name "myStack" --env-name "staging" --git-ref "$CI_COMMIT_SHA"
  environment:
    name: staging
  only:
    - master

By integrating Stackery and GitLab you can take advantage of a number of interesting features to take your serverless deployment automation to the next level. For example:

  • GitLab pipeline security can be used to provide automated production change control for serverless applications.
  • GitLab’s environments and deployments are straightforward to integrate with stackery deploy and can be used to orchestrate sophisticated CI/CD pipelines across multiple AWS accounts and environments.
  • Serverless Deploy from Chat is great. You know you’re doing it right when you’re deploying serverless SAM applications by chat. 💬🦊λ 🙌

We hope you enjoy this new GitLab integration.

To Do Serverless Right, You Need A New IAM Approach

Nate Taggart | April 12, 2018

Identity and Access Management (IAM) is an important tool for cloud infrastructure and user management. It governs access control for both cloud services and users, and can incorporate features around auditing, authentication policies, and governance.

Use of IAM involves a multiple-step process of creating roles and permissions and then assigning them to users, groups, and resources. In static (or relatively stable) environments, like those on legacy infrastructure, this is a task that can be configured once and only periodically updated. A critical, once-and-done task like this has historically been the responsibility of a highly-privileged operations team, which could own it and develop IAM permissioning as a core competency. In serverless environments, however, manual provisioning and assignment of IAM roles and permissions can have a dramatically negative impact on team velocity – one of the key advantages of serverless infrastructure.

Serverless Velocity and IAM

Serverless infrastructure is highly dynamic and prone to frequent change. As developers write functions for deployment into a FaaS-style architecture, they’re fundamentally creating new infrastructure resources which must be governed. Since these changes can occur several times per day, waiting for an operations team to create and assign IAM policies is an unnecessary and highly impactful bottleneck in the application delivery cycle.

As a further challenge, FaaS architectures are difficult (if not impossible) to recreate in local environments. This means that the development cycle is likely to involve iterating and frequently deploying into a development account or environment. Having an operations team manually creating IAM policies in the course of this development cycle is prohibitively challenging.

These bottlenecks notwithstanding, IAM policies continue to play a critical role in security, governance, and access control. Organizations must find a way to create and assign IAM policies without blocking the product development team from their high-velocity serverless application lifecycle.

The New Serverless IAM Strategy

There are generally two approaches to IAM policy-making for serverless. The first is to extend the responsibility from your specialized operations team to your entire development group. This approach has a number of drawbacks, including the need for extensive training, a human-error risk, a reduction in development velocity, and a broad extension of access which dramatically reduces control.

The second, and preferred, solution is to automatically provision IAM policies based on a rule-set of best-practices and governance standards. In this scenario, a company would either develop their own release tooling or purchase a pre-built solution like Stackery’s Serverless Operations Console. This software would then be responsible for encapsulating principles of “Least Privilege,” environment management, policy creation, and policy application for all serverless stacks.
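To make the idea concrete, here is a purely illustrative sketch of rule-based policy generation - mapping the resources a function touches to a least-privilege policy document. The action lists and shape are simplified, and this is not Stackery’s implementation:

```javascript
// Map each resource type a function uses to a minimal set of allowed actions,
// then emit an IAM policy document scoped to exactly those resources.
const ACTIONS = {
  dynamodb: ['dynamodb:GetItem', 'dynamodb:PutItem', 'dynamodb:Query'],
  sns: ['sns:Publish']
};

function leastPrivilegePolicy (resources) {
  return {
    Version: '2012-10-17',
    Statement: resources.map(resource => ({
      Effect: 'Allow',
      Action: ACTIONS[resource.type] || [],
      Resource: resource.arn
    }))
  };
}
```

Because the policy is derived from the function’s declared resources, every new deployment gets correctly scoped permissions without a manual review step.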

In this way, your product engineering team can focus on developing code and can have permission to provision their services into development environments which are automatically sandboxed and isolated. Once development is complete, this software can promote the new service into a new sandboxed environment for integration testing and QA. Your CI/CD pipeline can continue to promote the service all the way to production, using appropriate roles and permissions at each step, thereby ensuring both IAM policy compliance and high velocity through automation.

This automatic creation and assignment of IAM policies reduces the risk of human error, ensures that resources are appropriately locked down in all stages of release, and encapsulates DevOps best practices for both high velocity and consistent control.

If you’re still manually creating and assigning IAM policies in your serverless development cycle, I encourage you to consider the advantages of modernizing this workflow with specialized serverless operations software.

Get the Serverless Development Toolkit for Teams

Sign up now for a 60-day free trial. Contact one of our product experts to get started building amazing serverless applications today.
