Stacks on Stacks

The Serverless Ecosystem Blog by Stackery.

How to Serverless Locally: The Serverless Dev Workflow Challenge
Sam Goldstein | April 24, 2019

One of the biggest challenges when adopting serverless today is mastering the developer workflow. “How do I develop locally?”, “How should I test?”, “Should I mock AWS services?”. These are common serverless questions, and the answers out there have been unsatisfying. Until now.

There are a few reasons that developer workflow is a common pain point and source of failure for serverless developers:

  1. Serverless architecture relies primarily on cloudside managed services which cannot be run on a development laptop (for example, DynamoDB).

  2. Since infrastructure dependencies can’t be run locally, creating a consistent local development environment is challenging.

  3. As a result, serverless developers are frequently forced to resort to suboptimal workflows where they must deploy each code change to a cloudside environment in order to execute it.

  4. Deploying each code change forces developers to spend time waiting, which is bad for developer productivity and happiness.

Comic by Randall Munroe

“Getting a smooth developer workflow is a challenge with serverless,” says iRobot’s Richard Boyd, who also appeared as a guest on a recent episode of Stackery’s weekly livestream. “There are some people who think we should mock the entire cloud on our local machine and others who want to develop in the real production environment. Finding the right balance between these two is a very hard problem that I’m not smart enough to solve.”

The Developer’s Inner Loop

Developers spend the bulk of their coding time following a workflow that looks something like this:

  1. Change the code

  2. Run the code

  3. Analyze the output of running the code

  4. Rinse and Repeat

In a traditional local development environment, this loop can be completed in 3-10 seconds, meaning a developer can make and test many code changes in a minute. Today many serverless developers are forced into a slow inner loop which takes several minutes to complete. When this happens, developers spend most of their time waiting for new versions of their code to deploy to the cloud, where it can be run and analyzed before making the next change. This cuts down on the number of code changes a developer can make in a day and results in context switching and frustration.

The Outer Loop

In the outer loop, changes to service configurations and application architectures occur less rapidly, perhaps a few times a day for apps under active development. This is an area where Stackery has a long history of helping developers express, deploy, and manage the pipeline of promoting applications to production.

Common Suboptimal Inner Loop Approaches

Serverless architecture is still relatively new, so best practices are evolving and not widely known. However, there are a few common approaches to the serverless developer workflow that we know result in suboptimal workflows and productivity.

Suboptimal Approach #1: Push each change to the cloud

One common approach is to avoid developing locally and deploy each change to the cloud to run in the cloudside environment. The primary disadvantage of this is that developers must wait for the deploy process to complete for each minor code change. In some cases, these minor code change deploy times can be made faster, but only to a point and often at the cost of consistency. For example, Serverless Framework’s sls deploy -f quickly packages and deploys an individual function.

While this does improve the inner loop cycle time it is still significantly slower than executing in a local environment and creates inconsistency in the cloudside environment. This makes it more likely problems will emerge in upstream environments.
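For a sense of the tradeoff, the two deploy styles with the Serverless Framework CLI look something like this (a rough sketch; exact flags vary by framework version, and the function name is a placeholder):

```shell
# Full deploy: packages the whole stack and runs it through
# CloudFormation. Slow, but keeps the environment consistent.
sls deploy

# Single-function deploy: hot-swaps one function's code package.
# Faster, but bypasses CloudFormation, so the deployed environment
# can drift from your template.
sls deploy function -f processOrders
```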

Suboptimal Approach #2: Mock cloud services locally

Another common approach is to create mock versions of the cloud services to be used in the local environment. Many developers figure out ways to mock AWS service calls to speed up their dev cycles and there are even projects such as localstack which attempt to provide a fully functional local AWS cloud environment. While this approach may initially appear attractive it comes with a number of significant downsides. Most significantly, fully replicating AWS’ rapidly evolving suite of cloud services is a monumental task.

Inevitably there will be inconsistencies between the local mocks and real cloudside services, especially for recently added features. With this approach, the local environment is not consistent with the cloudside environment, and somewhat cruelly, any bugs resulting from these inconsistencies will still need to be debugged and resolved cloudside which means more time spent following Suboptimal Approach #1.

The Optimal Local Serverless Dev Workflow

Pull Cloudside Services to Your Local Code

There is a best practice today for creating a serverless dev environment. The benefits of this approach are:

  1. A tight inner loop (3-7 seconds)

  2. High consistency with the cloudside environment

  3. No need to maintain localhost mocks of cloudside services

Following this approach, you will execute local versions of your function code in the context of a deployed cloudside environment. A key component of this approach is to query the environment and IAM configuration directly from your cloudside development environment. This avoids the need for keeping a local copy of environment variables and other forms of bookkeeping needed to integrate with cloudside services.

The result? Consistency between how your code behaves locally and in the cloudside environment. For example, one common debugging headache results from service-to-service permissions and configurations differing between local and cloudside resources. This setup cuts off that entire class of problems. The bottom line? Fast local development cycles, reduced debugging friction, and the ability to ship new functionality faster as a result.

There are a few steps required to achieve this setup:

  1. Create a cloudside environment which can be used for development (eventually you will want additional environments such as test, staging, and production)

  2. Deploy the desired cloud services and architecture to your cloudside development environment.

  3. Iterate on function code locally ensuring that:

  • Deployed function environment variables are identical to local ones

  • Deployed function IAM permissions are identical to local ones

  • Environment-specific parameters and function configuration are identical to local ones

  • The language runtime environment is identical to AWS Lambda’s (we use SAM local for this bit, if you are wondering)

This approach sets you up to rapidly iterate on code locally, deploying to your dev environment only when you need to make environment configuration changes or have made significant progress on function code. Once the code is ready to PR from the development environment, it can be deployed to other environments (test, staging, prod) via CI/CD automation or manually triggered commands.
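As a concrete sketch of the steps above (the stack and function names are hypothetical, and this assumes the AWS CLI, jq, and the AWS SAM CLI are installed):

```shell
# One-time: deploy the desired architecture to a cloudside dev environment
sam build && sam deploy --stack-name my-app-dev

# Pull the deployed function's environment variables into the file
# format `sam local invoke --env-vars` expects:
#   { "FunctionLogicalId": { "VAR": "value" } }
aws lambda get-function-configuration \
  --function-name my-app-dev-SaveRecord-ABC123 \
  --query 'Environment.Variables' \
  | jq '{SaveRecord: .}' > env.json

# Inner loop: execute local function code against the real cloudside
# services, using a sample event payload in event.json
sam local invoke SaveRecord --env-vars env.json --event event.json
```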

Setting Up Local Cloudside Development

We’ve been iterating on serverless code for a few years now at Stackery, sharpening the saw and homing in on a prescriptive workflow to optimize our development process. Several of us nosed our way into the local cloudside approach. As we shared our experience with other pioneering serverless engineers, we found a growing consensus that this was the “right way to serverless” and that the productivity and flexibility it unlocked were game-changing.

Today we’re announcing the release of stackery local invoke, a command that unlocks this more efficient and logical workflow for serverless projects using AWS Lambda. If you’ve spent any time in serverless development, you know how overdue and crucial this is.

Catch a sneak peek below and sign up for a free Stackery account if you haven’t already. Be sure to check out our doc on Local Cloudside Development, too!

Finally, don’t miss these upcoming Stackery Livestreams on May 1st and 15th where we’ll dive into the topic of Stackery Local and intelligent serverless workflows; Wednesdays at 10 AM PDT. Register here.

Your Development Environment is Missing
Nate Taggart | April 22, 2019

It’s hard to believe, but 10 years ago AWS had only five products. Chief among them, of course, was EC2. Although it feels a little quaint now, back then EC2 was an incredible offering. Anyone could fire up a server in seconds, install some code, and transform that generic server into any service one could imagine.

And for most of the history of the internet, this has been the pattern. Take a server, write or download code to turn it into a service, and ship it. We’ve periodically introduced new abstractions with virtualization (and, if we’re being accurate, EC2 is itself a virtual abstraction), but whether we called it a “server” or a “VM” or a “container” it was still pretty much the same– a generic building block for our service to run upon.

The big drawback here is that lots of services are pretty similar. Most code isn’t really blazing a new engineering trail, we’re just assembling known solutions into newly composed products. This isn’t to say that we don’t add our own unique value to this process, but rather that our custom business logic is a relatively small percentage of the overall code that we have to ship to deliver something meaningful.

If you think about that for a minute, you’ll realize that’s actually pretty bizarre. Imagine if a pastry chef first had to farm wheat before they could bake a cake. Sure there are certain circumstances when controlling the fundamental building blocks is very important, but in general wheat is a commodity which is fairly interchangeable. The same is true of technology. There are times when you’ll need to build a very bespoke solution, and in those times you’ll create a lot of value, but most of the time the value isn’t derived from creating the building blocks– it comes from assembling them together. The master chef doesn’t churn the butter, but they know which butter is best and how much is right. We should think the same way.

If you look at AWS’s product catalog today, you’ll notice that they have a huge variety of pre-built, use-case-driven products that you can pull off the shelf, configure with code, pay-per-use, and use immediately. The fundamental building blocks are services, not servers. (And yes, the services run on servers. Understood.)

The bottom line is: the model has changed and we’re not really talking about it. And we should, because it has some pretty big ramifications. The biggest of which is the development environment.

Cloudside Development

If you’ve been writing software for a while you probably don’t even think about this very much, but your laptop is a server. We kind of take that for granted. We’ve been building on servers for servers for so long that it’s become subconscious. You can configure your laptop to mirror a server so closely that you can build transportable applications that move directly from your local development environment out to a production server with relatively little risk. But, and this is a big but, your laptop is not a cloud provider.

It’s essentially impossible to replicate every AWS service locally. There are some attempts here and there to mock or fake services locally, but AWS releases products at a lightning pace and it’s ultimately a losing battle. When services become the building blocks, local laptop development stops being a helpful approximation of production. And this breaks pretty much everything we take for granted in a server-centric world.

The right solution here is not to try to parody AWS on your laptop, but rather to embrace the fact that building against cloudside services is fundamentally done in the cloud. I worry that people will hear “cloudside development” and think I’m advocating for some kind of SaaS IDE where you’re stuck writing code in your browser and deathly afraid of hitting the ‘back’ button. That’s not it. Write your code wherever you like! But you have to write it against real cloud services.

The ideal cloudside development lets your local code interact directly with real cloud services in a live environment. It requires that developers have access to sandboxed developer environments in their cloud accounts. It necessitates sophisticated permissions schemes, parameterization of the configuration of these cloud services, and a relatively short cycle between editing code and knowing if it works. If it takes several minutes to deploy between each iteration, it’s simply too slow.

Server-centric vs Service-centric

Amazon Web Services has passed $30 billion in annual revenue and is growing at 40%. Every single one of their fastest-growing products is a pre-built service. You can’t argue with the numbers. It’s official: we’ve transitioned from a fundamentally server-centric model to a modern service-centric model.

Still trying to build applications locally before deploying? Then you’re fundamentally missing a real development environment. Set up your cloudside development environment and prioritize more efficient workflows with Stackery today.

5 Common Misconceptions About Serverless in 2019
Gracie Gregory | April 18, 2019

At Stackery, our engineers live and breathe serverless development every day. Because of this, we are constantly evaluating the current soundbites about it; when a field is expanding this quickly, it’s not uncommon to hear a generous handful of misguided assumptions. So, despite the increasing influence of cloudside development, there are still a number of declarations published every week that seem to amplify some common and outdated misconceptions.

It’s important for us to say that these misconceptions are understandable and blameless. We don’t sit around theorizing that there’s a creepy Serverless Myth Machine spreading propaganda— although then our daily work would consist of plotting a supervillain overthrow, which would be epic. Instead, we recognize that the serverless community is still relatively “new” (stay tuned, because we’re about to challenge ourselves there). As such, it’s growing constantly, which can be difficult to keep pace with if you’re not using it daily. Essentially, myths are a predictable symptom of a new chapter in any field. But we’re here to challenge them… For the safety of the galaxy. (Sorry.)

1. “Serverless is a new frontier”

Serverless isn’t unprecedented. In fact, the road to serverless has been paved and ready to ride for decades, so it makes sense that we’ve landed here. We’ve been talking about the concepts of agile software development, microservices, and cloud infrastructure for years, and well before that, key concepts like virtualization set the stage.

Get a bird’s-eye view of these milestones and the overall journey to serverless in our infographic, which our whole team contributed to, and discover precisely how we’ve all been working toward the current era for, well, eras.

2. “Serverless=Functions”

In AWS world, when we talk about functions, we’re talking about Lambdas. As outlined beautifully by Toby Fee in this recent Stackery blog, Lambdas are the dominant form of serverless functions and are essentially lines of code that AWS will run for you in a virtualized environment, sans configuration. So isn’t this essentially serverless? Nope. Serverless takes it a step further. A serverless app is actually made up of a function (Lambda), the resource architecture that lets it behave like a production-grade app, and the secrets to authenticate the database. By resource architecture, we’re referring to cloud services like databases, API gateways, authentication services, IoT, machine learning, and container tasks. Without all three of these components, a Lambda/function alone wouldn’t be able to communicate with the world outside of AWS, and what kind of serverless app would that be?! Trick question… it wouldn’t be one at all.

It’s really always been true that our web applications are more than just the application code: your web app couldn’t run without a configured server, a populated database, and maybe a caching service. In the world of serverless function code, that requirement is more explicit: your functions are just tiny pieces of logic inside a larger system. There are even viable serverless applications that don’t use functions at all for routing or handling! Therefore, it’s possible that the future of serverless won’t include functions at all.

And if you’d like to see how users are really building applications beyond simply using Lambdas, be sure to take a look at Chris Munn’s Serverless is Dead presentation (…around slide 52 for this specific topic).

Hungry for even more info on why we need to consider a serverless app holistically? Catch this recent article for The New Stack, again by Stackery’s Community Developer, Toby Fee.

3. “Serverless is a security nightmare!”

This concern is prudent (you should always weigh security heavily when adopting any new toolset), but it’s also something of a logical fallacy. In fact, serverless isn’t more secure than traditional computing, but it certainly isn’t less secure; it’s a different model entirely. But that probably isn’t enough to assuage your concerns. Some teams hear the word “serverless” and immediately get sweaty, thinking of an enormous total attack surface due to the fact that REST API functions still run on a server and utilize layers upon layers of code to parse API requests. These teams think that, since serverless functions (Lambdas) are able to accept events from dozens of different sources within AWS, they would be “extra vulnerable” using serverless. Right? Not so much.

Instead, you should rely on trusted outside tools, like Twistlock. Used in tandem with Stackery, Twistlock allows developers to increase velocity, observability, policy enforcement, secrets management, and more. As serverless has expanded rapidly, it’s smart to keep application security at the forefront of your team’s mind. But know that your options for serverless innovation, and security, have evolved at a similar rate. Think of serverless as a new landscape. Would you arrive hunting for the same scary intersection you had in your hometown (i.e. worrying about needing to patch servers)? No. Instead, arrive in this new landscape equipped with insurance for its different intersections as you admire the scenery.

For more on how Stackery and Twistlock keep serverless security in check, take a look at this brief.

4. “Serverless is super cheap”

Ever since serverless boarded the Hype-Cycle Express™, one area of perennial debate has been the cost of cloudside development. The good? We get to tell idle-server payments to scram, so you’ll no longer have to funnel money into infrastructure when nobody is requesting data from the server. For software teams eager to explore their serverless options, it’s tempting to use this as a selling point for the powers that be. But this is a misguided approach because it’s not the full picture, and nobody wants surprise fees. Serverless might not mean running your own servers, but it sure does mean managing and paying for services as you see fit. For instance, if you are storing data on AWS EC2, transfer rates apply when your app initiates external transfers. There are a number of services involved when you decide to go serverless, and it’s crucial to be transparent and frank about this when you’re evaluating your serverless options.

So no, serverless isn’t some kind of development loophole that inherently saves money, but it does equip you with the power to choose what you pay for.

Take a look at this cool serverless cost calculator from Peter Sbarski and the A Cloud Guru team to get a look under the hood of what your serverless strategy is costing you.

From our friend Corey Quinn via Twitter

5. “Serverless is super expensive”

On the flip side, many teams looking into adopting serverless get very caught up in the fear of paying for a million services (micro sometimes doesn’t feel so micro) and don’t focus enough on the positive impact not paying for idle servers will have on their bottom line. For one thing, as your business grows, you won’t be paying for new equipment, because a serverless app can handle the same amount of traffic as it would in a parallel universe full of tricked-out servers. Also expensive? Not getting your apps to market on schedule. With serverless, the cost of a server maintenance team is similarly eliminated and the rule is: get your app to market, and then optimize it. Both of these save a significant amount. James Beswick contributed a lot of valuable information on this overall topic in one of the recent Stackery Wednesday livestreams. Replay it on-demand.

Regarding the myriad (potentially expensive) services you can take advantage of in serverless development, there is a solution: CloudWatch. This AWS service gathers data in the form of logs, metrics, and events, allowing for a comprehensive view under the hood of your AWS resources, applications, and services. Stackery’s integration with CloudWatch allows for all changes to be saved as Git commits so you’ll get a panoramic view into every application’s history and underlying infrastructure. And yes, you’ll only pay for what you use with CloudWatch as well.
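As a rough illustration (the function name is a placeholder, and `aws logs tail` assumes AWS CLI v2), pulling a function’s CloudWatch logs from the terminal can be as simple as:

```shell
# Each Lambda Function writes to a log group named /aws/lambda/<function>.
# Stream its logs live:
aws logs tail /aws/lambda/my-function --follow

# Or filter recent events for errors:
aws logs filter-log-events \
  --log-group-name /aws/lambda/my-function \
  --filter-pattern "ERROR"
```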

The takeaway should be that with serverless, you get fine-grained control over what you spend, only paying for services when you actually need them.

As for the cost of Stackery, we’ve recently introduced a free tier for developers and hobbyists that removes the barrier to entry. Our CEO Nate Taggart dives into the particulars in this blog.

What myths have you heard about serverless? What challenges surrounding cloudside development or Stackery have got you stumped? We clearly like to stay on top of such things so let us know on Twitter by linking to this article.

Our weekly livestreams are also a great place to discover what misconceptions you might have— and challenge our engineers with your toughest questions. Visit our livestream homepage to register for the next session!

The Future of Serverless is… Functionless?
Chase Douglas | April 11, 2019

I’m in a position where I converse with our customers and cloud service providers, and I keep track of conversations happening through blogs and social media. I then sift through all this data to identify patterns and trends. Lately, I’ve seen some talk about an architectural pattern that I believe will become prevalent in the near future.

I first heard about this pattern a few years ago at a ServerlessConf from a consultant who was helping a “big bank” convert to serverless. They needed to ingest data from an API and put it in a DynamoDB table. The typical way this is implemented looks like this:

[Diagram: an API Gateway route invoking a “Save Record” Lambda Function, which writes to a DynamoDB table]

There’s nothing inherently wrong with this approach. It will scale just fine… unless you hit your account-wide Lambda concurrency limit. Oh, and you’re also paying for invocations of the Save Record function, which isn’t really providing intrinsic business value. Also, we’ve now added maintenance liability for the code running in Save Record. What if that’s Node.js 6.10, which is approaching EOL for AWS Lambda?

A Functionless Approach

The “big bank” consultant was ahead of the curve and helped them implement a better approach. What if, instead, we could do the following:

[Diagram: the API Gateway route writing to the DynamoDB table directly, with no Lambda Function in between]

This may seem magical, but it’s possible using advanced mechanisms built into AWS API Gateway. Let’s step back and think about what happens when you integrate an API route with a Lambda Function. We’re used to using frameworks like AWS SAM that abstract away how the integration is implemented under the covers, but in simple terms, the API Gateway Route is set up to make an HTTP request to the AWS Lambda service and wait for the response. Some lightweight transformations are used to enable passing request parameters to the Function and to pass response parts (status code, headers, and body) from the Function response back to the HTTP client.

The same techniques can be used to integrate an API Gateway Route with any other AWS service. API Gateway can handle authentication itself, meaning that as long as you can do a small transformation on the incoming API request to generate a request to an AWS service, you don’t need a Lambda Function for many API route actions.
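As a rough sketch of such a direct integration (the API ID, resource ID, table name, region, and role ARN are all placeholders, and a real setup also needs method and integration responses), the AWS CLI can wire a route straight to DynamoDB’s PutItem action, with a small Velocity mapping template reshaping the request body:

```shell
# Mapping template: translate the incoming JSON body into a PutItem
# request. $context.requestId and $input.path() are API Gateway's
# built-in template variables.
cat > request-template.json <<'EOF'
{
  "application/json": "{\"TableName\": \"Records\", \"Item\": {\"id\": {\"S\": \"$context.requestId\"}, \"payload\": {\"S\": \"$input.path('$.payload')\"}}}"
}
EOF

# Integrate the POST method directly with the DynamoDB service
# endpoint; no Lambda Function is involved.
aws apigateway put-integration \
  --rest-api-id abc123 \
  --resource-id def456 \
  --http-method POST \
  --type AWS \
  --integration-http-method POST \
  --uri 'arn:aws:apigateway:us-east-1:dynamodb:action/PutItem' \
  --credentials 'arn:aws:iam::111122223333:role/ApiGatewayDynamoDBRole' \
  --request-templates file://request-template.json
```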

While this functionality has been obscured in API Gateway (for a multitude of reasons), it’s front and center in AppSync: AWS’s fully-managed GraphQL service. With AppSync, DynamoDB Tables, SQL Databases (via Aurora Serverless), Lambda Functions, and ElasticSearch domains have all been elevated as first-class “Data Sources” for GraphQL resolvers. Here’s an example API built using these default “Data Sources”:

[Diagram: an example AppSync GraphQL API wired to cloud service data sources]

This API can query a stock price from a third-party API (Alpha Vantage) and record trades in a DynamoDB table, all without needing to write code for, nor provision, Lambda Functions.

What Skills Do Engineers Need For This New Technique?

All this sounds great, but how do you build and operate API-Integration driven applications? Because this is such a new technique, there aren’t a lot of examples to learn from, and the documentation available is mostly of the “reference” variety rather than “how-tos” or “use cases”.

Developers tend to be comfortable with SDK contracts: “When the API route is invoked, my Lambda Function will get this data in JSON, and I can call that AWS service using their SDK with public docs.” Unfortunately, direct integrations are currently a bit more difficult to build.

Engineers need specific new skills and information. In particular, how to:

  • Write Apache Velocity templates to translate API requests to AWS service actions. This is a standard mechanism for declaratively transforming HTTP requests not only in AWS API Gateway and AppSync, but in many other contexts, including Web Application Firewalls

  • Construct infrastructure templates (e.g. CloudFormation / SAM) to integrate API resources with other AWS services (e.g. DynamoDB Tables and Aurora Serverless Databases)

  • “Operate” a functionless, API-integration-driven application (i.e. where are request logs, how are they structured, how will errors be surfaced and acted upon, etc.)

At Stackery, we help our customers figure out the above. If you want to try your hand at this type of development, sign up for Stackery and don’t be afraid to reach out if you have any questions or need help on your own serverless journey! Drop us a line via email or fill out our contact form.

Also, be sure to join the Stackery livestream on this subject on April 24th at 10 AM PDT. I’ll be hosting it, alongside iRobot’s Richard Boyd. We’ll dive a bit deeper into Lambda Functions and REST APIs and answer any questions you might have!

You're Clouding — But are you Clouding Properly?
Abner Germanow | April 08, 2019

If you even partly believe Marc Andreessen’s 2011 “software is eating the world” comment, it stands to reason that companies who are good at software will be the winners in a digital world. Given this, I find it ironic that little large-scale research has gone into what it takes to be good at software. Despite the $6B a year spent on IT research, there is only one research company with a long-term focus on developers (RedMonk) and one research team with a long-term focus on what it takes to successfully run a world-class software organization (DORA). All the other firms are playing catch-up.

If you aren’t familiar with DORA’s work, you should be. Stemming from the State of DevOps research originally sponsored by Puppet Labs in 2014, the annual study quickly grew in sample size, breadth, and a connection to business outcomes by looking at the financial results of public companies. This set of research includes data from both the annual public survey of software teams along with data from a private benchmarking service. It’s fair to say that Dr. Nicole Forsgren, Jez Humble, and Co. have successfully collected more data on software team behaviors than anyone else in the world.

Defining proper clouding

Among the headlines of the 2018 study mangled by overly excited people on Twitter was the notion that teams using cloud are 23 times as likely to be elite performers relative to other software teams.

Check out this video where Nicole and Jez troll an auditorium full of software leaders on the truth of that line.

According to the NIST definition, to which Nicole and Jez rightly subscribe, what is the actual outcome and definition of using cloud well? From an outcomes perspective, you are 23 times more likely to be an elite-performing software team if you are using the cloud properly, which, by the NIST definition, means your use of cloud services should follow this list of attributes:

1. On-demand self-service. Anyone in the organization can self-service the cloud resources they want on demand.

2. Broad network access. Access cloud resources on any device.

3. Resource pooling. Compute resources should be managed efficiently.

4. The appearance of infinite resources. Or as Dr. Forsgren says, “Bursty like magic.”

5. Measured services. You only pay for what you use.

Serverless and clouding properly

Everyone wants to be an elite performer, so let’s look at this list through the lens of serverless and Stackery. I’m going to reverse the order because #5 and #4 carry the core definitions of what I mean when I say the word serverless.

5. Measured services: you only get charged for what you use.

Check plus on this one. The number of services now available on a charge-by-use basis is skyrocketing. Serverless databases, API gateways, storage, CDNs, secrets management, GraphQL, data streams, containers, functions, and more. These services represent both a focal point of cloud service provider innovation and undifferentiated burden for most companies. When these services are used as application building blocks, it significantly reduces the amount of code a team needs to write in order to deliver an application into production.

Another often overlooked aspect of these pay-for-use services is that they are configured, connected, and managed with infrastructure as code. Stackery makes the process of composing these services into an application architecture super easy, enabling teams to test and swap out the services best suited to the behaviors of their application.

4. The appearance of infinite resources. “Bursty like magic.”

Again, another check plus. Not only are all those services in the prior section evolving and innovating like mad, but most of them can also automatically burst way past the capabilities of what most enterprise cloud ops teams can support. Most can scale right down to zero, too. The nature of this scaling behavior even shifts how developers prioritize how they write code.

See James Beswick’s take on saving time and money with AWS Lambda using asynchronous programming: Part 1 and Part 2.

3. Resource pooling.

Check plus plus? With serverless, resource pooling isn’t even a thing anymore. When you build apps on foundational building blocks of serverless databases, storage, functions, containers, and whatever else you need, resource pooling is the cloud provider’s problem.

2. Broad network access. Access cloud resources on any device.

Ok, sure. I’ll admit, I think this one is intended to throw a wet blanket on private cloud-ish datacenters where resource access is limited to black Soviet-era laptops. Otherwise, the public cloud, including all the serverless offerings, checks the box on this one.

1. On-demand self-service. Anyone in the organization can self-service the cloud resources they want on demand.

With Stackery? Check plus.

Without Stackery, serverless offerings have made some efforts to solve this problem, but as soon as you add in existing databases or multiple cloud accounts, things get pretty tough to manage as you scale up the number of services and collaborators working on the application.

When building server-centric applications, developers replicate a version of the services running in production on their laptops; databases, storage, streaming, and other dependencies. They then test and iterate on the app until it works on the laptop. When developing serverless apps, that localhost foundation shifts to a cloud services foundation where the application code is still cycled on the developer’s laptop, but the rest of the stack and the app as a whole needs to be tested and iterated cloud-side.

This is the opposite of many organizations, where access to a cloud provider account requires an official act from on high as a remnant from the days when compute resources were really expensive. This is also why developers at those same companies have personal cloud accounts. While I’m sure that’s fine from a security perspective (not), even in companies that provision developer accounts, cloud providers don’t have native ways of managing dev/test/prod environments.

That’s where Stackery comes in to automate the namespacing, versioning, and attributes of every environment. For example, dev and test environments should access test databases while prod should access the production database. Stackery customers embracing serverless development generally provision each developer with two AWS dev environments, and then a set of team dev, test, staging, and production environments across several AWS accounts.
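The core idea behind environment namespacing is easy to sketch. Here's a hypothetical illustration in plain Node (not Stackery's actual implementation; the stack and resource names are invented) of what it buys you: every environment resolves to its own resources, so dev code can't accidentally read the production table.

```javascript
// Hypothetical sketch of per-environment resource namespacing.
function resourceName (stack, environment, resource) {
  return `${stack}-${environment}-${resource}`;
}

function configFor (environment) {
  // Dev and test environments get their own tables; only prod sees prod data.
  return {
    tableName: resourceName('orders', environment, 'table'),
    isProduction: environment === 'prod'
  };
}

console.log(configFor('dev').tableName);  // orders-dev-table
console.log(configFor('prod').tableName); // orders-prod-table
```

Multiply this by every resource, every environment, and every AWS account, and you can see why automating it beats maintaining the naming scheme by hand.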

Anyone can become an elite performer

As Dr. Forsgren says, being an elite performer is accessible to all; you just have to execute. With a Stackery and AWS account, your existing IDE, Git repo, and CI/CD, you too can be on your way to being an elite performer. Get started today.

And make sure you go take the 2019 survey!

Can't-miss Serverless Sessions for AWS Summit Santa Clara
Toby Fee

Toby Fee | March 26, 2019

Can't-miss Serverless Sessions for AWS Summit Santa Clara

AWS Summit Santa Clara is one day away and you’ve barely looked at the agenda. You’re not alone. Since there isn’t a dedicated serverless track, we built one for you.

Here are our five recommendations:

1) To find sessions and navigate the event, download the AWS Summits mobile app. Trust us on this one.

2) Tell your boss and architects to grab a box lunch before the noon session and then put these two sessions on their calendars:

These talks will let you understand the serverless landscape and engineer apps that work at scale.

12:00 PM - 01:00 PM A culture of rapid innovation with DevOps, microservices, & serverless


Send your boss to see David Richardson explore how your team can apply DevOps, microservices, and serverless to innovate faster at scale. A version of this talk from re:Invent last year is on YouTube; check it out if you want to be ready with questions.

1:15 PM - 02:15 PM Twelve-factor serverless applications


Learn best practices for building modern, cloud-native applications from Chris Munns. Chris has forgotten more about serverless than most of us have learned, and he’s absolutely worth your time. How serverless apps can follow 12-factor app design principles is a subject we’ve written about extensively at Stackery.

3) Will serverless change your testing and CI/CD workflow? It will, and you should plan for it.

12:30 PM - 01:00 PM Testing serverless applications


A really basic question I hear all the time at meetups is “Cool, but how do you test it?” Aleksandar Simovic explores how serverless changes integration and unit testing. Testing is possible with serverless; come and learn how!

3:45 PM - 04:45 PM CI/CD best practices for building modern applications


Leo Zhandovsky will walk you through best practices for building CI/CD workflows that let you manage your serverless and containerized applications.

Will Stackery get a shoutout for its awesome CI/CD integrations? Only time will tell! A previous version of this talk is online to check out now.

4) Want to iterate and build SAM or serverless.yml templates 70% faster? Or manage your cloud-side dev/test/prod environments in half the time? That’s just the beginning.

We don’t have a booth, but schedule a demo with us and we’ll show you how to create amazing apps with confidence. As a thank-you for your 15 minutes with us, we’ll hook you up with a $10 Starbucks card or a $15 donation to

have a coffee with us

5) MongoDB, serverless, and looking to beat the traffic on the 101? Or more interested in databases than 12-Factor App architecture? These two sessions are for you:

8:00 AM - 08:30 AM Expedite the development lifecycle with MongoDB and serverless

DEM02

Join Ben Perlmutter, senior solutions architect, to learn how to quickly build a blog website backed by MongoDB while utilizing a serverless backend-as-a-service. And then add a bunch more features, because that’s what serverless is all about.

1:15 PM - 02:15 PM What’s new in Amazon Aurora

ADB 204

This session conflicts with 12-factor architecture above, so make your choice between high-level planning or nitty-gritty DB tooling.

Amazon Aurora offers PostgreSQL- and MySQL-compatible databases and promises to be both much faster and cheaper than commercial DBs. If relational is in your wheelhouse, it’s worth a look. Presented by Kamal Gupta and Sirish Chandrasekara.

These features were announced just over a year ago. If you want to see a quick tour from that announcement, it’s online here. This talk should cover features that are already released.

We hope this has been helpful.

Go build.

The Lazy Programmer’s Guide to Web Scrapers
Anna Spysz

Anna Spysz | March 21, 2019

The Lazy Programmer’s Guide to Web Scrapers

I am a proud lazy programmer. This means that if I can automate a task, I will absolutely do it. Especially if it means I can avoid doing the same thing more than once. Luckily, as an engineer, my laziness is an asset - because this week, it led me to write an HTML scraper for our Changelog, just so I wouldn’t have to manually update the Changelog feed on our new app homepage (btw, have you seen our new app homepage? It’s pretty sweet).

Automate the Boring Stuff book cover

If only I could also make robots mow my lawn...

Why automation?

I am a firm believer in lazy programming. That doesn’t mean programming from the couch in your PJs (though that’s what work from home days are for, amirite?). I mean working smarter rather than harder, and finding ways to automate repeated processes.

One of our more recent processes was adding a Changelog to our Docs page, to let our users know what’s going on with Stackery and so we can announce major updates. However, the initial Changelog update process was anything but lazy:

  • Create a Changelog doc
  • Yell at engineers for not updating it*

Yup, room for improvement.

A better way of doing things

After much thought and some trial and error, we came up with a better system. Here’s what our new Changelog process looks like, in a nutshell:

  • Each engineer updates an internal Changelog file in an individual repo as part of the PR process
  • When the PR is merged, a scraper Lambda function is triggered by a Git webhook and compiles the new Changelog item into a single doc
  • That doc is then reviewed a few times a week and the most interesting items are written up in our public Changelog (this is the unavoidably manual part of the process, until we build an AI that writes puns, anyway)
  • The public Changelog is scraped by our changelog scraper Lambda and its four most recent items are pulled, reformatted, and displayed on the app homepage

Here’s the scraper function I built that does the last part of the process:

const AWS = require('aws-sdk');
const cheerio = require('cheerio');
const rp = require('request-promise');
const changelogUrl = '';
const s3 = new AWS.S3();

exports.handler = async () => {
  // options for cheerio and request-promise
  const options = {
    uri: changelogUrl,
    transform: function (html) {
      return cheerio.load(html, {
        normalizeWhitespace: true,
        xmlMode: true
      });
    }
  };

  // remove emojis as they don't show up in the output HTML :(
  function removeEmojis (string) {
    const regex = /(?:[\u2700-\u27bf]|(?:\ud83c[\udde6-\uddff]){2}|[\ud800-\udbff][\udc00-\udfff]|[\u0023-\u0039]\ufe0f?\u20e3|\u3299|\u3297|\u303d|\u3030|\u24c2|\ud83c[\udd70-\udd71]|\ud83c[\udd7e-\udd7f]|\ud83c\udd8e|\ud83c[\udd91-\udd9a]|\ud83c[\udde6-\uddff]|[\ud83c[\ude01\uddff]|\ud83c[\ude01-\ude02]|\ud83c\ude1a|\ud83c\ude2f|[\ud83c[\ude32\ude02]|\ud83c\ude1a|\ud83c\ude2f|\ud83c[\ude32-\ude3a]|[\ud83c[\ude50\ude3a]|\ud83c[\ude50-\ude51]|\u203c|\u2049|[\u25aa-\u25ab]|\u25b6|\u25c0|[\u25fb-\u25fe]|\u00a9|\u00ae|\u2122|\u2139|\ud83c\udc04|[\u2600-\u26FF]|\u2b05|\u2b06|\u2b07|\u2b1b|\u2b1c|\u2b50|\u2b55|\u231a|\u231b|\u2328|\u23cf|[\u23e9-\u23f3]|[\u23f8-\u23fa]|\ud83c\udccf|\u2934|\u2935|[\u2190-\u21ff])/g;
    return string.replace(regex, '');
  }

  // parse the scraped HTML, then make arrays from the scraped data
  const makeArrays = ($) => {
    const titleArray = [];
    const linkArray = [];
    $('div > .item > h2').each(function (i, elem) {
      titleArray[i] = removeEmojis($(this).text());
    });
    $('div > .item > .anchor').each(function (i, elem) {
      linkArray[i] = $(this).attr('id');
    });
    return makeOutputHtml(titleArray, linkArray);
  };

  // format the arrays into the output HTML we need
  const makeOutputHtml = (titleArray, linkArray) => {
    let output = `<h2>Stackery Changelog</h2>
      <p>We're always cranking out improvements, which you can read about in the <a href='' target='_blank' rel='noopener noreferrer'>Stackery Changelog</a>.</p>
      <p>Check out some recent wins:</p>
      <ul>`;
    for (let [i, title] of titleArray.entries()) {
      // only get the latest four entries
      if (i < 4) {
        output += `<li><a href='${linkArray[i]}' target='_blank' rel='noopener noreferrer'>${title}</a></li>`;
      }
    }
    output += `</ul>`;
    return output;
  };

  // request the changelog HTML
  const response = rp(options)
    .then(async ($) => {
      // create output HTML
      const outputHtml = makeArrays($);
      // set the bucket parameters
      const params = {
        ACL: 'public-read', // permissions of bucket
        Body: outputHtml, // data being written to bucket
        Bucket: process.env.BUCKET_NAME, // name of bucket in S3 - BUCKET_NAME comes from env vars
        Key: 'changelog/changelog.html' // path to the object in S3 (new or existing)
      };
      // put the output HTML in a bucket
      try {
        const s3Response = await s3.putObject(params).promise();
        // if the file is uploaded successfully, log the response
        console.log('Great success!', s3Response);
      } catch (err) {
        // log the error if the file is not uploaded
        const message = `Error writing object ${params.Key} to bucket ${params.Bucket}. Make sure they exist and your bucket is in the same region as this function.`;
        console.error(message, err);
      }
      // build an HTTP response
      const res = {
        statusCode: 200,
        headers: {
          'Content-Type': 'text/html'
        },
        body: outputHtml
      };
      return res;
    })
    .catch((err) => {
      console.error('Error scraping the changelog:', err);
    });

  return response;
};

Of course, any lazy serverless developer will use Stackery to avoid the tedium of writing their own YAML and configuring permissions, so naturally, I built this scraper in our app:

The stack in Stackery

It includes three cloud resources:

  • A Timer (AWS CloudWatch schedule event) that triggers the scraper function once a day
  • A Lambda function that pulls HTML content from our public Changelog, trims the data to just titles and links, outputs to a new HTML file and puts it in the designated S3 bucket
  • An S3 bucket that holds our HTML file, which is fetched by the app and displayed on the homepage
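For reference, the wiring between those three resources looks roughly like the following AWS SAM fragment. This is a hand-written illustration, not the template Stackery actually generates; the logical names are invented.

```yaml
Resources:
  ChangelogScraper:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Environment:
        Variables:
          BUCKET_NAME: !Ref ChangelogBucket
      Policies:
        - S3CrudPolicy:
            BucketName: !Ref ChangelogBucket
      Events:
        DailyScrape:
          Type: Schedule          # the Timer: a CloudWatch schedule event
          Properties:
            Schedule: rate(1 day)
  ChangelogBucket:
    Type: AWS::S3::Bucket
```

Stackery builds this kind of template (plus permissions and environment plumbing) from the visual editor, which is the whole point of the lazy approach.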

If you’d like to implement your own web scraper that outputs to an S3 bucket, grab the code from our GitHub repository - after all, the best lazy programmers start with others’ code!

* Just kidding. There’s no yelling at Stackery, only hugs.

Cloud-Side Development For All with Stackery's Free Tier
Nate Taggart

Nate Taggart | February 26, 2019

Cloud-Side Development For All with Stackery's Free Tier

New Pricing & Free Tier

Today, I’m thrilled to announce our new free tier and pricing model for teams adopting modern serverless application development. I believe cloud-side development is the future and I want everyone to experience it at their own pace. We also now offer a model that can scale with either teams or workflows depending on how aggressively you decide to adopt cloud-side development.

How Software Development in the Cloud Has Changed

While we’ve been at this for a while, it’s worth reviewing where development workflows came from and what’s changing.

When “the cloud” first emerged, the prevailing development pattern was to spin up a generic (EC2) server and install software on it to customize its behavior. These customized behaviors might include event streams, data stores, database tables, APIs, or whatever else an application requires. That software, however, is now offered by the cloud providers as a pay-per-use capability that can be configured and requested with infrastructure-as-code (IaC).

Cloud providers have released hundreds of new services which, at their core, are purpose-built, use-case-driven, software-managed services. It’s no longer necessary to spin up a cluster of servers to stand up Kafka if you need a message streaming service, because it’s much faster and arguably cheaper (in terms of people, overhead and maintenance) to simply pull a streaming service like Kinesis or a managed Kafka service off the shelf of the Amazon Web Services menu.

You Can’t Replicate AWS on Your Laptop

The rise of these managed cloud services has fundamentally changed the model for modern application development.


Of course, the core advantage of this model is that it has become easy to architect at-scale systems in the cloud with very little operational overhead. The consequence of this change, however, is that the software development lifecycle has fundamentally changed. No longer can your laptop act as a development server (localhost). This was a great tool for replicating and testing server-like behavior in local development when the fundamental infrastructural underpinning everything was a server. But now, rather than raw servers, applications are composed of a collection of managed cloud services. Localhost has become a poor representation of the production environment, as it is impossible to replicate all of the functionality of AWS locally on a laptop.

This is driving a shift toward cloud-side development. This doesn’t mean you need to write code through a web browser; your favorite IDE will still work for your application code. But to test and iterate on the full application stack through the development cycle, you must now stand up development instances of the managed services you’re using to compose an application. Crucially, cloud-side development is about service composition: composing your application architecture from off-the-shelf services to accelerate at-scale application development and rapidly iterating on a cloud-native implementation of your application.

What does this tell us? Cloud-side development isn’t just the future, it’s now and it’s big. How big? At re:Invent 2018, AWS executives proclaimed hundreds of thousands of developers are actively developing with AWS’s menu of managed cloud services and Lambda. That’s big.

What tooling does cloud-side development require?

Here’s the good news: your IDE, code repository, and CI/CD systems don’t change. What changes? How you manage stacks in the cloud and how you build and iterate on stacks with your team.

Stackery now offers easy-to-consume tooling and environment management capabilities to every organization trying to deliver faster. To build Stackery, we’ve thought about, experienced, and built safeguards around the ways teams can get into trouble composing applications out of managed cloud services. All while keeping every output in standard CloudFormation, in case you decide to go back to doing things the hard way.

Managing Active Stacks in the Cloud

Cloud-side development tools must automate and accelerate the iterative nature of development work on top of cloud managed services. This includes rapidly configuring, deploying, sandboxing, namespacing, and managing individual instances of cloud services for each developer involved in the development. At Stackery, we call these active stacks. Cloud-side tools will include automation around packaging and building your code, version controlling your IaC, managing developer environments, instrumentation, governance, and automating the release process across multiple environments and cloud accounts.

Building Stacks

Until recently, cloud-side development of complex applications using managed cloud services was limited to engineers dedicated to cloud innovation (and YAML). That human investment is still useful but is better applied to setting patterns than to troubleshooting misplaced characters. Infrastructure as code is the new assembly language: machine-readable and unforgiving, which means tooling needs to help developers do things like attach a handler to a resource in seconds while setting all the correct permissions. Speaking of resources…

New! Amazon Aurora Serverless and Amazon Cognito

We owe a lot of kudos to our earliest customers who pushed us to add the most popular services needed to visually compose modern applications. Most recently, Amazon Aurora Serverless and Amazon Cognito (user authentication). We’ve also just added the “Anything Resource,” which enables our users to add any AWS CloudFormation service beyond the (now 20!!) resource types currently available in the Stackery resource palette. We like to say it takes a serverless team to keep up with a serverless team.

The Stackery Developer & Teams Plans

And now, with the introduction of our free Developer plan, we’re excited to unleash the possibilities of cloud-side development to everyone who wants to experience the power of the cloud. The Stackery Developer plan includes 6 free active stacks, which is plenty to get a side-project or proof of concept up and running. After you consume the first six stacks or if you want more support or collaborators in the account, additional active stacks can be added for $10 a month per stack. More details here.

Bring your own IDE, Git repository (blank or existing AWS SAM or serverless.yml files), AWS account, and your CI/CD system if you like - Stackery will accelerate you into cloud-development. It’s time to go build.

Further Reading On Cloud-Side Development:

The Anatomy of a Serverless App

We call an application deployed into a cloud service provider an active stack. This stack has three primary components: the functions where the business logic resides, the managed cloud services that serve as the building blocks of the application, and the environmental elements that define the specific dependencies and credentials of a particular instance of the first two components. This anatomy of a serverless application post goes into full detail on what serverless teams will build and manage.

Our friends at Lumigo on the need to test cloud-side (and some slower and manual non-Stackery methods for doing so).

Corey Quinn of Last Week in AWS (sign up for the snark, stay for the news, pay for the bill reduction) sparked this conversation on Twitter.

Likewise, this “localhost is dead to me” rant by Matt Weagle, organizer of the Seattle Serverless Days, won him a shiny new Stackery account. This thread also garnered some helpful nuance and commentary from Amazon engineers James Hood, Preston Tamkin, and iRobot’s Ben Kehoe.

Get the Serverless Development Toolkit for Teams

now and get started for free. Contact one of our product experts to get started building amazing serverless applications today.
