Stacks on Stacks

The Serverless Ecosystem Blog by Stackery.

AWS re:Invent: Serverless, Stackery, and Corey Quinn of Last Week in AWS
Abner Germanow | November 14, 2018

Welcome, savvy builder. If you’ve made it to our corner of the Internet and are headed to re:Invent, you are in the right place.

We want you to leave Las Vegas with the savvy to choose how and when to apply the growing menu of serverless capabilities to your initiatives. To help you, we’re sending our serverless-first engineers to Las Vegas with three goals.

  1. Share experiences building AWS serverless apps
  2. Show app builders how Stackery + AWS serverless offerings accelerate velocity and confidence
  3. Connect the AWS serverless community

Sharing Our Serverless Experience

As we build our serverless-first service for teams, we examine the developer and operations experience to make both faster and safer for our customers. We’ve learned a few things along the way about what makes serverless awesome, when to insert container services into the mix, and how these workflows differ from services we’ve built in the past.

At our booth we’ll be holding demonstrations walking through what we’ve learned and where we see developers and teams working differently. Keep an eye on Twitter for exact timing.

Booth Talks Include:

  • ICYMI: Corey Quinn from Last Week in AWS (Thurs @ 2:15)
  • PSA: Permission Scoping Accounts, Services, and Humans
  • Namespacing for fun and dev/test/prod environments
  • A look at the new AWS [REDACTED]
  • Where and when we use containers in our serverless-first apps
  • Using existing resources with serverless applications
  • How to build state machines around serverless apps
  • Instrumentation and monitoring serverless apps with Stackery and Epsagon
  • Testing serverless apps
  • Secrets Manager vs. Parameter Store
  • Lambda@Edge vs. Lambda: What You Should Know
  • Systems Manager Parameter Store

Show off Stackery’s Serverless Acceleration Software

Whether you are new to Stackery or an old pro, a lot has changed in just the last month!

“We don’t need a whiteboard, I’ll mock it up in Stackery.” -Customer using Stackery to break a monolith into microservices.

We’ve made it even easier to visually mock up the architectural intent of your app with a new editor that toggles between template and visual architecture views and takes you straight to packaging and deployment. We’ve also added GraphQL support, the ability to import projects built with the SAM or Serverless Framework, Lambda@Edge support, and much more.

Drop by to see the latest, or better yet, sign up for a slot and we’ll make sure our engineers are dedicated to you.

Corey Quinn of Last Week in AWS and Connecting the Community

AWS moves fast. Almost as fast as serverless-first teams. On Thursday at 1:00, Corey Quinn of the Last Week in AWS newsletter will be at our booth for an exclusive ICYMI to review announcements you probably missed. You can get his snark-a-take (I just made that up) on the keynotes, serverless, and more.

Our invite-only serverless insiders party is designed to connect the pioneers with those who are ramping up in 2019. If you are interested in an invite, drop us a note.

Finally, like all serverless teams, we abhor repetition, so for a guide to serverless sessions, check out these guides:

See you in Vegas!

How to find us at re:Invent: Booth #2032 - We’re about 40 feet from the dev lounge in the Sands/Venetian Hall B. Contact our team: reinvent@stackery.io

The Case for Minimalist Infrastructure
Garrett Gillas | November 13, 2018

If your company could grow its engineering organization by 40% without increasing costs, would it? If your DevOps team could ship more code and features with fewer people, would it want to? Hopefully, the answer to both of these questions is ‘yes’. At Stackery, we believe in helping people create the most minimal application infrastructure possible.

Let me give you some personal context. Last year, I was in charge of building a web application integrated with a CMS that required seven virtual machines, three MongoDB instances, a MySQL database, and CDN caching for production. In addition, we had staging and dev environments with similar quantities of infrastructure. Over the course of two weeks, we worked with our IT-Ops team to get our environments up and running and started building the application relatively painlessly.

After we got our application running, something happened. Our IT-Ops team went through their system-hardening procedure. For those outside the cybersecurity industry, system hardening can be defined as “securing a system by reducing its surface of vulnerability”. This often includes things like changing default passwords, removing unnecessary software, removing unnecessary logins, and disabling or removing unnecessary services. This sounds fairly straightforward, but it isn’t.

In our case, it involved checking our system against a set of rules like this one for Windows VMs and this one for Linux. Because we cared about security, this included closing every unused port on every single machine. As the project lead, I discovered three things by the end.

  • We had spent much more people-hours on security and ops than on development.
  • Because there were no major missteps, this was nobody’s fault.
  • This should never happen.

Every engineering manager should have a ratio in their head of work hours spent in their organization on software engineering versus other related tasks (ops, QA, product management, etc.). The idea is that organizations that spend the majority of their time actually shipping code will perform better than groups that spend a larger percentage of their time on operations. At this point, I was convinced that there had to be a better way.

Serverless Computing

There have been many attempts since the exodus to the cloud to make infrastructure easier to manage in ways that require fewer personnel hours. We moved from bare-metal hardware to datacenter VMs, then to VMs in the cloud, and later to containers.

In November 2014, Amazon Web Services announced AWS Lambda. The purpose of Lambda was to simplify building on-demand applications that are responsive to events and data. At Stackery, we saw a big opportunity to let software teams spend less time on infrastructure and more time building software. We have made it our mission to make it easier for software engineers to build highly scalable applications on the most minimal, modern cloud infrastructure available.

Webhooks Made Easy with Stackery
Anna Spysz | November 08, 2018

Webhooks are about as close as you can get to the perfect serverless use case. They are event-driven and generally perform just one (stateless) function. So of course we wanted to show how easy it is to implement webhooks using Stackery.

Our newest tutorial, the Serverless Webhooks Tutorial, teaches you to create a GitHub webhook and connect it to a Lambda function through an API Gateway. Or, to put it in simple terms: when your GitHub repository does a thing, your function does another thing. What that second thing does is completely up to you.

Here are some possible use cases of a GitHub webhook:

  • Connect your webhook to the Slack API and have Slack ping your team members when someone has opened a PR
  • Have your function deploy another stack when its master branch is updated
  • Expanding on that, you can even have your function deploy to multiple environments depending on which branch has been updated
  • Write an Alexa Skill that plays a certain song when your repository has been starred - the possibilities are endless!

The best part is that GitHub allows you to be very specific about which events you subscribe to, and you can narrow things down further in your function logic.

So for example, do you want to be notified by text message every time Jim pushes a change to the master branch of your repository, because Jim has been known to push buggy code? You can set that up using webhooks and Stackery, and never have master go down again (sorry, Jim).
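
To make that concrete, here is a minimal sketch of the Lambda function behind such a webhook, assuming an API Gateway proxy integration and GitHub’s standard push-event payload; the branch, the username, and the notification step are all illustrative:

```python
import json

def handler(event, context):
    """Filter GitHub push events delivered through an API Gateway proxy."""
    payload = json.loads(event.get("body") or "{}")

    ref = payload.get("ref", "")                        # e.g. "refs/heads/master"
    pusher = payload.get("pusher", {}).get("name", "")  # GitHub username

    if ref == "refs/heads/master" and pusher == "jim":
        # Hypothetical follow-up: publish to SNS, call the Slack API, etc.
        print(f"Heads up: {pusher} pushed to master")

    # GitHub treats any 2xx response as a successful delivery.
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```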

Check out the tutorial to see what else you can build!

Serverless Secrets: The Three Things Teams Get Wrong

Sam Goldstein | November 07, 2018

Database passwords, account passwords, API keys, private keys, other confidential data… A modern cloud application with multiple microservices is filled with confidential data that needs to be separated and managed. In the process of researching how we would improve and automate secrets management for Stackery customers, I found that much of the advice online is bad. For example, quite a few popular tutorials suggest storing passwords in environment variables or AWS Parameter Store. These are bad ideas that make your serverless apps less secure and introduce scalability problems.

Here are the top 3 bad ideas for handling serverless secrets:

1. Storing Secrets in Environment Variables

Using environment variables to pass configuration into your serverless functions is a common best practice for separating config from your source code. However, environment variables should never be used to pass secrets, such as passwords, API keys, credentials, and other confidential information.

Never store secrets in environment variables: the risk of accidental exposure is exceedingly high, and that applies just as much to the environment variables you pass to Lambda functions. For example:

  • Many app frameworks print all environment variables for debugging or error reporting.
  • Application crashes usually result in environment variables getting logged in plain text.
  • Environment variables are passed down to child processes and can be used in unintended ways.
  • There have been many malicious packages found in popular package repositories which intentionally send environment variables to attackers.

At Stackery we never put secrets in environment variables. Instead we fetch secrets from AWS Secrets Manager at runtime and store them in local variables while they’re in use. This makes it very difficult for secrets to be logged or otherwise exfiltrated from the runtime environment.
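
In practice that pattern looks something like the sketch below; the secret name is a hypothetical, and the module-level cache simply spares warm invocations a network round trip:

```python
import json
import boto3

client = boto3.client("secretsmanager")
_cache = {}  # reused across warm invocations; never written to the environment

def get_secret(secret_id):
    """Fetch a secret at runtime and keep it in local variables only."""
    if secret_id not in _cache:
        response = client.get_secret_value(SecretId=secret_id)
        _cache[secret_id] = json.loads(response["SecretString"])
    return _cache[secret_id]

def handler(event, context):
    # Hypothetical secret name, scoped per environment.
    creds = get_secret("myapp/dev/database")
    # ... connect using creds["username"] and creds["password"] ...
    return {"statusCode": 200}
```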

2. Storing Secrets in the Wrong Places

If you’re dealing with secrets, they should always be encrypted at rest and in transit. By now we all know that keeping secrets in source code is a bad idea. So if secrets can’t live in git with your code, where should you keep them? There’s a lot of bad advice online suggesting AWS Systems Manager Parameter Store (aka SSM) as a good place to store your secrets. Like environment variables, Parameter Store is good for configuration but terrible for secrets.

AWS Systems Manager Parameter Store falls short as a secrets backend in a few key areas:

  1. Parameters aren’t encrypted at rest by default and are often displayed in the AWS Console UI. Encryption only occurs for entries using the recently added SecureString type.
  2. Parameter Store is free but heavily rate-limited. It doesn’t accommodate traffic spikes, so you can’t rely on fetching secrets at runtime under load. To avoid throttling your Lambdas, you end up passing Parameter Store values in through environment variables.
  3. You should never store secrets in environment variables.

At Stackery we use AWS Secrets Manager, which stores secrets securely with fine-grained access policies, auto-scales to handle traffic spikes, and is straightforward to query at runtime.

3. Bad IAM Permissions

Each function in your application should only have access to the secrets it needs to do its work. However, it’s very common for teams to run configurations (often unintentionally) in which every function is granted access to all secrets from all environments by default. These “/*” permissions mean a compromised function in a test environment can be used to fetch all production secrets from the secrets store. This is a bad idea for obvious reasons. Secrets access should be tightly scoped by environment and usage, with functions defaulting to no access at all.

At Stackery we automatically scope an IAM role per function and Fargate container task, limiting AWS Secrets Manager access to the environment the function is running in and to the set of secrets required by that specific function.
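
In a SAM template, that kind of scoping might look roughly like the sketch below, where EnvironmentName is an assumed template parameter and the function and secret names are illustrative:

```yaml
ProcessOrders:
  Type: AWS::Serverless::Function
  Properties:
    Handler: src/orders.handler
    Runtime: python3.6
    Policies:
      - Statement:
          - Effect: Allow
            Action: secretsmanager:GetSecretValue
            # Only this environment's secrets, and only the ones this function needs
            Resource: !Sub arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:${EnvironmentName}/orders/*
```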

Managing Serverless Environment Secrets with Stackery

Our team has learned a lot about managing serverless secrets by running production serverless applications and working with many serverless teams and pioneers. We’ve integrated these best practices back into Stackery so serverless teams can easily layer secure secrets management onto their existing projects. If you’re curious to read more about how Stackery handles secrets, check out Environment Secrets in the Stackery Docs.

Why 2019 is a Bad Year to Start Learning Linux
Toby Fee | November 05, 2018

They don’t teach it in coding boot camps, but everyone talks about it. Every decent hacking sequence in a movie involves screenshots of it. You see the stickers on mysterious black laptops at meetups, and XKCD can’t stop talking about it. Linux, it seems, is the shibboleth of the true nerd.

But if you’re in your first two years of development, you really shouldn’t waste your time on it. Like robotics or home brewing, Linux has become a symbol of nerdy acumen, without actually entailing any of the skills you need to be a web developer. Right now in an IT department, someone is setting down a mug with a Gnome logo to angrily type that Linux is the UNDERLAyER of the internet.

That’s true! Nearly all the servers that serve content online are some flavor of Linux, including the servers on which highly abstracted “serverless” apps are hosted.

But does that obligate us, the humble full-stack engineers, to learn Linux? About its memory management, how it handles overlarge log files, and how the kernel schedules CPU instructions?

Despite the popularity of hobbyist robotics with developers, I would challenge you to find a web developer who can clearly explain voltage gradients, or even apply Ohm’s law. Electronics, too, are basic to the operation of every server, but we are comfortable saying that this information is “too basic” for a beginner.

Even the term “too basic” is misleading: these skills are implicitly related in their application but not in their day-to-day use. Learning to play piano really won’t make you a better roadie.

So I don’t have to use the command line?

Where did I say that? NPM, Grunt, Gulp, AWS, and Git all require using the command line, and these days the commands on a Windows machine can emulate those of a Unix command line like the one you find on OS X. Command line tools are power tools, and a later post will cover some key command line skills using Unix commands. So yes, some form of command line/shell tools will still be in use 10 years from now.

What should you learn instead of Linux?

Since the first Heroku free tier, Linux has been on its way out as a key web developer skill. The future is “highly managed” environments where you control the code that gets run but don’t waste time configuring the storage, network layer, and other essential pieces of managing a real server.

It’s worth noting that every 1.5 years or so there is an AWS failure across multiple availability zones, and every time someone is right there to insist we need to reconsider AWS as a solution. The same people are notably mum when privately hosted services go down every month or so.

Serverless is the Future

The first time I wrote ‘serverless’ in this post, I wrapped it in scare quotes. It’s true that any kind of hosted or ‘cloud-based’ tool requires the caveat that every bit of code we run is running on a computer somewhere.

But the image of ‘just someone else’s computer’ makes us imagine a single fitful machine in a corner somewhere. While using a serverless architecture does lock you in to a single vendor (we’re probably discussing AWS), the reality is that the reliability of any of these services is much higher than any self-hosted tool. Exponentially so. AWS experiences outages, but rarely across multiple availability zones.

How to Skip Linux

Recently I wrote about how to go from frontend to full-stack. The general recipe for being a full-stack developer who doesn’t use Linux is quite simple:

  • Create API endpoints to receive requests from your frontend.
  • Lambdas to perform business logic and get stuff to and from the DB (one per API endpoint).
  • DynamoDB to store data.

If you have a working React app, maybe even served from Amazon S3, the three pieces above are all you need! And while the above can be deployed in a few hours using the AWS console, Stackery offers a more scalable, team-based approach that doesn’t require learning YAML and can be up and running in 10 minutes.
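
As a sketch of how small each piece is, here is roughly what one of those Lambdas might look like, assuming an API Gateway proxy event and a hypothetical DynamoDB table named Notes:

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Notes")  # hypothetical table name

def handler(event, context):
    """One Lambda per API endpoint: this one saves a note from the frontend."""
    body = json.loads(event.get("body") or "{}")
    table.put_item(Item={"id": body["id"], "text": body.get("text", "")})
    return {
        "statusCode": 201,
        "headers": {"Access-Control-Allow-Origin": "*"},  # frontend lives on S3
        "body": json.dumps({"saved": body["id"]}),
    }
```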

Conclusions

Great developers are not monomaniacal. Great developers take breaks, weekends off, and sick days. Great developers have big white gaps in their GitHub commit history. But great developers are curious, and they learn things that require programming-like thinking that don’t have anything to do with work.

So take some time to learn to knit! Do some drone photography, or get really good at Beyblades. If you’re looking for a new hobby, Linux can be fun! A great weekend project for those who love open source but don’t like MP3s.

Scaling to the Challenges
Farrah Campbell | November 01, 2018

Can you create something by creating the appearance of something? Can a team increase its output by looking busy? Does “fake it til you make it” work with team culture?

It sounds ridiculous to suggest pretending can make it real, but smiling can make a person feel happier, and Botox can make a person less able to feel emotions they are unable to express.

When it comes to team culture, many managers work hard to create an appearance of inclusivity without creating the environment to support it. Writing mission statements is a lot easier than letting go of employees because they won’t use other team members’ preferred pronouns.

This strategy isn’t always self-defeating, but after spending years working on and building teams that support each other at every level, I’ve found one thing they all have in common: leaders who actively embrace humility.

More than anything else, a focus on humility and learning is at the heart of teams that function well:

  • Allows teams to shift their culture where it needs to go.
  • Makes change less intimidating.
  • Creates more opportunities to make the right career choices, as well as to assist others in making those choices.

Shifting to an Inclusive Culture

Teams that strive to hire people from different backgrounds can do the hard work when the team culture needs to improve. When I started the role I’m in now I felt woefully underprepared, but in retrospect I was being overly self-critical and a realistic self-assessment wasn’t really possible.

When we decide to put on a brave face and tell everyone we’re ready for a big challenge, the trick is being aware of it. When you first go out for lead engineer, you’re aware that you’ll be stretching your muscles, and the same should be true of your team.

You Don’t Have to “Fake it ‘Till You Make it”

Instead of misrepresenting where your team is at, try the honest version: “This is new territory for us but we’re excited to figure it out.” Or: “We’re aware that other companies don’t do a great job of this and we want to do better. We don’t have all the answers but we’re willing to learn.”

When you look at yourself as a pioneer ready to meet the challenges of new territory, pushing yourself to do better than your competition, team culture can be just as much of an ‘edge’ as technical innovation. So while I wouldn’t recommend dishonesty, I love it when teams admit they’re up for a challenge.

Creating Career Opportunities

It can seem so arrogant to overreach! To push ourselves to do something harder than before. But the awareness of that reach is what’s driven me to career success.

Because in the end, when I was in unfamiliar territory, trying to make new and exciting business relationships happen for my team, I was fully aware that I was out of my comfort zone. That awareness led me to do the work and stay humble. It’s perfectly fine to convince others that we know what we’re doing; it’s when we convince ourselves that this is easy that humility slips into arrogance.

Humility really is the key: if we maintain self-awareness then we know that success is not assured. We’re willing to do the research, to take good advice, and when all else fails to just ask for what we want or need to know.

No one writes a new web service that can take a million users on its first day. Success isn’t about doing the things we’re prepared to do, it’s about scaling to the challenges we’re brave enough to face.

Five Ways Serverless Changes DevOps
Sam Goldstein | October 31, 2018

I spent last week at DevOps Enterprise Summit in Las Vegas, where I had the opportunity to talk with many people from the world’s largest companies about DevOps, serverless, and the ways they are delivering software faster with better stability. We were encouraged to hear of teams using serverless for everything from cron jobs to core bets accelerating digital transformation initiatives.

Lots of folks had questions about what we’ve learned running the serverless engineering team at Stackery, how to ensure innovative serverless projects can coexist with enterprise standards, and most frequently, how serverless changes DevOps workflows. Since I now have experience building developer enablement software out of virtual machines, container infrastructures, and serverless services, I thought I’d share some of the key differences with you in this post.

Developers Need Cloud-Side Environments to Work Effectively

At its core, serverless development is all about combining managed services in the cloud to create applications and microservices. The serverless approach has major benefits. You can move incredibly fast, outsourcing tons of infrastructure friction and orchestration complexity.

However, because your app consists of managed services, you can’t run it on your laptop. You can’t run the cloud on your laptop.

Let’s pause here to consider the implications of this. With VMs and containers, deploying to the cloud is part of the release process. New features get developed locally on laptops and deployed when they’re ready. With serverless, deploying to the cloud becomes part of the development process. Engineers need to deploy as part of their daily workflow developing and testing functionality. Automated testing generally needs to happen against a deployed environment, where the managed service integrations can be fully exercised and validated.

This means the environment management needs of a serverless team shift significantly. You need to get good at managing a multitude of AWS accounts and developer-specific environments, avoiding namespace collisions, injecting environment-specific configuration, and promoting code versions from cloud-side development environments toward production.

Note: While there are tools like SAM CLI and localstack that enable developers to invoke functions and mimic some managed services locally, they tend to have gaps and behave differently from a cloud-side environment.
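
For instance, one common way to avoid namespace collisions is to parameterize resource names by environment in the template itself. A minimal CloudFormation-style sketch, with made-up app and bucket names:

```yaml
Parameters:
  EnvironmentName:
    Type: String  # e.g. dev-alice, test, prod

Resources:
  UploadsBucket:
    Type: AWS::S3::Bucket
    Properties:
      # One template, many environments: names never collide across deploys
      BucketName: !Sub myapp-${EnvironmentName}-uploads
```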

Infrastructure Management = Configuration Management

The serverless approach focuses on leveraging the cloud provider to do more of the undifferentiated heavy lifting of scaling the IT infrastructure, freeing your team to maintain laser focus on the unique problems your organization solves.

To repeat what I wrote a few paragraphs ago, serverless teams build applications by combining managed services that have the most desirable scaling, cost, and durability characteristics. However, here’s another big shift. Developers now need familiarity with a hefty catalog of services. They need to understand their pros and cons, when to use each service, and how to configure each service correctly.

A big part of solving this problem is to leverage Infrastructure as Code (IaC) to define your serverless infrastructure. For serverless teams this commonly takes the form of an AWS Serverless Application Model (SAM) template, a serverless.yml, or a CloudFormation template. Infrastructure as Code provides the mechanism to declare the configuration and relationships between the managed services that compose your serverless app. However, because serverless apps typically involve coordinating many small components (Lambda functions, IAM permissions, API & GraphQL gateways, datastores, etc.), the YAML files containing the IaC definition tend to balloon to hundreds (or sometimes thousands) of lines, making them tedious to modify and hard to keep consistent with good hygiene. Multiply the size and complexity of a microservice IaC template by your dev, test, and prod environments, the engineers on the team, and the number of microservices, and you quickly get to a place where you will want to carefully consider how to manage the IaC layer and avoid being sucked into YAML hell.
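
For a sense of the shape this takes, here is a deliberately tiny SAM sketch declaring one function and the API route that triggers it; the handler path and names are illustrative. A real microservice repeats this pattern, plus permissions, datastores, and environment config, which is exactly how the templates balloon:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/hello.handler  # illustrative path
      Runtime: python3.6
      Events:
        HelloApi:
          Type: Api
          Properties:
            Path: /hello
            Method: get
```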

Microservice Strategies Are Similar But Deliver Faster

Serverless is now an option for both new applications and refactoring monoliths into microservices. We’ve seen teams deliver highly scalable, fault-tolerant services in days instead of months to replace functionality in monoliths and legacy systems. We recently saw a team employ the serverless strangler pattern to transition a monolith to GraphQL serverless microservices, delivering a production-ready proof of concept in just under a week. We’ve written about the Serverless Strangler Pattern before on the Stackery blog, and I’d highly recommend you consider this approach to technical transformation.

A key difference with serverless is the potential to eliminate infrastructure and platform provisioning cycles completely from the project timeline. By choosing managed services, you’re intentionally limiting yourself to a menu of services with built-in orchestration, fault tolerance, scalability, and defined security models. Building scalable distributed systems is now focused exclusively on the configuration management of your infrastructure as code (see above). Just whisper the magic incantation (in 500-1000 lines of YAML) and microservices spring to life, configured to scale on demand, rather than being brought online through cycles of infrastructure provisioning.

Regardless of platform, enforcing cross-cutting operational concerns when the number of services increases is a (frequently underestimated) challenge. With microservices it’s easy to keep the pieces of your system simple, but it’s hard to keep them all consistent as the number of pieces grows.

What cross-cutting concerns need to be kept in sync? It’s things like:

  • access control
  • secrets management
  • environment configuration
  • deployment
  • rollback
  • auditability
  • so many other things…

Addressing cross-cutting concerns is an area where many serverless teams struggle, sometimes getting bogged down in a web of inconsistent tooling, processes, and visibility. However, the serverless teams that do master cross-cutting concerns are able to deliver on microservice transformation initiatives much faster than those using other technologies.

Serverless is Innovating Quickly

Just like serverless teams, the serverless ecosystem is moving fast. Cloud providers are pushing out new services and features every day. Serverless patterns and best practices are undergoing rapid, iterative evolution. There are multiple AWS product and feature announcements every day. It’s challenging to stay current on the ever expanding menu of cloud managed services, let alone best practices.

Our team at Stackery is obsessed with tracking changes in the serverless ecosystem, identifying best practices, and sharing these with the serverless community. AWS Secrets Manager, easy authorization hooks for REST APIs in AWS SAM, 15-minute Lambda timeouts, and AWS Fargate containers are just a few examples of recent serverless ecosystem changes our team is using. Only a serverless team can keep up with a serverless team. We have learned a lot of lessons, some of them the hard way, about how to do serverless right. We’ll keep refining our serverless approach and can honestly say we’re moving faster and with more focus than we’d ever thought possible.

Patching and Capacity Distractions Go Away (Mostly)

Raise your hand if the productivity of your team ever ground to a halt because you needed to fight fires or were blocked waiting for infrastructure to be provisioned. High-profile security vulnerabilities are being discovered all the time. The week Heartbleed was announced, a lot of engineers dropped what they had been working on to patch operating systems and reboot servers. Serverless teams intentionally don’t manage OSs. There’s less surface area for them to patch, and as a result they’re less likely to get distracted by freshly discovered vulnerabilities. This doesn’t completely remove a serverless team’s need to track vulnerabilities in their dependencies, but it does significantly scope them down.

Capacity constraints are a similar story. Since serverless systems scale on demand, it’s not necessary to plan capacity in the traditional sense of managing a buffer of (often slow to provision) capacity to avoid hitting a ceiling in production. However, serverless teams do need to watch for a wide variety of AWS resource limits and request increases before they are hit. It is important to understand how your architecture scales and how that will affect your infrastructure costs. Instead of your system breaking, it might just send you a larger bill, so understanding the relationship between scale, reliability, and cost is critical.

As a community we need to keep pushing the serverless envelope and guiding more teams in the techniques to break out of technical debt, overcome infrastructure inertia, embrace a serverless mindset, and start showing results they never knew they could achieve.

Building a Single-Page App With Stackery & React
Jun Fritz | October 30, 2018

After completing this tutorial, you’ll have a serverless SPA built using Stackery and React. Stackery will be used to configure, deploy, and host our application, which will be built using the React library.

The newest tutorial on our documentation site guides you through the process of building a Serverless Single-Page App using Stackery and React.

You’ll be using Stackery to set up the cloud resources needed to deploy, host, and distribute your single-page application. You’ll configure a Lambda function, an S3 Bucket, and a CloudFront CDN in this tutorial with the goal of keeping this application within AWS Free Tier limits.

By the end of this tutorial, you’ll have a fully scalable backend and an organized React front-end to add to as you grow your application. Watch part one of the tutorial below to see what we’re building, or follow along with the plain-text version here.

Stay tuned for more serverless tutorials from Stackery!

Get the Serverless Development Toolkit for Teams

Sign up now for a 60-day free trial. Contact one of our product experts to get started building amazing serverless applications today.
