Stacks on Stacks

The Serverless Ecosystem Blog by Stackery.

Posts on Stackery

Your Development Environment is Missing

Nate Taggart | April 22, 2019


It’s hard to believe, but 10 years ago AWS had only five products. Chief among them, of course, was EC2. Although it feels a little quaint now, back then EC2 was an incredible offering. Anyone could fire up a server in seconds, install some code, and transform that generic server into any service one could imagine.

And for most of the history of the internet, this has been the pattern. Take a server, write or download code to turn it into a service, and ship it. We’ve periodically introduced new abstractions with virtualization (and, if we’re being accurate, EC2 is itself a virtual abstraction), but whether we called it a “server” or a “VM” or a “container” it was still pretty much the same: a generic building block for our service to run upon.

The big drawback here is that lots of services are pretty similar. Most code isn’t really blazing a new engineering trail, we’re just assembling known solutions into newly composed products. This isn’t to say that we don’t add our own unique value to this process, but rather that our custom business logic is a relatively small percentage of the overall code that we have to ship to deliver something meaningful.

If you think about that for a minute, you’ll realize that’s actually pretty bizarre. Imagine if a pastry chef first had to farm wheat before they could bake a cake. Sure there are certain circumstances when controlling the fundamental building blocks is very important, but in general wheat is a commodity which is fairly interchangeable. The same is true of technology. There are times when you’ll need to build a very bespoke solution, and in those times you’ll create a lot of value, but most of the time the value isn’t derived from creating the building blocks; it comes from assembling them together. The master chef doesn’t churn the butter, but they know which butter is best and how much is right. We should think the same way.

If you look at AWS’s product catalog today, you’ll notice that they have a huge variety of pre-built, use-case-driven products that you can pull off the shelf, configure with code, pay-per-use, and use immediately. The fundamental building blocks are services, not servers. (And yes, the services run on servers. Understood.)

The bottom line is: the model has changed and we’re not really talking about it. And we should, because it has some pretty big ramifications. The biggest of which is the development environment.

Cloudside Development

If you’ve been writing software for a while you probably don’t even think about this very much, but your laptop is a server. We kind of take that for granted. We’ve been building on servers for servers for so long that it’s become subconscious. You can configure your laptop to mirror a server so closely that you can build transportable applications that move directly from your local development environment out to a production server with relatively little risk. But, and this is a big but, your laptop is not a cloud provider.

It’s essentially impossible to replicate every AWS service locally. There are some attempts here and there to mock or fake services locally, but AWS releases products at a lightning pace and it’s ultimately a losing battle. When services become the building blocks, local laptop development stops being a helpful approximation of production. And this breaks pretty much everything we take for granted in a server-centric world.

The right solution here is not to try to mimic AWS on your laptop, but rather to embrace the fact that building against cloud services is fundamentally done in the cloud. I worry that people will hear “cloudside development” and think I’m advocating for some kind of SaaS IDE where you’re stuck writing code in your browser and deathly afraid of hitting the ‘back’ button. That’s not it. Write your code wherever you like! But you have to write it against real cloud services.

The ideal cloudside development setup lets your local code interact directly with real cloud services in a live environment. It requires that developers have access to sandboxed developer environments in their cloud accounts. It necessitates sophisticated permissions schemes, parameterization of the configuration of these cloud services, and a relatively short cycle between editing code and knowing if it works. If it takes several minutes to deploy between each iteration, it’s simply too slow.

Server-centric vs Service-centric

Amazon Web Services has passed $30 billion in annual revenue and is growing at 40%. Every one of their fastest-growing products is a pre-built service. You can’t argue with the numbers. It’s official: we’ve transitioned from a fundamentally server-centric model to a modern service-centric model.

Still trying to build applications locally before deploying? Then you’re fundamentally missing a real development environment. Set up your cloudside development environment and prioritize more efficient workflows with Stackery today.

5 Common Misconceptions About Serverless in 2019

Gracie Gregory | April 18, 2019


At Stackery, our engineers live and breathe serverless development every day. Because of this, we are constantly evaluating the current soundbites about it; when a field is expanding this quickly, it’s not uncommon to hear a generous handful of misguided assumptions. So, despite the increasing influence of cloudside development, there are still a number of declarations published every week that seem to amplify some common and outdated misconceptions.

It’s important for us to say that these misconceptions are understandable and blameless. We don’t sit around theorizing that there’s a creepy Serverless Myth Machine spreading propaganda (although then our daily work would consist of plotting a supervillain overthrow, which would be epic). Instead, we recognize that the serverless community is still relatively “new” (stay tuned, because we’re about to challenge ourselves there). As such, it’s growing constantly, which can be difficult to keep pace with if you’re not using it daily. Essentially, myths are a predictable symptom of a new chapter in any field. But we’re here to challenge them… for the safety of the galaxy. (Sorry.)

1. “Serverless is a new frontier”

Serverless isn’t unprecedented. In fact, the road to serverless has been paved and ready to ride for decades, so it makes sense that we’ve landed here. We’ve been talking about the concepts of agile software development, microservices, and cloud infrastructure for years, and well before that, key concepts like virtualization set the stage.

Get a bird’s-eye view of these milestones and the overall journey to serverless in our infographic, which our whole team contributed to, and discover precisely how we’ve all been working toward the current era for, well, eras.

2. “Serverless=Functions”

In the AWS world, when we talk about functions, we’re talking about Lambdas. As outlined beautifully by Toby Fee in this recent Stackery blog, Lambdas are the dominant form of serverless functions: essentially lines of code that AWS will run for you in a virtualized environment, sans configuration. So isn’t this essentially serverless? Nope. Serverless takes it a step further. A serverless app is actually made up of a function (Lambda), the resource architecture that lets it behave like a production-grade app, and the secrets to authenticate with the database. By resource architecture, we’re referring to cloud services like databases, API gateways, authentication services, IoT, machine learning, and container tasks. Without all three of these components, a Lambda/function alone wouldn’t be able to communicate with the world outside of AWS, and what kind of serverless app would that be?! Trick question… it wouldn’t be one at all.

It’s really always been true that our web applications are more than just the application code: your web app couldn’t run without a configured server, a populated database, and maybe a caching service. In the world of serverless function code, that requirement is more explicit: your functions are just tiny pieces of logic inside a larger system. There are even viable serverless applications that don’t use functions at all for routing or handling! Therefore, it’s possible that the future of serverless won’t include functions at all.
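To make that concrete, here’s a hypothetical serverless.yml sketch (all names invented for illustration) in which the function is only one small piece of the declared app; the HTTP route and the database it depends on are first-class parts of the same definition:

```yaml
# Hypothetical sketch: a "serverless app" is more than its function.
# The function is one line item; the route and database that make it
# behave like a production-grade app are declared alongside it.
service: example-app           # invented name
provider:
  name: aws
  runtime: nodejs8.10
functions:
  listItems:                   # the Lambda: just a tiny piece of logic
    handler: handler.listItems
    events:
      - http:                  # the API Gateway route it answers on
          path: /items
          method: GET
    environment:
      TABLE_NAME: { Ref: ItemsTable }
resources:
  Resources:
    ItemsTable:                # the database the function reads from
      Type: AWS::DynamoDB::Table
      Properties:
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - { AttributeName: id, AttributeType: S }
        KeySchema:
          - { AttributeName: id, KeyType: HASH }
```

Delete the `functions` block and you still have infrastructure; delete everything but the function and you have code that can’t talk to anything.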

And if you’d like to see how users are really building applications beyond simply using Lambdas, be sure to take a look at Chris Munns’ Serverless is Dead presentation (…around slide 52 for this specific topic).

Hungry for even more info on why we need to consider a serverless app holistically? Catch this recent article for The New Stack, again by Stackery’s Community Developer, Toby Fee:

https://thenewstack.io/the-12-factor-app-4-serverless-is-more-than-functions/

3. “Serverless is a security nightmare!”

This concern is prudent: you should always weigh security carefully when adopting any new toolset. But it’s also something of a logical fallacy. Serverless isn’t more secure than traditional computing, but it certainly isn’t less secure either; it’s a different model entirely. That probably isn’t enough to assuage your concerns, though. Some teams hear the word “serverless” and immediately get sweaty, thinking of an enormous total attack surface due to the fact that REST API functions still run on a server and utilize layers upon layers of code to parse API requests. These teams think that, since serverless functions (Lambdas) are able to accept events from dozens of different sources within AWS, they would be “extra vulnerable” using serverless. Right? Not so much.

Instead, you should rely on trusted outside tools, like Twistlock. Used in tandem with Stackery, Twistlock allows developers to increase velocity, observability, policy enforcement, secrets management, and more. As serverless has expanded rapidly, it’s smart to keep application security at the forefront of your team’s mind, but know that your options for serverless security have evolved at a similar rate. Think of serverless as a new landscape. Would you arrive hunting for the same scary intersection you had in your hometown (i.e. worrying about needing to patch servers)? No. Instead, arrive equipped with insurance for this landscape’s different intersections, and admire the scenery.

For more on how Stackery and Twistlock keeps serverless security in check, take a look at this brief: https://www.stackery.io/pdf/twistlock-stackery_SB.pdf

4. “Serverless is super cheap”

Ever since serverless boarded the Hype-Cycle Express™, one area of perennial debate has been the cost of cloudside development. The good? We get to tell idle-server payments to scram: you’ll no longer have to funnel money into servers when nobody is requesting data from them. For software teams eager to explore their serverless options, it’s tempting to use this as a selling point for the powers that be. But that’s a misguided approach because it’s not the full picture, and nobody wants surprise fees. Serverless might not mean running your own servers, but it sure does mean managing and paying for services as you use them. For instance, if you store data on AWS, data transfer fees apply when your app initiates external transfers. There are a number of services involved when you decide to go serverless, and it’s crucial to be transparent and frank about this when you’re evaluating your serverless options.

So no, serverless isn’t some kind of development loophole that inherently saves money, but it does equip you with the power to choose what you pay for.
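To make “you get to choose what you pay for” concrete, here’s a back-of-the-envelope sketch of the pay-per-use math. The rates below are illustrative placeholders, not current AWS pricing:

```javascript
// Back-of-the-envelope serverless cost sketch.
// NOTE: the rates are illustrative placeholders, not current AWS pricing.
function estimateMonthlyLambdaCost({ invocations, avgDurationMs, memoryMb }) {
  const PRICE_PER_MILLION_REQUESTS = 0.20;   // assumed example rate
  const PRICE_PER_GB_SECOND = 0.0000166667;  // assumed example rate

  const requestCost = (invocations / 1e6) * PRICE_PER_MILLION_REQUESTS;
  // Compute is billed by memory-time: GB allocated × seconds of execution
  const gbSeconds = invocations * (avgDurationMs / 1000) * (memoryMb / 1024);
  const computeCost = gbSeconds * PRICE_PER_GB_SECOND;

  return requestCost + computeCost;
}

// A million 100 ms invocations at 128 MB comes to well under a dollar at these rates
console.log(estimateMonthlyLambdaCost({ invocations: 1e6, avgDurationMs: 100, memoryMb: 128 }));
```

The point isn’t the exact numbers; it’s that every dollar maps to actual requests and compute time rather than idle capacity.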

Take a look at this cool serverless cost calculator from Peter Sbarski and the A Cloud Guru team to get a look under the hood of what your serverless strategy is costing you.

From our friend Corey Quinn via Twitter

5. “Serverless is super expensive”

On the flip side, many teams looking into adopting serverless get caught up in the fear of paying for a million services (micro sometimes doesn’t feel so micro) and don’t focus enough on the positive impact that not paying for idle servers will have on their bottom line. For one thing, as your business grows, you won’t be paying for new equipment: a serverless app can handle the same amount of traffic as it would in a parallel universe full of tricked-out servers. Also expensive? Not getting your apps to market on schedule. With serverless, the cost of a server maintenance team is likewise eliminated, and the rule is: get your app to market, then optimize it. Both of these save a significant amount. James Beswick contributed a lot of valuable information on this topic in one of the recent Stackery Wednesday livestreams. Replay it on-demand.

Regarding the myriad (potentially expensive) services you can take advantage of in serverless development, there is a solution: CloudWatch. This AWS service gathers data in the form of logs, metrics, and events, allowing for a comprehensive view under the hood of your AWS resources, applications, and services. Stackery’s integration with CloudWatch allows all changes to be saved as Git commits, so you’ll get a panoramic view into every application’s history and underlying infrastructure. And yes, you’ll only pay for what you use with CloudWatch as well.

The takeaway should be that with serverless, you get fine-grained control over what you spend, only paying for services when you actually need them.

As for the cost of Stackery, we’ve recently introduced a free tier for developers and hobbyists that removes the barrier for entry. Our CEO Nate Taggart dives into the particulars in this blog.


What myths have you heard about serverless? What challenges surrounding cloudside development or Stackery have got you stumped? We clearly like to stay on top of such things so let us know on Twitter by linking to this article.



Our weekly livestreams are also a great place to discover what misconceptions you might have— and challenge our engineers with your toughest questions. Visit our livestream homepage to register for the next session!
Building Slack Bots for Fun: A Serverless Release Gong

Anna Spysz | November 16, 2018


We have a running joke at Stackery regarding our tiny little gong that’s used to mark the occasion when we get a new customer.

sad tiny gong

So tiny.

And while I’m all about the sales team celebrating their successes (albeit with a far-too-small gong), I felt like the dev team needed its own way to commemorate major product releases and iterations.

Then I saw that Serverless Framework is doing its No Server November challenge, and I thought, what a perfect way to show off our multiple framework support while iterating on our Github Webhooks Tutorial to support Serverless Framework projects!

Starting from Scratch…Almost

Stackery makes it easy to import an existing stack or create a new stack based on an existing template. And, conveniently, I had already built a GitHub webhook listener just the week before as part of the webhook tutorial. However, the rules of the competition specifically state that “to qualify, the entry must use the Serverless Framework and a serverless backend” - and I was curious to see the differences when building out my app using that framework as compared to our default (AWS SAM).

So the first thing I did was create an empty Serverless Framework template I could use to build my app on. This was quite simple - I just created a serverless.yml file in a new directory and added the following:

service: serverless-gong

frameworkVersion: ">=1.4.0 <2.0.0"

provider:
  name: aws
  runtime: nodejs8.10

I initialized a new git repository, and added, committed and pushed the serverless.yml file to it.

Building in Stackery

Now it was time to import my new Serverless Framework boilerplate into Stackery so I could start adding resources. In the Stackery App, I navigated to my Stacks page, and clicked the Create New Stack button in the upper right, filling it out like so:

screenshot

Then, in the Stackery Dashboard, I created an API Gateway resource with a POST route with a /webhook path and a Function resource named handleGong, and connected them with a wire. All of this, including saving and using environment variables for your GitHub secret, is documented in the webhook tutorial, so I won’t go through it again. In the end, I had a setup very similar to that found at the end of that tutorial, with the exception of having a serverless.yml file rather than a template.yml for the configuration, and having everything in one directory (which was fine for a small project like this, but not ideal in the long run).

With the added resources, my serverless configuration now looked like this:

service: serverless-gong
frameworkVersion: '>=1.4.0 <2.0.0'
provider:
  name: aws
  runtime: nodejs8.10
functions:
  handleGong:
    handler: handler.gongHandler
    description:
      Fn::Sub:
        - 'Stackery Stack #{StackeryStackTagName} Environment #{StackeryEnvironmentTagName} Function #{ResourceName}'
        - ResourceName: handleGong
    events:
      - http:
          path: /webhook
          method: POST
    environment:
      GITHUB_WEBHOOK_SECRET:
        Ref: StackeryEnvConfiggithubSecretAsString
      SLACK_WEBHOOK_URL:
        Ref: StackeryEnvConfigslackWebhookURLAsString
resources:
  Parameters:
    StackeryStackTagName:
      Type: String
      Description: Stack Name (injected by Stackery at deployment time)
      Default: serverless-gong
    StackeryEnvironmentTagName:
      Type: String
      Description: Environment Name (injected by Stackery at deployment time)
      Default: dev
    StackeryEnvConfiggithubSecretAsString:
      Type: AWS::SSM::Parameter::Value<String>
      Default: /Stackery/Environments/<StackeryEnvId>/Config/githubSecret
    StackeryEnvConfigslackWebhookURLAsString:
      Type: AWS::SSM::Parameter::Value<String>
      Default: /Stackery/Environments/<StackeryEnvId>/Config/slackWebhookURL
  Metadata:
    StackeryEnvConfigParameters:
      StackeryEnvConfiggithubSecretAsString: githubSecret
      StackeryEnvConfigslackWebhookURLAsString: slackWebhookURL
plugins:
  - serverless-cf-vars

Look at all that yaml I didn't write!

And my Dashboard looked like so:

screenshot

Since I had already written a webhook starter function that at the moment logged to the console, it didn’t feel necessary to reinvent the wheel, so I committed in Stackery, then git pulled my code to see the updates, and created a handler.js file in the same directory as the serverless.yml. In it, I pasted the code from my previous webhook function - this was going to be my starting point:

const crypto = require('crypto');
function signRequestBody(key, body) {
  return `sha1=${crypto.createHmac('sha1', key).update(body, 'utf-8').digest('hex')}`;
}
// The webhook handler function
exports.gongHandler = async event => {
  // get the GitHub secret from the environment variables
  const token = process.env.GITHUB_WEBHOOK_SECRET;
  const calculatedSig = signRequestBody(token, event.body);
  let errMsg;
  // get the remaining variables from the GitHub event
  const headers = event.headers;
  const sig = headers['X-Hub-Signature'];
  const githubEvent = headers['X-GitHub-Event'];
  const body = JSON.parse(event.body);
  // this determines username for a push event, but lists the repo owner for other events
  const username = body.pusher ? body.pusher.name : body.repository.owner.login;
  const message = body.pusher ? `${username} pushed this awesomeness/atrocity through (delete as necessary)` : `The repo owner is ${username}.`
  // get repo variables
  const { repository } = body;
  const repo = repository.full_name;
  const url = repository.url;

  // check that a GitHub webhook secret variable exists, if not, return an error
  if (typeof token !== 'string') {
    errMsg = 'Must provide a \'GITHUB_WEBHOOK_SECRET\' env variable';
    return {
      statusCode: 401,
      headers: { 'Content-Type': 'text/plain' },
      body: errMsg,
    };
  }
  // check validity of GitHub token
  if (sig !== calculatedSig) {
    errMsg = 'X-Hub-Signature incorrect. Github webhook token doesn\'t match';
    return {
      statusCode: 401,
      headers: { 'Content-Type': 'text/plain' },
      body: errMsg,
    };
  }

  // print some messages to the CloudWatch console
  console.log('---------------------------------');
  console.log(`\nGithub-Event: "${githubEvent}" on this repo: "${repo}" at the url: ${url}.\n ${message}`);
  console.log('Contents of event.body below:');
  console.log(event.body);
  console.log('---------------------------------');

  // return a 200 response if the GitHub tokens match
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      input: event,
    }),
  };

  return response;
};

At this point, I prepared and did the initial deploy of my stack in order to get the Rest API endpoint for the GitHub webhook I needed to set up. Again, the webhook tutorial runs through the deployment and webhook setup process step by step, so I won’t repeat it here.

Using the REST API /webhook URL, I created a webhook in our Stackery CLI repo, which then began sending events to my endpoint, and I confirmed in my CloudWatch logs that it was indeed working.

Bring on the Gong

The next step was to modify the function so it “gonged” our Slack channel when our Stackery CLI repo was updated with a new release. To do that, I had to create a custom Slack app for our channel and set up its incoming webhooks. Luckily, Slack makes that really easy to do, and I just followed the step-by-step instructions in Slack’s webhook API guide to get going.

I set up a #gong-test channel in our Slack for testing so as to not annoy my co-workers with incessant gonging, and copied the URL Slack provided (it should look something like https://hooks.slack.com/services/T00000000/B00000000/12345abcde).

Before editing the Lambda function itself, I needed a way for it to reference that URL as well as my GitHub secret without hard-coding it in my function that would then be committed to my public repo (because that is a Very Bad Way to handle secrets). This is where Stackery Environments come in handy.

I saved my GitHub secret and Slack URL in my environment config like so:

screenshot

Then referenced it in my function:

screenshot

And will add it to my function code in the next step, using process.env.GITHUB_WEBHOOK_SECRET and process.env.SLACK_WEBHOOK_URL as the variables.

Final Ingredient

Since we’re automating our gong, what’s more appropriate than an automated gong? After a somewhat frustrating YouTube search, I found this specimen:

An auto-gong for our automated app? Perfect! Now let’s use our function to send that gong to our Slack channel.

Here’s the code for the final gongHandler function in handler.js:

const crypto = require('crypto');
const Slack = require('slack-node');

// validate your payload from GitHub
function signRequestBody(key, body) {
  return `sha1=${crypto.createHmac('sha1', key).update(body, 'utf-8').digest('hex')}`;
}
// webhook handler function
exports.gongHandler = async event => {
  // get the GitHub secret from the environment variables
  const token = process.env.GITHUB_WEBHOOK_SECRET;
  const calculatedSig = signRequestBody(token, event.body);
  let errMsg;
  // get the remaining variables from the GitHub event
  const headers = event.headers;
  const sig = headers['X-Hub-Signature'];
  const githubEvent = headers['X-GitHub-Event'];
  const body = JSON.parse(event.body);
  // get repo variables
  const { repository, release } = body;
  const repo = repository.full_name;
  const url = repository.url;
  // set variables for a release event
  let releaseVersion, releaseUrl, author = null;
  if (githubEvent === 'release') {
    releaseVersion = release.tag_name;
    releaseUrl = release.html_url;
    author = release.author.login;
  }

  // check that a GitHub webhook secret variable exists, if not, return an error
  if (typeof token !== 'string') {
    errMsg = 'Must provide a \'GITHUB_WEBHOOK_SECRET\' env variable';
    return {
      statusCode: 401,
      headers: { 'Content-Type': 'text/plain' },
      body: errMsg,
    };
  }
  // check validity of GitHub token
  if (sig !== calculatedSig) {
    errMsg = 'X-Hub-Signature incorrect. Github webhook token doesn\'t match';
    return {
      statusCode: 401,
      headers: { 'Content-Type': 'text/plain' },
      body: errMsg,
    };
  }

  // if the event is a 'release' event, gong the Slack channel!
  const webhookUri = process.env.SLACK_WEBHOOK_URL;

  const slack = new Slack();
  slack.setWebhook(webhookUri);

  // send slack message
  if (githubEvent === 'release') {
    slack.webhook({
      channel: "#gong-test", // your desired channel here
      username: "gongbot",
      icon_emoji: ":gong:", // because Slack is for emojis
      text: `It's time to celebrate! ${author} pushed release version ${releaseVersion}. See it here: ${releaseUrl}!\n:gong:  https://youtu.be/8nBOF5sJrSE?t=11` // your message
    }, function(err, response) {
      console.log(response);
      if (err) {
        console.log('Something went wrong');
        console.log(err);
      }
    });
  }

  // (optional) print some messages to the CloudWatch console (for testing)
  console.log('---------------------------------');
  console.log(`\nGithub-Event: "${githubEvent}" on this repo: "${repo}" at the url: ${url}.`);
  console.log(event.body);
  console.log('---------------------------------');

  // return a 200 response if the GitHub tokens match
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      input: event,
    }),
  };

  return response;
};

Finally, I needed to add a package.json file so that I could use dependencies. When creating a function using an AWS SAM template, Stackery would do this for you automatically, but in this case I had to create the file and add the following myself:

{
  "private": true,
  "dependencies": {
    "aws-sdk": "~2",
    "slack-node": "0.1.8"
  }
}

I added, committed and pushed the new code, re-deployed my Serverless Framework stack, then added another GitHub webhook to a test repo. I created a GitHub release in my test repo, and waited in anticipation.

Milliseconds later, I heard the familiar click-click-click of Slack…

screenshot

Pretty awesome, if I do say so myself. 🔔

A few notes:

  • I used the slack-node NPM package to make life easier. I could have used the requests library or the built-in HTTPS library (and you can if you want to avoid using external dependencies).
  • The GitHub API is very helpful for figuring out what kind of response to expect from your webhook. That’s how I determined the values to set for releaseVersion, releaseUrl, and author.
  • When you console.log() in your serverless function, the results can be seen in the AWS CloudWatch logs. Stackery provides a convenient direct link for each function.

screenshot

  • This serverless application should fit within your AWS free tier, but keep an eye on your logs and billing just in case.

If you’d like to make your own serverless gong, all of the configuration code is available in my Serverless Gong GitHub repository. Just create a new stack in your Stackery account (you can sign up for a free trial if you don’t have one yet), choose Create New Repo as the Repo Source, and select Specify Remote Source to paste in the link to my repo as a template.

Add your GitHub and Slack environment parameters, deploy your stack, and sit back and wait for your Slack to gong!

AWS ReInvent: Serverless, Stackery, and Corey Quinn of LastWeekInAWS

Abner Germanow | November 14, 2018


Welcome savvy builder. If you’ve made it to our corner of the Internet and headed to re:invent, you are in the right place.

We want you to leave Las Vegas with the savvy to choose how and when to apply the growing menu of serverless capabilities to your initiatives. To help you, we’re sending our serverless-first engineers to Las Vegas with three goals.

  1. Share experiences building AWS serverless apps
  2. Show app builders how Stackery + AWS serverless offerings accelerate velocity and confidence
  3. Connect the AWS serverless community

Sharing Our Serverless Experience

As we build our serverless-first service for teams, we examine the developer and operations experience to make the experience faster and safer for our customers. We’ve learned a few things along the way about what makes serverless awesome, when to insert container services into the mix, and how workflows differ from services we’ve built in the past.

At our booth we’ll be holding demonstrations walking through what we’ve learned and where we find developers and teams working differently. Keep an eye on Twitter for exact timing.

Booth Talks Include:

  • ICYMI: Corey Quinn From Last Week In AWS (Thurs @ 2:15)
  • PSA: Permission Scoping Accounts, Services, and Humans
  • Namespacing for fun and dev/test/prod environments
  • A look at the new AWS [REDACTED]
  • Where and when we use containers in our serverless-first apps
  • Using existing resources with serverless applications
  • How to build state machines around serverless apps
  • Instrumentation and monitoring serverless apps with Stackery and Epsagon
  • Testing serverless apps
  • Secrets Manager vs Parameter Store
  • Lambda@Edge vs Lambda: What You Should Know
  • Systems Manager Parameter Store

Show off Stackery’s Serverless Acceleration Software

If you are new to Stackery or an old pro, a lot has changed just in the last month!

“We don’t need a whiteboard, I’ll mock it up in Stackery.” -Customer using Stackery to break a monolith into microservices.

We’ve made it even easier to visually mock up the architectural intent of your app with a new editor that toggles between template and visual architecture views, which you can take straight to packaging and deployment. There’s also GraphQL support, the ability to import projects in SAM or Serverless.com frameworks, Lambda@Edge, and much more.

Drop by to see the latest, or better yet, sign up for a slot and we’ll make sure our engineers are dedicated to you.

Corey Quinn of Last Week In AWS and Connecting the Community

AWS moves fast. Almost as fast as serverless-first teams. On Thursday at 2:15, Corey Quinn of the Last Week In AWS Newsletter will be at our booth for an exclusive ICYMI to review announcements you probably missed. You can get his snark-a-take (I just made that up) on the keynotes, serverless, and more.

Our invite only serverless insiders party is designed to connect the pioneers with those who are ramping up in 2019. If you are interested in an invite drop us a note.

Finally, like all serverless teams, we abhor repeating ourselves, so for a guide to serverless sessions, check out these guides:

See you in Vegas!

How to find us at ReInvent: Booth: #2032 - We’re about 40 feet from the dev lounge in the Sands/Venetian Hall B. Contact our team: reinvent@stackery.io

The Case for Minimalist Infrastructure

Garrett Gillas | November 13, 2018


If your company could grow its engineering organization by 40% without increasing costs, would they do it? If your DevOps team could ship more code and features with fewer people, would they want to? Hopefully, the answer to both of these questions is ‘yes’. At Stackery, we believe in helping people create the most minimal application infrastructure possible.

Let me give you some personal context. Last year, I was in charge of building a web application integrated with a CMS that required seven virtual machines, three MongoDBs, a MySQL database and CDN caching for production. In addition, we had staging and dev environments with similar quantities of infrastructure. Over the course of 2 weeks, we were able to work with our IT-Ops team to get our environments up and running and start building the application relatively painlessly.

After we got our application running, something happened. Our IT-Ops team went through their system hardening procedure. For those outside the cybersecurity industry, system hardening can be defined as “securing a system by reducing its surface of vulnerability”. This often includes things like changing default passwords, removing unnecessary software, unnecessary logins, and the disabling or removal of unnecessary services. This sounds fairly straightforward, but it isn’t.

In our case, it involved checking our system against a set of rules like this one for Windows VMs and this one for Linux. Because we cared about security, this included closing every single port on every single application that was not in use. As the project lead, I discovered three things by the end.

  • We had spent many more person-hours on security and ops than on development.
  • Because there were no major missteps, this was nobody’s fault.
  • This should never happen.

Every engineering manager should have a ratio in their head of work hours spent in their organization on software engineering vs other related tasks (ops, QA, product management, etc…). The idea is that organizations that spend the majority of their time actually shipping code will perform better than groups that spend a larger percentage of their time on operations. At this point, I was convinced that there had to be a better way.

Serverless Computing

There have been many attempts since the exodus to the cloud to make infrastructure easier to manage in a way that requires fewer person-hours. We moved from bare-metal hardware to datacenter VMs, then to VMs in the cloud, and later to containers.

In November 2014, Amazon Web Services announced AWS Lambda. The purpose of Lambda was to simplify building on-demand applications that are responsive to events and data. At Stackery, we saw a big opportunity to help software teams spend less time on infrastructure and more time building software. We have made it our mission to make it easier for software engineers to build highly scalable applications on the most minimal, modern cloud infrastructure available.

Stackery Achieves AWS Advanced Technology Partner Status
Garrett Gillas

Garrett Gillas | July 26, 2018

Stackery Achieves AWS Advanced Technology Partner Status

Stackery is excited to announce the company has achieved Advanced Tier Technology Partner status in the Amazon Web Services (AWS) Partner Network (APN). Stackery has achieved this recognition by meeting strict quality and accreditation standards for Amazon Web Services partners.

What This Means for Stackery

Stackery has always been dedicated to providing the best platform for building serverless applications that utilize some of the most advanced capabilities that AWS has to offer. Stackery underwent a rigorous architectural review by AWS and was found to meet all of the technical requirements for APN Advanced Technology Partners. This new partner status validates our commitment to building a great product that meets all of the standards and best practices that the world’s leading cloud provider requires.

What This Means for Stackery Customers

As a part of the approval process, Stackery had to meet Amazon’s strict requirements for security, site reliability, disaster recovery and customer data protection. This means that Stackery customers have the assurance that their serverless architecture is protected and maintained according to AWS standards.

“As we move forward with improving and expanding our product, it is important that we stay committed to giving our customers the best data protection and reliability possible,” said Nate Taggart, CEO of Stackery. “We are excited about our new relationship with Amazon and look forward to bringing more tools to our customers that will help their businesses grow in the cloud.”

Stackery 2018 Product Updates
Sam Goldstein

Sam Goldstein | May 16, 2018

Stackery 2018 Product Updates

Our product engineering team ships every single day.

That means Stackery’s product gets better every single day. Stackery engineers commit code to git, where it marches into our continuous delivery pipeline. We promote each version of our microservices, frontend, and CLI through multiple testing environments, rolling shiny new features into production or notifying the team of failures. This is the best way we know to develop modern software, and it explains why our team is able to ship so much functionality so rapidly.

However, because we’re constantly shipping, it means we need to pause periodically to take note of new features and improvements. In this post I’ll summarize some of the most significant features and changes from our product team over the past few months. For a more detailed list of changes, you can read and/or follow Stackery’s Release Notes.

Referenced Resources

One of the best things about microservice architecture is the degree to which you can encapsulate and reuse functionality. For example, if you need to check whether a user is authorized to perform a certain action, there’s no need to scatter permissioning code throughout your services. Put it all in one place (an AuthorizationService, perhaps), and call out to that in each service that needs to check permissions.
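To make the idea concrete, here’s a minimal sketch of what a centralized permission check could look like. The rules table and function names below are hypothetical examples, not anything Stackery provides:

```python
# Minimal sketch of a centralized authorization check.
# The roles, resources, and actions here are illustrative only.

PERMISSIONS = {
    ("editor", "post"): {"create", "edit"},
    ("admin", "post"): {"create", "edit", "delete"},
}

def is_authorized(role, resource, action):
    """The single place where permission logic lives; every service calls this."""
    return action in PERMISSIONS.get((role, resource), set())
```

In a real deployment this logic would live behind its own Lambda function (the AuthorizationService), and other services would invoke it rather than duplicating the table.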

Stackery’s Referenced Resource nodes let you reference existing infrastructure resources (be they Lambda functions, S3 buckets, VPCs, you name it) by their AWS ARN and seamlessly integrate them into your other services.

One of the best uses I’ve seen for Referenced Resources is as the mechanism to implement centralized error reporting for serverless architectures. Write one central Lambda function that forwards exceptions into your primary error reporting and alerting tool. Configure every other stack to send error events to this central handler. Voila! Complete visibility into all serverless application errors.
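A central error handler can be a very small function. Here’s one possible sketch, with SNS standing in for your alerting tool; the event field names and the topic ARN are placeholders, not a prescribed schema:

```python
import json

def format_error_record(event):
    """Flatten an incoming error event into a single alert payload.
    The field names ('service', 'message', 'stack') are hypothetical --
    match them to whatever your stacks actually emit."""
    return {
        "service": event.get("service", "unknown"),
        "summary": event.get("message", "")[:200],
        "stack_trace": event.get("stack", ""),
    }

def handler(event, context):
    # Lazy import keeps the pure helper above testable without AWS credentials.
    import boto3
    record = format_error_record(event)
    # Forward into your real alerting tool; SNS is shown as one option.
    boto3.client("sns").publish(
        TopicArn="arn:aws:sns:us-west-2:123456789012:error-alerts",  # placeholder ARN
        Message=json.dumps(record),
    )
```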

Support for Multiple AWS Accounts

Every company we work with uses multiple AWS accounts. Sometimes there’s one for production and one for everything else. In Stackery’s engineering team each engineer has multiple accounts for development and testing, as well as shared access to accounts for integration testing, staging, and production. Splitting your infrastructure across multiple accounts has major benefits. You can isolate permissions and account-wide limits, minimizing risk to critical accounts (e.g. production).

However, managing deployment of serverless architectures across multiple accounts is often a major PITA. This is why working across multiple accounts is now treated as a first-class concern across all of Stackery’s functionality. Multiple AWS accounts can be registered within a Stackery account using our CLI tool. Each Stackery environment is tied to an AWS account, which maps flexibly onto the vast majority of AWS account usage patterns.

Managing multiple AWS accounts is a key part of most organizations’ cloud security strategy. Stackery supports this by relying on your existing AWS IAM policies and roles when executing changes. If the individual executing the change doesn’t have permission in that AWS account, the action will fail. This makes it straightforward to set up workflows where engineers have full control to make changes in development and testing environments, but can only propose changes in the production account, which are then reviewed and executed by an authorized individual or automation tool.
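Under the hood, this pattern typically rests on mapping each environment to a role in the right account and assuming it at deploy time. Here’s a hedged sketch of that mapping; the account IDs and role names are placeholders, and the assume-role call uses the standard AWS STS API rather than anything Stackery-specific:

```python
# Sketch of an environment -> account/role mapping for multi-account
# deploys. All ARNs below are placeholders.

ROLE_ARNS = {
    "development": "arn:aws:iam::111111111111:role/deployer",
    "staging":     "arn:aws:iam::222222222222:role/deployer",
    "production":  "arn:aws:iam::333333333333:role/deploy-proposer",  # propose-only
}

def role_for_environment(env):
    """Fail fast if an environment has no registered AWS account/role."""
    try:
        return ROLE_ARNS[env]
    except KeyError:
        raise ValueError(f"no AWS account registered for environment {env!r}")

def assume_deploy_role(env):
    import boto3  # lazy import: the pure mapping above stays testable offline
    return boto3.client("sts").assume_role(
        RoleArn=role_for_environment(env),
        RoleSessionName=f"deploy-{env}",
    )
```

If the caller’s IAM principal isn’t allowed to assume the production role, the `assume_role` call fails, which is exactly the review-gated workflow described above.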

You can read more in our knowledge base article about working with multiple AWS accounts in Stackery.

CloudFormation Resource Nodes

Sometimes you need to do something a little different, which is why we built custom CloudFormation Resource nodes. You can use these to provision any AWS resource and take advantage of the full power and flexibility of CloudFormation, for situations when that’s required or desirable.

What’s been coolest about rolling this feature out is the variety of creative uses we’ve seen it put to. For example, you can use CloudFormation Resource nodes to automatically configure and seed a database the first time you deploy to a new environment. You can also use them to automatically deploy an HTML front end to CloudFront each time you deploy your backend serverless app. The possibilities are endless.
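The seed-on-first-deploy trick works because CloudFormation invokes a custom resource’s Lambda with a `RequestType` of `Create`, `Update`, or `Delete`; seeding only on first deploy means acting only on `Create`. A minimal sketch (the seeding function is a hypothetical placeholder):

```python
def should_seed(event):
    """A custom resource should seed data only when the stack is first
    created, not on updates or deletes."""
    return event.get("RequestType") == "Create"

def handler(event, context):
    if should_seed(event):
        seed_database()  # hypothetical: insert your fixture rows here
    # A real custom resource must also signal success or failure back to
    # CloudFormation (e.g. via the cfnresponse helper); omitted here.

def seed_database():
    pass  # placeholder for real seeding logic
```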

AWS Resource Tagging

Resource Tagging may not be the most glamorous of features, but it’s a critical part of most organizations’ strategies for tracking cost, compliance, and ownership across their infrastructure. Stackery now boasts first-class support for tagging provisioned resources. We also provide the ability to require specific tags prior to deployment, making it orders of magnitude easier to get everyone on the same page about how to correctly tag resources.
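Requiring tags before deployment reduces to a simple set-difference check. Here’s a sketch; the required tag keys are just example policy, not a Stackery default:

```python
def missing_tags(resource_tags, required):
    """Return the required tag keys absent from a resource's tags."""
    return sorted(set(required) - set(resource_tags))

# Example policy: every resource must carry cost and ownership tags.
REQUIRED_TAGS = ["cost-center", "team", "environment"]
```

A deployment gate would call `missing_tags` per resource and block the deploy if the result is non-empty.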

Always Shipping

Our goal is to always be shipping. We aim to push out valuable changes every day. Customers gain more control and visibility over their serverless applications each day, so they can ship faster and more frequently too. Look out for more great changes rolling out each day in the product, and watch this blog for regular announcements summarizing our progress. We also love to hear what you think, so if you have wants or needs managing your serverless infrastructure, don’t hesitate to let us know.

Building a Reddit Bot with Stackery
Stephanie Baum

Stephanie Baum | January 18, 2018

Building a Reddit Bot with Stackery

I’ve always wanted to build a Reddit bot, but I didn’t want to go through the hassle of setting up cloud-based hosting for it to run on. One of the most powerful aspects of serverless architectures is how simple it is to implement a task pipeline. In this case, I created a fully live Reddit bot in about an hour that scrapes the top posts from /r/cooking and emails them to me. It’s easy to see how these atomic tasks can be chained together to create powerful applications. For example, with a bit more work, instead of an AWS SNS topic we could feed the Reddit posts into an AWS Kinesis Stream, then attach consumer Lambda functions to the stream to perform content analytics. One can see how this applies to a CI/CD pipeline, and in fact we use similar processes in our own serverless continuous integration (CI) and continuous delivery (CD) pipeline. Read more about Stackery’s CI/CD here.

Overview of Components

  • “Timer” node to ping a function, triggering the Reddit bot to run once a day
  • “RedditBot” node, a Lambda function that, once triggered, authenticates with Reddit using the snoowrap library, scrapes the hot /r/cooking posts, and sends the good ones along via SNS
  • “HotCookingPosts” SNS topic node, an SNS topic that forwards all messages to my email address
Implementation Details

Create a Reddit account for your bot. Then navigate to https://www.reddit.com/prefs/apps/ and select Create App, making sure you select “script” in the radio buttons underneath the name. Note down the client ID and client secret; these will go into the function configuration along with the Reddit username and password for your bot account.

Configure a stack using the Stackery dashboard with 3 nodes:

Timer -> Function -> SNS Topic

Attached to the function are some configuration values that are necessary for reddit’s authentication mechanism. Stackery automatically includes certain information about a function based on what it’s attached to (in this case, the SNS topic). Read more about the output port data here. We can leverage this when specifying the topic node ARN for forwarding on the selected posts, implemented in this file.

Function Settings:

Configuration Environment Variables:

Fill in your saved client id, secret, Reddit bot username, and Reddit bot password and store under environment variable names in the function editor panel. For more information on how to create a deployment environment configuration, visit the Environment Configuration Docs. It’s important not to add these sensitive variables directly, as they will then be committed to github, and (depending on your repository settings) exposed to the public. When added via an environment configuration, these key value pairs are automatically encrypted and stored in an S3 bucket on your AWS account.
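Inside the function, those secrets arrive as ordinary environment variables. The sketch below shows one way to read and validate them (the variable names here are examples, not a required convention; the actual bot in the linked repo is JavaScript):

```python
import os

def load_reddit_credentials(env=os.environ):
    """Read bot credentials injected via the environment configuration.
    The variable names are illustrative -- use whatever names you defined
    in the function editor panel."""
    required = ["REDDIT_CLIENT_ID", "REDDIT_CLIENT_SECRET",
                "REDDIT_USERNAME", "REDDIT_PASSWORD"]
    missing = [name for name in required if name not in env]
    if missing:
        raise RuntimeError(f"missing environment variables: {missing}")
    return {name: env[name] for name in required}
```

Failing fast on missing variables turns a confusing mid-request authentication error into an obvious configuration error at cold start.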

The “bot” function receives a timer event, which triggers it to scrape /r/cooking. You can set the timer to trigger every minute to test functionality, then change it to a more sane interval.

The function looks through the hot submissions and any with > 50 comments get forwarded to the SNS topic. See the code for this here: https://github.com/stackery/stackery-reddit-bot
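The filtering rule itself is tiny. Here’s an illustrative Python version (the real bot in the repo is JavaScript using snoowrap, and the field names below are assumptions, not snoowrap’s API):

```python
def worth_sharing(submission, min_comments=50):
    """Keep only hot posts with more than `min_comments` comments."""
    return submission.get("num_comments", 0) > min_comments

def select_posts(submissions, min_comments=50):
    """Filter a batch of submissions down to the ones worth forwarding to SNS."""
    return [s for s in submissions if worth_sharing(s, min_comments)]
```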

You can also insert log statements in your own code to debug the Lambda function via CloudWatch Logs (you can get to the logs from the function’s metrics tab in the Stackery deployments view).

Currently, the code sends a JSON object directly to email. Set this up by navigating to the SNS service in your AWS account, opening the topic that Stackery automatically provisioned, and clicking the Create Subscription button, with the Protocol field set to Email and the endpoint set to your email address. For more on the capabilities of SNS, visit Amazon’s SNS Docs.

As you can see, it’s really straightforward to build a Reddit bot (and many other types of bots) using serverless resources and Stackery’s cloud management capabilities. Bots are functionally lightweight by nature, and fit easily into serverless architectures. With the composability of AWS Lambda, they can be orchestrated and chained together to perform a variety of tasks, ranging from emailing scraped posts off Reddit, to managing CI/CD.

Get the Serverless Development Toolkit for Teams now and get started for free. Contact one of our product experts to get started building amazing serverless applications today.
