Stacks on Stacks

The Serverless Ecosystem Blog by Stackery.

Hi! I’m Gracie Gregory, Stackery’s New Copywriter

Gracie Gregory | December 05, 2018

I’ve worked in various sectors of tech since graduating college in 2014 with a Russian literature degree and an appetite for something entirely new. After meeting with a handful of Portlanders in various sectors of business, I landed a PR and branding role at The Linux Foundation, where I stayed for years. At the risk of using a platitude, joining the open source community was like “drinking from the firehose” for someone used to reading novels all day.

Since then, my career has taken other unexpected turns but always within technology. Because I am primarily a writer, I’ve often lacked the hands-on experience that would make new concepts like cloud-native, Node.js, and yes, serverless, come naturally. While my right brain sometimes limits my ability to follow along in this particular realm without asking 10 million questions, I do believe an outsider’s perspective is an asset to a tech company’s communication strategy. Since I approach most technological concepts as an outsider, the content I produce is positioned for a more general audience. If you enjoy learning, technical writing from a non-technical background is really a dream job.

I applied to work at Stackery in Fall 2018 for that reason: Serverless is a fascinating new corner of computing and much of the landscape is still burgeoning. Working at Stackery would mean I’d be challenged every day and surrounded by pioneers in the field. I thought it would be a humbling opportunity and indeed it has been. Every day is a crash course in modern software development, tech history, and the variegated nature of startup life.

Throughout the interview process, everyone was kind enough to assure me that it was ok if I didn’t fully “get” serverless that day. They all told me that the space itself was relatively new and that, if I were hired, I’d have lots of resources to call upon. While I was grateful for the team’s reassurance, it didn’t quell my anxious desire to better understand serverless computing right that second. I had created an account with Stackery and played around in the demo, which really helped me frame things. But I still had fundamental questions. It was clear I had to lay some major groundwork to be a worthwhile candidate. I did, however, come up with a few serverless comparisons while I was researching the company. This made the concepts easier for me to digest before interviewing with the team.

“I wouldn’t risk throwing any of those out there,” my friend said on the eve of my final interview. “What if you’re way off-base? You’d look like an idiot.”

Since trying to avoid looking like an idiot is the soulful principle that guides my life’s path, I was planning to take this advice to heart. But when I actually met my interviewers, I quickly understood that this was an experimental culture that encouraged trying things before judging them. When I met with Stackery’s VP of Product and Engineering, Sam Goldstein, I actually felt empowered to test out a few of my serverless metaphors to see whether or not I was on track to understanding. I was pleasantly surprised that he said they were (at the most general level) apt.

If you’re an expert, do not take this too seriously. What I am about to say will, best case scenario, make me look like a newb. Worst case scenario, it will make me look like a n00b. For anyone non-technical who might have found our blog without a drop of serverless understanding, you have permission to use my Cliff’s Notes below. I hope this will clarify serverless computing and get you started with this amazing technology!

Serverless is Like Dropshipping

At the risk of defining a theoretically new concept with another theoretically new concept…dropshipping!

Dropshipping uses platforms like Shopify to allow hopeful online sellers to tackle only the parts of eCommerce they want. In most cases, this means curating and designing the layout of their store. They pick from a vast library of products that appeal to them/gel with their brand vision and get to outsource the headache of managing inventory to a warehouse. People have been doing this in eCommerce for a while, but new platforms make it accessible to more people or at least help them get it up and running faster. Serverless is similar in that engineers are able to focus on their application rather than infrastructure. Like dropshippers, serverless engineers don’t have to worry about their “inventory” (i.e., implementing and maintaining infrastructure). Both are something of a jackpot for those who want to focus more on the art and science of their work instead of the logistics or administration.

Serverless is Like WiFi

This comparison is for those who don’t understand what precisely the “less” in serverless means. Imagine you are an average American in 2003: right around when WiFi was solidified as the home internet solution. You want faster internet in your home and to access it easily and without complications. You’ve known about WiFi for a while and finally decide to hook your home up but can’t quite conceptualize how the wireless part works. Will you still need a router? Will you need to become a sysadmin to use it? We now know the answers to be a vehement yes and no, respectively. Yes, you still need a router, but it won’t take up space; you’ll basically never interact with it. It’s upstairs in a spare bedroom or hidden in your TV stand. Out of sight, but still enabling you to check your email and watch eBaum’s World videos (it’s 2003, after all). Serverless is the same. There is still a server, it’s simply elsewhere and not your problem as an engineer on a daily basis.

Serverless is Like Modern Car Insurance

Stay with me here. Let me say upfront that serverless is obviously more interesting than car insurance, but the latter is creating relevant shockwaves in the industry. Ever heard of pay-as-you-go car insurance? Essentially, the provider sends you a small device to install in your car. This allows them to track how much you drive, and you only pay for the miles you use. This differs from traditional insurance because a) it’s cheaper and b) it’s a more lightweight solution. What I mean by this is, it’s there when you need it and not your problem when you don’t. Serverless is similar. You never pay for idle time, yet the tools are reliable and available when in use. Both are also beneficial in inconsistent traffic scenarios (… you promised to humor me.)


What’s the point of publishing all of the above, besides indulgently breaking down how my brain works? Well, the undergraduate class of 2019 gets their diplomas in just six months, and I can guarantee you that serverless will have expanded even further by then. I believe it to be the future of software development, and writers are, of course, needed in this space. It doesn’t serve people like me to hear a term like “serverless” and write it off as a buzzword that’s above our pay grade; to do so would mean missing out on a fascinating subject to write about. So, if you work in marketing at any kind of company, I encourage you to start a dialogue with your engineering team. Learn from them and ask questions, no matter the beat you decide to cover.

It’s time for all of us to get involved in new technology as it develops. Serverless is a great place to start.


If you manage a software team and are interested in Stackery, set up a call with our serverless engineers today.

Lambda Layers & Runtime API: More Modular, Flexible Functions

Sam Goldstein | November 29, 2018

Lambda layers and the runtime API are two new features of AWS Lambda which open up fun possibilities for customizing the Lambda runtime and decrease duplication of code across Lambda functions. Layers lets you package up a set of files and include them in multiple functions. The runtime API gives you a way to hook into the Lambda service’s function lifecycle events, which lets you be much more flexible about what you run in your Lambda.

Layers is aimed at a common pain point teams hit as the number of Lambdas in their application grows. Today, we see customers performing gymnastics in order to compile binaries or package reusable libraries inside functions. One downside of this behavior is that it is difficult to ensure all functions have the latest version of the dependency, leading to inconsistencies across environments or over-complicated, error-prone packaging processes. For example, at Stackery we compile git and package it into some of our functions to enable integration with GitHub, GitLab, and CodeCommit. Prior to layers, upgrading that dependency meant each developer responsible for a function had to repackage those files in each related function. With layers, it’s much easier to standardize those technical and human dependencies, and the combination of layers and the runtime API enables a cleaner separation of concerns between business logic function code and cross-cutting runtime concerns. In fact, in Stackery, adding a layer to a function is just a dropdown box. That feels like a little thing, but it opens up several interesting use cases:

1. Bring Your Own Runtime

AWS Lambda provides 6 different language runtimes (Python, Node, Java, C#, Go, and Ruby). Along with layers comes the ability to customize specific files that are hooked into the Lambda runtime. This means you can (gasp) run any language you want in AWS Lambda. We’ve been aware that there is no serverless “lock in” for some time now, but with these new capabilities you are able to fully customize the Lambda runtime.

To implement your own runtime, you create a file called bootstrap in either a layer or directly in your function. It must have executable permissions (chmod +x).

Your bootstrap custom runtime implementation must perform these steps (a minimal sketch follows the list):

  1. Load the function handler using the Lambda handler configuration. This is passed to bootstrap through the _HANDLER environment variable.

  2. Request the next event over http: curl "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next"

  3. Invoke the function handler and capture the result

  4. Send the response to the Lambda service over http:

curl -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$INVOCATION_ID/response" -d "$RESPONSE"
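
Putting those steps together, here’s a minimal bootstrap sketch in shell. The “handler” is stubbed out as an echo; a real runtime would load and invoke whatever _HANDLER names:

#!/bin/sh
set -eu

# Process events in a loop; Lambda freezes the sandbox between invocations
while true
do
  HEADERS="$(mktemp)"
  # Step 2: request the next event; the invocation id arrives in a response header
  EVENT_DATA=$(curl -sS -LD "$HEADERS" "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next")
  INVOCATION_ID=$(grep -Fi Lambda-Runtime-Aws-Request-Id "$HEADERS" | tr -d '[:space:]' | cut -d: -f2)

  # Step 3: "invoke" the handler and capture the result (stubbed as an echo here)
  RESPONSE="Echoing request: $EVENT_DATA"

  # Step 4: send the response back to the Lambda service
  curl -sS -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$INVOCATION_ID/response" -d "$RESPONSE"
done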

It’s pretty much guaranteed there will be a bunch of new languages for you to deploy any minute through layers. At Stackery we’re debating whether a PHP or Haskell layer would be of greater benefit.

2. Shared Binaries and Libraries

Serverless apps often rely on reusable libraries and commands which the business logic code calls into. For example, our engineering team runs git inside some of our functions, which we package alongside our node.js function code. Scientific libraries, shell scripts, and compiled binaries are a few other common examples. While it’s nice to be able to package any files along with our code, when these dependencies are used across many functions, need to be compiled, or are updated frequently, you can end up with ever-increasing function build complexity and team distraction.

With layers you can extract these shared dependencies and register that package within the account. In Stackery’s function editor you’ll see a list of all the layers in your account and can apply them to that function. This simplifies the management and versioning of reusable libraries used by your functions.

The layers approach has the added benefit that it’s easier to keep dependencies in sync across all your functions and to upgrade these dependencies across your microservices. Layers provides a way to reduce duplication in your function code, and shared libraries in layers are only counted once against AWS storage limits regardless of how many functions use the layer. Layers can also be made public, so it’s likely we’ll see open source communities and companies publish Lambda layers to make it easier for developers to run software in Lambda.
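
To make that concrete, publishing a shared package as a layer is a single CLI call. Here’s a sketch; the layer name, file list, and runtime are illustrative, not our exact setup:

# Zip up the shared files, then publish them as a new layer version
zip -r git-layer.zip bin/ lib/
aws lambda publish-layer-version \
  --layer-name git-binaries \
  --description "git compiled for the Lambda environment" \
  --zip-file fileb://git-layer.zip \
  --compatible-runtimes nodejs8.10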

Serverless Cross-Cutting Concerns

By now it should be clear that layers unlock some exciting possibilities. Let’s take a step back and note that this is one aspect of a broader set of good operational hygiene. Microservices have major benefits over monolithic architecture. The pieces of your system get simpler. They can be developed, deployed, and scaled independently. On the other hand, your system consists of many pieces, making it more challenging to keep the things that need to be consistent in sync. These cross-cutting concerns, such as security, quality, change management, error reporting, observability, configuration management, continuous delivery, and environment management (to name a few) are critical to success, but addressing them often feels at odds with the serverless team’s desire to focus on core business value and avoid doing undifferentiated infrastructure work.

Addressing cross-cutting concerns for engineering teams is something I’m passionate about, since throughout my career I’ve seen the huge impact (both positive and negative) it has on an engineering org’s ability to deliver. Stackery accelerates serverless teams by addressing the cross-cutting concerns that are inherent in serverless development. This drives technical consistency, increases engineering focus, and multiplies velocity. This is the reason I’m excited to integrate Lambda layers into Stackery; now improving the consistency of your Lambda runtime environments is as easy as selecting the right layers from a dropdown. It’s the same reason we’re regularly adding new cross-cutting capabilities, such as Secrets Management, GraphQL API definition, and visual editing of existing serverless projects.

There’s a saying in software that if something hurts you should do it more often, and typically this applies to cross-cutting problems. Best practices such as automated testing, continuous integration, and continuous delivery all spring from this line of thought. Solving these “hard” cross-cutting problems is the key to unlocking high-velocity engineering: moving with greater confidence towards your goals.

PHP on Lambda? Layers Makes it Possible!

Nuatu Tseggai | November 29, 2018

AWS’s announcement of Lambda Layers means big things for those of us using serverless in production. The ability to create shared components that can be included with any number of Lambdas means you no longer have to zip up your application code and all its dependencies each time you deploy a serverless stack. It also allows you to include dependencies that are much more bespoke to your particular serverless environment.

To enable Stackery customers to use Layers at launch, we took a look at Lambda Layers use cases. I also decided to go a bit further and publish a layer that enables you to write a Lambda in PHP. Keep in mind that this is an early iteration of the PHP runtime Layer, which is not yet ready for production. Feel free to use this Layer to learn about the new Lambda Layers feature, begin experimenting with PHP functions, and send us any feedback; we expect this will evolve as the activity around proof of concepts expands.

What does PHP do?

PHP is a pure computing language, and you can use it to emulate the event processing syntax of a general-purpose Lambda. But really, PHP is used to create websites, so Chase’s implementation maintains that model: your Lambda accepts API Gateway events and processes them through a PHP web server.

How do you use it?

Configure your function as follows:

  1. Set the Runtime to provided
  2. Determine the latest version of the layer: aws lambda list-layer-versions --layer-name arn:aws:lambda:<your region>:887080169480:layer:php71
  3. Add the following Lambda Layer: arn:aws:lambda:<your region>:887080169480:layer:php71:<latest version>

If you are using AWS SAM it’s even easier! Update your function:

Resources:
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      ...
      Runtime: provided
      Layers:
        - !Sub arn:aws:lambda:${AWS::Region}:887080169480:layer:php71:<latest version>

Now let’s write some Lambda code!

<?php
header('Foo: bar');
print('Request Headers:');
print_r(getallheaders());
print('Query String Params:');
print_r($_GET);
?>


The response you get from this code isn’t very well formatted, but it does contain the header information passed by the API Gateway.

If you try any path other than the one with a set API endpoint response, you’ll get an error response that the sharp-eyed will recognize as coming from the PHP web server, which, as mentioned above, is processing all requests.

Implementation Details

Layers can be shared between AWS accounts, which is why the instructions above for adding a layer work: you don’t have to create a new layer for each Lambda. Some key points to remember (a CLI example follows the list):

  • A layer must be published in your region
  • You must specify the version number for a layer
  • For the layer publisher, a version number is an integer that increments each time you deploy your layer
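
For example, attaching a specific version of the PHP layer to an existing function from the CLI looks roughly like this (the function name is illustrative):

aws lambda update-function-configuration \
  --function-name my-php-function \
  --layers arn:aws:lambda:<your region>:887080169480:layer:php71:<latest version>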

How Stackery Makes an Easy Process Easier

Stackery can improve every part of the serverless deployment process, and Lambda Layers are no exception. Stackery makes it easy to configure components like your API gateway.

Stackery also has integrated features in the Stackery Operations Console, which let you add layers to your Lambda.

Conclusions

Lambda Layers offer the potential for more complex serverless applications that make use of a deep library of components, both internal to your team and shared. Try adding some data variables or a few npm modules as a layer today!

Stackery Welcomes Abner Germanow

Nate Taggart | November 21, 2018

Today, I’m proud to announce Abner Germanow is joining us as the company’s first chief marketing officer (CMO). Like Chase Douglas, Sam Goldstein, and me, Abner hails from the halls of New Relic, where we all contributed in our various roles to making New Relic the market leader it is today. Abner has more than 20 years’ experience in global marketing, product, and solution marketing. He has an uncanny ease in advocating and evangelizing technology as well as analyzing customer adoption of new technologies. I think because of Abner’s years of experience as an IDC analyst, he has this way of engaging customers, helping them pinpoint issues, and then producing education and marketing campaigns to reach new customers.

I’m delighted to have Abner join the team. He assumes responsibility for raising up the company’s brand and marketing the tools that we have worked so hard to bring to customers. His experience in reaching the early adopters of new tech solutions and in expanding and engaging partners in the AWS ecosystem maps directly to the goals we have for Stackery.

We’ve come a long way since we launched at the Serverless Conference, one year ago in October. Then, I promised that we would keep building, refining and polishing. Making serverless better and easier to use. Now that Abner has joined us, he will help get the word out that Stackery + AWS help customers ship and iterate new applications faster than they ever have before.

Please give a shout out and welcome Abner. He’s @abnerg on Twitter, and we’ll all be at re:Invent next week.

How Benefit Cosmetics Uses Serverless

Guest Author - Jason Collingwood | November 21, 2018

Founded by twin sisters in San Francisco well before the city became the focal point of tech, Benefit has been a refreshing and innovative answer to cosmetics customers for over 40 years. The company is a major player in this competitive industry, with a presence at over 2,000 counters in more than 30 countries and online. In recent years, Benefit has undergone a swift digital transformation, with a popular eCommerce site in addition to their brick-and-mortar stores.

When I started with Benefit, the dev team’s priority was to resolve performance issues across our stack. After some quick successes, the scope opened up to include exploring how we could improve offline business processes, as well. We started with our product scorecard, which involved measuring:

  • In-site search result ranking.
  • Product placement and mentions across home and landing pages.
  • How high we appeared within a given category.

We needed to capture all this information on several different sites and in a dozen different markets. If you can believe it, we’d been living in a chaotic, manually updated spreadsheet and wasting thousands of hours per year gathering this information. There had to be a better way.

Automating Applications

To monitor a large number of sites in real time, a few SaaS options exist, but the costs can be hard to justify. Moreover, most solutions are aimed at end-to-end testing and don’t offer the kind of customization we needed. With our needs so well-defined, it wasn’t very much work to write our own web scraper and determine the direction we needed to take.

The huge number of pages to load, though, meant that scaling horizontally was a must. Checking thousands of pages synchronously could take multiple days, which just wasn’t going to cut it when we needed daily reports!

“Well, let’s look into this serverless thing.”

Web monitors and testers are a classic case for serverless. The service needs to be independent of our other infrastructure, run regularly, and NOT be anyone’s full-time job to manage! We didn’t have the time or people to spend countless hours configuring resources, and we really didn’t want to be patching servers to keep it running a year in the future.

How it Works

We use Selenium and a headless Chrome driver to load our pages and write the results to a DynamoDB table. Initially, we tried to use PhantomJS but ran into problems when some of the sites we needed to measure couldn’t connect correctly. Unfortunately, we found ourselves confronted with a lot of “SSL Handshake Failed” and other common connection timeout/connection refused request errors.

The hardest part of switching to ChromeDriver instead of PhantomJS is that it’s a larger package, and the max size for an AWS Lambda’s code package is 50 MB. We had to do quite a bit of work to get our function, with all its dependencies, down under the size limit.
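
For flavor, the core of a scraper like ours boils down to something like the following Node sketch. The binary path, table name, and URL are illustrative, not our production code:

const webdriver = require('selenium-webdriver');
const chrome = require('selenium-webdriver/chrome');
const AWS = require('aws-sdk');

exports.handler = async () => {
  // Point Selenium at the headless Chrome binary packaged with the function
  const options = new chrome.Options()
    .addArguments('--headless', '--disable-gpu', '--no-sandbox', '--single-process');
  options.setChromeBinaryPath('/var/task/bin/headless-chromium');

  const driver = new webdriver.Builder()
    .forBrowser('chrome')
    .setChromeOptions(options)
    .build();

  try {
    // Load a page and pull out whatever the scorecard needs
    await driver.get('https://www.example.com/search?q=mascara');
    const title = await driver.getTitle();

    // Write the result to DynamoDB
    const db = new AWS.DynamoDB.DocumentClient();
    await db.put({
      TableName: 'scorecard-results',
      Item: { site: 'example.com', scannedAt: Date.now(), title }
    }).promise();
  } finally {
    await driver.quit();
  }
};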

The Trouble of Complexity

At this point, even though we now had a working Lambda, we weren’t completely out of the woods. Hooking up all the other services proved to be a real challenge. We needed our Lambdas to connect to DynamoDB, multiple S3 buckets, Kinesis streams, and an API Gateway endpoint. Then, in order to scale, we needed to be able to build the same stack multiple times.

The Serverless Application Model (SAM) offers some relief from rebuilding and configuring stacks by hand in the AWS console, but the YAML syntax and the specifics of the AWS implementation make it pretty difficult to use freehand. For example, a timer to periodically trigger a Lambda is not a top-level element, nor is it a direct child of the Lambda. Rather, it’s a ‘rule’ on a Lambda. There are no examples of this in the AWS SAM documentation.
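
For reference, a scheduled trigger in SAM ends up looking roughly like this: an Events entry of type Schedule nested under the function, which CloudFormation turns into a CloudWatch Events rule behind the scenes (resource names and the schedule are illustrative):

Resources:
  ScraperFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs8.10
      Events:
        DailyScan:
          Type: Schedule
          Properties:
            Schedule: rate(1 day)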

At one point, we were so frustrated that we gave up and manually zipped up the package and uploaded via the AWS Console UI… at every change to our Lambdas! Scaling a lot of AWS services is simple, but we needed help to come up with a deployment and management process that could scale.

How Stackery Helps

It’s no surprise that when people first see the Stackery Operations Console, they assume it’s just a tool for diagramming AWS stacks. Connecting a Lambda to DynamoDB involves half a dozen menus on the AWS console, but Stackery makes it as easy as drawing a line.

Stackery outputs SAM YAML, meaning we don’t have to write it ourselves, and the changes show up as commits to our code repository so we can learn from the edits that Stackery makes.

It was very difficult to run a service even as simple as ours from scratch, and now it’s hard to imagine ever doing it without Stackery. But if we ever did stop using the service, it’s nice to know that all of our stacks are stored in our repositories, along with the SAM YAML I would need to deploy those stacks via CloudFormation.

Results

With the headaches of managing the infrastructure out of the way, we could focus our efforts on the product and new features. Within a few months, we were able to offload maintenance of the tool to a contractor. A simple request a few times a day starts the scanning/scraping process, and the only updates needed are to the CSS selectors used to find pertinent elements.

Lastly, since we’re using all of these services on AWS, there’s no need to set up extra monitoring tools, or update them every few months, or generate special reports on their costs. The whole toolkit is rolled into AWS and, best of all, upkeep is minimal!

GitHub Actions: Automating Serverless Deployments

Toby Fee | November 20, 2018

The whole internet is abuzz over GitHub Actions, if by ‘whole internet’ you mean ‘the part of the internet that is obsessed with serverless ops’ and by ‘abuzz’ you mean ‘aware of’.

But Actions are a bit surprising! GitHub is a company that has famously focused on doing a single thing extremely well. As the ranks of developer-tooling SaaS companies swell by the day, you would think GitHub would have long ago joined the fray. Wouldn’t you like to try out a browser-based IDE, CI/CD tools, or live debugging tools, with the GitHub logo at the top left corner?

With the GitHub Actions product page promising to let you build your own ‘workflow’ from private and shared ‘actions’ in any order you want, with each action run in its own Docker container, and the whole thing configured with a simple-but-powerful scripting logic, GitHub Actions feels like a big, ambitious expansion of the product. Could this be the influence of notoriously over-ambitious new leadership at Microsoft?

In reality, GitHub Actions are a powerful new tool for expanding what you do based on GitHub events, and nothing else!

What can it do?

A lot! Again, in the realm of ‘doing something to your repo when something happens to your repo.’ Some use cases that stand out (the first is sketched after the list):

  • Build your assets when you commit to Master
  • Raise alerts for critical issues
  • Take some custom action when commits get comments
  • Notify stakeholders when feature branches are merged to production
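
To give a taste, building assets on push in the beta’s main.workflow HCL syntax looks roughly like this (the Docker image and commands are illustrative):

workflow "Build on push" {
  on = "push"
  resolves = ["Build assets"]
}

action "Build assets" {
  # Each action runs in its own Docker container
  uses = "docker://node:8"
  runs = "npm"
  args = "run build"
}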

What can’t it do?

A whole lot! Workflows can’t:

  • Respond to anything other than GitHub repo events (you can’t send a request from anywhere else to kick off a workflow)
  • Take more than an hour
  • Have more than 100 actions, a limitation that seems pretty minor since actions can do arbitrarily large tasks

Overall Impressions

GitHub Actions are definitely a step in the right direction, since both the configuration for a workflow and the Docker images for each action can all be part of a single repo, managed like other code. And as I and others have often harped on: one should always prefer managing code over managing config. GitHub Actions increase the likelihood that your whole team will be able to see how the workflow around your code is supposed to work, and that’s an unalloyed benefit to your team.

“I’m sold, GitHub Actions forever! I’ll add them to master tomorrow.”

Bad news, sport: GitHub Actions is in beta with a waitlist, and while GitHub has its sights set on integrating Actions with your production process, a warning at the top of the developer guide explicitly states that GitHub Actions isn’t ready to do that.

So for now head on over and get on the waiting list for the feature, and try it out with your dev branches sometime soon.

GitHub makes no secret of the fact that Actions replace the task of building an app to receive webhooks from your repository. If you’d like to build an app in the simplest possible structure, my coworker Anna Spysz wrote about how to receive GitHub webhooks in just a few steps. Further, using Stackery makes it easy to hook your app up to a docker container to run your build tasks.

Building Slack Bots for Fun: A Serverless Release Gong

Anna Spysz | November 16, 2018

We have a running joke at Stackery regarding our tiny little gong that’s used to mark the occasion when we get a new customer.

[Photo: our sad, tiny gong]

So tiny.

And while I’m all about the sales team celebrating their successes (albeit with a far-too-small gong), I felt like the dev team needed its own way to commemorate major product releases and iterations.

Then I saw that Serverless Framework is doing its No Server November challenge, and I thought, what a perfect way to show off our multiple framework support while iterating on our Github Webhooks Tutorial to support Serverless Framework projects!

Starting from Scratch…Almost

Stackery makes it easy to import an existing stack or create a new stack based on an existing template. And, conveniently, I had already built a GitHub webhook listener just the week before as part of the webhook tutorial. However, the rules of the competition specifically state that “to qualify, the entry must use the Serverless Framework and a serverless backend” - and I was curious to see the differences when building out my app using that framework as compared to our default (AWS SAM).

So the first thing I did was create an empty Serverless Framework template I could use to build my app on. This was quite simple: I just created a serverless.yml file in a new directory and added the following:

service: serverless-gong

frameworkVersion: ">=1.4.0 <2.0.0"

provider:
  name: aws
  runtime: nodejs8.10

I initialized a new git repository, and added, committed and pushed the serverless.yml file to it.

Building in Stackery

Now it was time to import my new Serverless Framework boilerplate into Stackery so I could start adding resources. In the Stackery App, I navigated to my Stacks page, and clicked the Create New Stack button in the upper right, filling it out like so:

[screenshot]

Then, in the Stackery Dashboard, I created an API Gateway resource with a POST route with a /webhook path and a Function resource named handleGong, and connected them with a wire. All of this, including saving and using environment variables for your GitHub secret, is documented in the webhook tutorial, so I won’t go through it again. In the end, I had a setup very similar to that found at the end of that tutorial, with the exception of having a serverless.yml file rather than a template.yml for the configuration, and having everything in one directory (which was fine for a small project like this, but not ideal in the long run).

With the added resources, my serverless configuration now looked like this:

service: serverless-gong
frameworkVersion: '>=1.4.0 <2.0.0'
provider:
  name: aws
  runtime: nodejs8.10
functions:
  handleGong:
    handler: handler.gongHandler
    description:
      Fn::Sub:
        - 'Stackery Stack #{StackeryStackTagName} Environment #{StackeryEnvironmentTagName} Function #{ResourceName}'
        - ResourceName: handleGong
    events:
      - http:
          path: /webhook
          method: POST
    environment:
      GITHUB_WEBHOOK_SECRET:
        Ref: StackeryEnvConfiggithubSecretAsString
      SLACK_WEBHOOK_URL:
        Ref: StackeryEnvConfigslackWebhookURLAsString
resources:
  Parameters:
    StackeryStackTagName:
      Type: String
      Description: Stack Name (injected by Stackery at deployment time)
      Default: serverless-gong
    StackeryEnvironmentTagName:
      Type: String
      Description: Environment Name (injected by Stackery at deployment time)
      Default: dev
    StackeryEnvConfiggithubSecretAsString:
      Type: AWS::SSM::Parameter::Value<String>
      Default: /Stackery/Environments/<StackeryEnvId>/Config/githubSecret
    StackeryEnvConfigslackWebhookURLAsString:
      Type: AWS::SSM::Parameter::Value<String>
      Default: /Stackery/Environments/<StackeryEnvId>/Config/slackWebhookURL
  Metadata:
    StackeryEnvConfigParameters:
      StackeryEnvConfiggithubSecretAsString: githubSecret
      StackeryEnvConfigslackWebhookURLAsString: slackWebhookURL
plugins:
  - serverless-cf-vars

Look at all that yaml I didn't write!

And my Dashboard looked like so:

[screenshot]

Since I had already written a webhook starter function that at the moment logged to the console, it didn’t feel necessary to reinvent the wheel, so I committed in Stackery, then git pulled my code to see the updates, and created a handler.js file in the same directory as the serverless.yml. In it, I pasted the code from my previous webhook function - this was going to be my starting point:

const crypto = require('crypto');
function signRequestBody(key, body) {
  return `sha1=${crypto.createHmac('sha1', key).update(body, 'utf-8').digest('hex')}`;
}
// The webhook handler function
exports.gongHandler = async event => {
  // get the GitHub secret from the environment variables
  const token = process.env.GITHUB_WEBHOOK_SECRET;
  const calculatedSig = signRequestBody(token, event.body);
  let errMsg;
  // get the remaining variables from the GitHub event
  const headers = event.headers;
  const sig = headers['X-Hub-Signature'];
  const githubEvent = headers['X-GitHub-Event'];
  const body = JSON.parse(event.body);
  // this determines username for a push event, but lists the repo owner for other events
  const username = body.pusher ? body.pusher.name : body.repository.owner.login;
  const message = body.pusher ? `${username} pushed this awesomeness/atrocity through (delete as necessary)` : `The repo owner is ${username}.`
  // get repo variables
  const { repository } = body;
  const repo = repository.full_name;
  const url = repository.url;

  // check that a GitHub webhook secret variable exists, if not, return an error
  if (typeof token !== 'string') {
    errMsg = 'Must provide a \'GITHUB_WEBHOOK_SECRET\' env variable';
    return {
      statusCode: 401,
      headers: { 'Content-Type': 'text/plain' },
      body: errMsg,
    };
  }
  // check validity of GitHub token
  if (sig !== calculatedSig) {
    errMsg = 'X-Hub-Signature incorrect. Github webhook token doesn\'t match';
    return {
      statusCode: 401,
      headers: { 'Content-Type': 'text/plain' },
      body: errMsg,
    };
  }

  // print some messages to the CloudWatch console
  console.log('---------------------------------');
  console.log(`\nGithub-Event: "${githubEvent}" on this repo: "${repo}" at the url: ${url}.\n ${message}`);
  console.log('Contents of event.body below:');
  console.log(event.body);
  console.log('---------------------------------');

  // return a 200 response if the GitHub tokens match
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      input: event,
    }),
  };

  return response;
};

At this point, I prepared and did the initial deploy of my stack in order to get the REST API endpoint for the GitHub webhook I needed to set up. Again, the webhook tutorial runs through the deployment and webhook setup process step by step, so I won’t repeat it here.

Using the REST API /webhook URL, I created a webhook in our Stackery CLI repo that was now listening for events, and I confirmed in my CloudWatch logs that it was indeed working.

Bring on the Gong

The next step was to modify the function so it “gonged” our Slack channel when our Stackery CLI repo was updated with a new release. To do that, I had to create a custom Slack app for our channel and set up its incoming webhooks. Luckily, Slack makes that really easy to do, and I just followed the step-by-step instructions in Slack’s webhook API guide to get going.

I set up a #gong-test channel in our Slack for testing so as to not annoy my co-workers with incessant gonging, and copied the URL Slack provided (it should look something like https://hooks.slack.com/services/T00000000/B00000000/12345abcde).
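
If you want to sanity-check a Slack webhook before wiring it into a Lambda, a one-liner from Slack’s own docs does the trick (swap in your real URL):

curl -X POST -H 'Content-type: application/json' \
  --data '{"text":"Testing the gong channel!"}' \
  https://hooks.slack.com/services/T00000000/B00000000/12345abcde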

Before editing the Lambda function itself, I needed a way for it to reference that URL as well as my GitHub secret without hard-coding it in my function that would then be committed to my public repo (because that is a Very Bad Way to handle secrets). This is where Stackery Environments come in handy.

I saved my GitHub secret and Slack URL in my environment config like so:

[screenshot]

Then referenced it in my function:

[screenshot]

And added it to my function code in the next step, using process.env.GITHUB_WEBHOOK_SECRET and process.env.SLACK_WEBHOOK_URL as the variables.

Final Ingredient

Since we’re automating our gong, what’s more appropriate than an automated gong? After a somewhat frustrating YouTube search, I found this specimen (linked in the function code below):

An auto-gong for our automated app? Perfect! Now let’s use our function to send that gong to our Slack channel.

Here’s the code for the final gongHandler function in handler.js:

const crypto = require('crypto');
const Slack = require('slack-node');

// validate your payload from GitHub
function signRequestBody(key, body) {
  return `sha1=${crypto.createHmac('sha1', key).update(body, 'utf-8').digest('hex')}`;
}
// webhook handler function
exports.gongHandler = async event => {
  // get the GitHub secret from the environment variables
  const token = process.env.GITHUB_WEBHOOK_SECRET;
  const calculatedSig = signRequestBody(token, event.body);
  let errMsg;
  // get the remaining variables from the GitHub event
  const headers = event.headers;
  const sig = headers['X-Hub-Signature'];
  const githubEvent = headers['X-GitHub-Event'];
  const body = JSON.parse(event.body);
  // get repo variables
  const { repository, release } = body;
  const repo = repository.full_name;
  const url = repository.url;
  // set variables for a release event
  let releaseVersion, releaseUrl, author = null;
  if (githubEvent === 'release') {
    releaseVersion = release.tag_name;
    releaseUrl = release.html_url;
    author = release.author.login;
  }

  // check that a GitHub webhook secret variable exists, if not, return an error
  if (typeof token !== 'string') {
    errMsg = 'Must provide a \'GITHUB_WEBHOOK_SECRET\' env variable';
    return {
      statusCode: 401,
      headers: { 'Content-Type': 'text/plain' },
      body: errMsg,
    };
  }
  // check validity of GitHub token
  if (sig !== calculatedSig) {
    errMsg = 'X-Hub-Signature incorrect. Github webhook token doesn\'t match';
    return {
      statusCode: 401,
      headers: { 'Content-Type': 'text/plain' },
      body: errMsg,
    };
  }

  // if the event is a 'release' event, gong the Slack channel!
  const webhookUri = process.env.SLACK_WEBHOOK_URL;

  const slack = new Slack();
  slack.setWebhook(webhookUri);

  // send slack message
  if (githubEvent === 'release') {
    slack.webhook({
      channel: "#gong-test", // your desired channel here
      username: "gongbot",
      icon_emoji: ":gong:", // because Slack is for emojis
      text: `It's time to celebrate! ${author} pushed release version ${releaseVersion}. See it here: ${releaseUrl}!\n:gong:  https://youtu.be/8nBOF5sJrSE?t=11` // your message
    }, function(err, response) {
      console.log(response);
      if (err) {
        console.log('Something went wrong');
        console.log(err);
      }
    });
  }

  // (optional) print some messages to the CloudWatch console (for testing)
  console.log('---------------------------------');
  console.log(`\nGithub-Event: "${githubEvent}" on this repo: "${repo}" at the url: ${url}.`);
  console.log(event.body);
  console.log('---------------------------------');

  // return a 200 response if the GitHub tokens match
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      input: event,
    }),
  };

  return response;
};

Finally, I needed to add a package.json file so that I could use dependencies. When creating a function using an AWS SAM template, Stackery would do this for you automatically, but in this case I had to create the file and add the following myself:

{
  "private": true,
  "dependencies": {
    "aws-sdk": "~2",
    "slack-node": "0.1.8"
  }
}

I added, committed and pushed the new code, re-deployed my Serverless Framework stack, then added another GitHub webhook to a test repo. I created a GitHub release in my test repo, and waited in anticipation.

Milliseconds later, I heard the familiar click-click-click of Slack…

[screenshot]

Pretty awesome, if I do say so myself. 🔔

A few notes:

  • I used the slack-node NPM package to make life easier. I could have used the request library or the built-in https module instead (and you can if you want to avoid using external dependencies).
  • The GitHub API is very helpful for figuring out what kind of response to expect from your webhook. That’s how I determined the values to set for releaseVersion, releaseUrl, and author.
  • When you console.log() in your serverless function, the results can be seen in the AWS CloudWatch logs. Stackery provides a convenient direct link for each function.

[screenshot]

  • This serverless application should fit within your AWS free tier, but keep an eye on your logs and billing just in case.

If you’d like to make your own serverless gong, all of the configuration code is available in my Serverless Gong GitHub repository. Just create a new stack in your Stackery account (you can sign up for a free trial if you don’t have one yet), choose Create New Repo as the Repo Source, and select Specify Remote Source to paste in the link to my repo as a template.

Add your GitHub and Slack environment parameters, deploy your stack, and sit back and wait for your Slack to gong!

AWS ReInvent: Serverless, Stackery, and Corey Quinn of LastWeekInAWS

Abner Germanow | November 14, 2018

Welcome, savvy builder. If you’ve made it to our corner of the Internet and are headed to re:Invent, you are in the right place.

We want you to leave Las Vegas with the savvy to choose how and when to apply the growing menu of serverless capabilities to your initiatives. To help you, we’re sending our serverless-first engineers to Las Vegas with three goals.

  1. Share experiences building AWS serverless apps
  2. Show app builders how Stackery + AWS serverless offerings accelerate velocity and confidence
  3. Connect the AWS serverless community

Sharing Our Serverless Experience

As we build our serverless-first service for teams, we examine the developer and operations experience to make both faster and safer for our customers. We’ve learned a few things along the way about what makes serverless awesome, when to insert container services into the mix, and how workflows differ from services we’ve built in the past.

At our booth we’ll be holding demonstrations walking through what we’ve learned and where we find developers and teams working differently. Keep an eye on Twitter for exact timing.

Booth Talks Include:

  • ICYMI: Corey Quinn From Last Week In AWS (Thurs @ 2:15)
  • PSA: Permission Scoping Accounts, Services, and Humans
  • Namespacing for fun and dev/test/prod environments
  • A look at the new AWS [REDACTED]
  • Where and when we use containers in our serverless-first apps
  • Using existing resources with serverless applications
  • How to build state machines around serverless apps
  • Instrumentation and monitoring serverless apps with Stackery and Epsagon
  • Testing serverless apps
  • Secrets Manager vs Parameter Store
  • Lambda@Edge vs Lambda: What You Should Know
  • Systems Manager Parameter Store

Show off Stackery’s Serverless Acceleration Software

If you are new to Stackery or an old pro, a lot has changed just in the last month!

“We don’t need a whiteboard, I’ll mock it up in Stackery.” -Customer using Stackery to break a monolith into microservices.

We’ve made it even easier to visually mock up the architectural intent of your app with a new template-to-visual-architecture toggle editor that you can take straight to packaging and deployment. There’s also GraphQL support, the ability to import projects in SAM or Serverless.com frameworks, Lambda@Edge, and much more.

Drop by to see the latest, or better yet, sign up for a slot and we’ll make sure our engineers are dedicated to you.

Corey Quinn of Last Week In AWS and Connecting the Community

AWS moves fast. Almost as fast as serverless-first teams. On Thursday at 2:15, Corey Quinn of the Last Week In AWS Newsletter will be at our booth for an exclusive ICYMI to review announcements you probably missed. You can get his snark-a-take (I just made that up) on the keynotes, serverless, and more.

Our invite only serverless insiders party is designed to connect the pioneers with those who are ramping up in 2019. If you are interested in an invite drop us a note.

Finally, like all serverless teams, we abhor repeating ourselves, so for a guide to serverless sessions, check out these guides:

See you in Vegas!

How to find us at re:Invent: Booth #2032, about 40 feet from the dev lounge in the Sands/Venetian Hall B. Contact our team: reinvent@stackery.io

Get the Serverless Development Toolkit for Teams

Sign up now for a 60-day free trial. Contact one of our product experts to get started building amazing serverless applications today.
