Stacks on Stacks

The Serverless Ecosystem Blog by Stackery.

Posts on Tutorials & Guides

The Anatomy of a Serverless App

Toby Fee | February 11, 2019

Serverless has, for the last year or so, felt like an easy term to define: code run in a highly managed environment with (almost) no configuration of the underlying compute layer done by your team. Fair enough, but what is a serverless application? A Lambda isn't an app by itself; heck, it can't even communicate with the world outside of Amazon Web Services (AWS) by itself, so there must be more to a serverless app than that. Let's explore a serverless app's anatomy: the features that should be shared by all the serverless apps you'll build.

Serverless applications have three components:

  • Business logic: a function (Lambda) that defines the business logic
  • Building blocks: resources such as databases, API gateways, authentication services, IoT, machine learning, container tasks, and other cloud services that support a function
  • Workflow phase dependencies: environment configuration and secrets that define, and enable access to, the dependencies unique to each phase of the development workflow.

Taken together, these three components create a single ‘Active Stack’ when running within an AWS region.

Review: What’s a Lambda?

I could write this piece in a generic tone and call Lambdas 'Serverless Functions'; after all, both Microsoft and Google have similar offerings. But Lambdas have fast become the dominant form of serverless functions, with features like Lambda Layers showing how they're maturing into an offering both the weekend tinkerer and the enterprise team can use effectively.

But what are Lambdas again? They're blobs of code that AWS will run for you in a virtualized environment without you having to do any configuration. It might make more sense to describe how Lambdas get used (a minimal handler sketch follows the list):

  • You write a blob of Node, Ruby, or several other languages, all in the general mode of ‘take in a triggering event, kick off whatever side effects you need to, then return something’
  • Upload your code blob to AWS Lambda
  • Send your Lambda requests
  • AWS starts up your code in a virtual environment, complete with whatever software packages you required
  • Look at the response!
  • Send your Lambda 10,000 requests in a minute
  • AWS starts up a bunch of instances of your code, each one handling several requests
  • Look at all these responses!
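
In code, the smallest possible version of that blob looks something like the sketch below. This is a generic illustration, not tied to any particular app in this post, and it assumes an API Gateway proxy-style event:

// A minimal Lambda handler: take in a triggering event, kick off any side
// effects you need, then return something. Assumes an API Gateway proxy event.
exports.handler = async (event) => {
  const name = (event.queryStringParameters || {}).name || 'world';

  // ...kick off side effects here: write to a database, call another service...

  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};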

Are Lambdas like containers? Sort of, in that you don't manage storage or the file system directly; it's all set in configuration. But you also don't manage Lambda startup, responses, or routing directly; you leave all of that to AWS.

Note that Lambdas do not handle any part of their communication with the outside world. They can be triggered by events from other AWS services but not by direct HTTP requests; for that, a Lambda needs to be connected to an API gateway, or more indirectly to another AWS service (e.g. a Lambda can respond to events from an S3 bucket, which could be HTTP uploads).

What supports our Lambdas?

We've already implied the need for at least one 'service' outside of just a Lambda: an API gateway. But that's not all we need: with a virtualized operating system layer, we can't store anything on our Lambdas between runs, so we need some kind of storage. Lambdas shouldn't be used for extremely long-running tasks, so we need a service for those. Finally, we may want to make decisions about which Lambda should respond based on the type of request, so we might need to connect Lambdas to other Lambdas.

In general, we could say that every function will have a resource architecture around it that lets it operate like a fully featured application. The capabilities and palette of offerings of this resource architecture continue to expand rapidly, both in the breadth of offerings for IoT, AI, machine learning, security, databases, containers, and more, and in services to improve performance, connectivity, and cost profiles.

With all these pieces necessary to make a Lambda do any actual work, AWS has a service that lets us treat them as a unit. CloudFormation can treat a complete serverless 'stack' as a configuration file that can be moved and deployed in different environments. With Stackery you can build stacks on an easy graphical canvas, and the files it produces are the same YAML that CloudFormation uses natively!

Secrets

Lambdas are blobs of code that should be managed through normal code-sharing platforms like GitHub. Two problems present themselves right away: how do we tell our Lambda where it's running, and how do we give it the secrets it needs to interact with other services?

The most common example of this will be accessing a database.

Note: If we're using an AWS-hosted serverless database like DynamoDB, the following steps should not be necessary, since we can grant the Lambda permissions on our DB within the Lambda's settings. Using Stackery to connect Lambdas to AWS databases makes this part as easy as drawing a line!

We need secrets to authenticate to our DB, but we also need our Lambda to know whether it’s running on staging so that it doesn’t try to update the production database during our test runs.
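
As a rough sketch (the variable names here are hypothetical, not from any particular stack), that usually means the function reads its environment and secrets at runtime rather than hard-coding them:

// Hypothetical example: the same function code runs in dev, staging, and prod,
// but each environment injects its own configuration and secrets.
const stage = process.env.STAGE || 'dev';      // e.g. 'dev', 'staging', or 'prod'
const dbHost = process.env.DB_HOST;            // points at that stage's database
const dbPassword = process.env.DB_PASSWORD;    // injected from a secrets store, never committed

exports.handler = async (event) => {
  console.log(`Running in ${stage}, talking to ${dbHost}`);
  // ...connect to the database with dbHost/dbPassword and do the real work...
  return { statusCode: 200, body: JSON.stringify({ stage }) };
};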

So we can identify three key sections of our serverless app: our function, its resources, and the secrets/configuration that make up its environment.

The Wider World

In a highly virtualized environment it's counter-intuitive to ask 'where is my code running?', but while you can't put a pin in a map, you must spread your app across AWS availability zones to ensure true reliability. We should therefore draw a box around our 'environment': our stack, its configuration, and its secrets. This entire system can exist across multiple zones, or even in services other than AWS (if you really enjoy the headache of writing code and config for multiple clouds).

How many ‘Active Stacks’ is your team running?

An active stack is a complete set of functions, resources, and environment. If you have the same function code and resources running on three environments (e.g. dev, test, and prod) you have three active stacks. If you take your production stack and distribute it to three different AWS regions, you again have three active stacks.

How this anatomy can help your team

Identifying unifying features is not, in itself, useful for your team, but it is an essential step in planning. We cannot adopt a serverless model for part of our architecture without a plan to build and manage all these features. You must have:

  • Programmers to write your functions and manage their source code
  • Cloud professionals to assign and control the resources those functions need
  • Operations and security to deploy these stacks in the right environments

You also need a plan for how these people will interact and coordinate on releases, updates, and emergencies (I won’t say outages since spreading your app across availability zones should make that vanishingly rare).

Later articles will use this understanding of the essential parts of a serverless app to explore the key decisions you must make as you plan your app.

How Stackery Can Help

Now that we've defined these three basic structures, it would be nice if they were truly modular within AWS. While Lambda code can easily be re-used and deployed in different contexts, it's more difficult to use a set of resources or an environment as a module that you can move about with ease.

Stackery makes this extremely easy: you can mix and match ‘stacks’ and their environments, and easily define complete applications and re-deploy them in different AWS regions.

Creating Cognito User Pools with CloudFormation

Matthew Bradburn | January 31, 2019

I’ve been working on creating AWS Cognito User Pools in CloudFormation, and thought this would be a good time to share some of what I’ve learned.

As an overview of this project:

  • For sign-up, I’m creating Cognito users directly from my server app. It’s also possible to have users create their own accounts in Cognito, but that’s not what I want.
  • I want to use email addresses as the user names, rather than having user names with separate associated email addresses.
  • I don't want the users to have to mess around with temporary passwords. This is part of the ordinary Cognito workflow, but I set the initial password in my server-side code and then immediately reset the password to the same value. So there is a temporary password, but the users don't notice it (a rough sketch of this flow follows the list).
  • Sign-in is a transaction directly between the client-side app and Cognito; the client gets a JWT (JSON Web Token) from Cognito, which is validated by my AuthenticatedApi function on the back-end.
  • The Cognito User Pool, Lambda functions, etc., are created by CloudFormation with a SAM (Serverless Application Model) template.
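
For the server-side sign-up described above, the flow looks roughly like the sketch below. This is not copied from the sample repo; it is one way to do it with the AWS SDK's admin APIs, and it is why ADMIN_NO_SRP_AUTH shows up in the user pool client later:

// Rough sketch: create a user server-side and immediately clear the
// temporary-password step. Not the repo's exact code.
const AWS = require('aws-sdk');
const cognito = new AWS.CognitoIdentityServiceProvider();

async function adminSignUp(userPoolId, clientId, email, password) {
  // Create the user with a temporary password and suppress the invitation email.
  await cognito.adminCreateUser({
    UserPoolId: userPoolId,
    Username: email,
    TemporaryPassword: password,
    MessageAction: 'SUPPRESS',
    UserAttributes: [{ Name: 'email', Value: email }],
  }).promise();

  // Sign in with the temporary password...
  const auth = await cognito.adminInitiateAuth({
    UserPoolId: userPoolId,
    ClientId: clientId,
    AuthFlow: 'ADMIN_NO_SRP_AUTH',
    AuthParameters: { USERNAME: email, PASSWORD: password },
  }).promise();

  // ...and answer the NEW_PASSWORD_REQUIRED challenge with the same password,
  // so the user never notices the temporary one.
  if (auth.ChallengeName === 'NEW_PASSWORD_REQUIRED') {
    await cognito.adminRespondToAuthChallenge({
      UserPoolId: userPoolId,
      ClientId: clientId,
      ChallengeName: 'NEW_PASSWORD_REQUIRED',
      ChallengeResponses: { USERNAME: email, NEW_PASSWORD: password },
      Session: auth.Session,
    }).promise();
  }
}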

Sample Source

The source code for this project is available from my GitHub. The disclaimer is that the source is pretty rough and should be tidied before being used in production.

Template Generation

I used the Stackery editor to lay out the components and generate a template:

The template is available in the Git repo as template.yaml.

This is a simple application; I have an API Gateway that my client app will hit, with one endpoint to effect sign-up and one to demonstrate an authenticated API. Each of these endpoints invokes a separate Lambda function. Those functions have access to my User Pool.

I've wired up the User Pool's triggered functions just as an experiment. All the triggers currently invoke my CognitoTriggered function, which does nothing but log the input messages. According to my understanding, these functions work by modifying the input message and returning it, but my function returns the input message unmolested.

I’ve hand-edited the SAM template to add the user pool client:

  UserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      ClientName: my-app
      GenerateSecret: false
      UserPoolId: !Ref UserPool
      ExplicitAuthFlows:
        - ADMIN_NO_SRP_AUTH

I've set GenerateSecret to false because in a web app it's hard to keep a secret of this type. We use ADMIN_NO_SRP_AUTH during the admin user-creation process. I've also added an environment variable to each of my functions so they'll get the user pool client ID.

Deployment

Of course Stackery makes it simple to deploy this application into AWS, but it should be pretty easy to give the template directly to CloudFormation. You may want to go through and whack the parameters like ‘StackTagName’ that are added by the Stackery runtime.

Client Tester App

Once you've deployed the app, there are a couple of parameters from the running app to be copied to the client. These go in the source code near the top. For instance, the URI of the API Gateway is needed by the client but isn't available until after the app is deployed.

This may not be an issue for you if you're doing a web client app instead of a Node.js app, but in my case I'm using the NPM package named amazon-cognito-identity-js to talk to Cognito for authentication. That package depends on the fetch() API, which browsers have but Node.js does not. I've included the package source directly in my repo and added a use of node-fetch-polyfill in amazon-cognito-identity-js/lib/Client.js.
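
For reference, the sign-in call that package exposes looks roughly like this; it is a sketch rather than the tester app's exact code, and the pool and client IDs are placeholders:

// Rough sketch of signing in with amazon-cognito-identity-js (placeholder IDs).
const { CognitoUserPool, CognitoUser, AuthenticationDetails } = require('amazon-cognito-identity-js');

const pool = new CognitoUserPool({
  UserPoolId: 'us-east-1_EXAMPLE',  // placeholder
  ClientId: 'EXAMPLECLIENTID'       // placeholder
});

function signIn(email, password) {
  const user = new CognitoUser({ Username: email, Pool: pool });
  const details = new AuthenticationDetails({ Username: email, Password: password });
  return new Promise((resolve, reject) => {
    user.authenticateUser(details, {
      onSuccess: (session) => resolve(session.getAccessToken().getJwtToken()),
      onFailure: reject
    });
  });
}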

Run ./client-app.js --sign-up --email <email> --password <pass> to create a new user in your Cognito pool. In real apps you should never accept passwords on the command line like this.

Once you’ve created a user, run ./client-app.js --sign-in --email <email> --password <pass>, giving it the new user’s email and password, to get a JWT for the user.

Assuming sign-in succeeds, that command prints the JWT created by Cognito. You can then test the authenticated API with ./client-app.js --fetch --token <JWT>.

Areas for Improvement

This is rather marginal sample code, as I mentioned, and there are several obvious areas for improvement:

  • The amazon-cognito-identity-js package isn’t meant for Node.js. I wonder if it makes sense to use the AWS SDK directly.

  • The AuthenticatedApi function gets public keys from Cognito on every request; they should be cached (one possible approach is sketched after these notes).

  • The client-app uses the access token, but a real client app would have to be prepared to use the refresh token to generate a new access token periodically.
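
For the caching point above, one option (a sketch that assumes node-fetch, jwk-to-pem, and jsonwebtoken are available; it is not the sample repo's code) is to keep the fetched keys in a module-level variable so they survive warm invocations:

// Cache Cognito's public keys across warm invocations.
const fetch = require('node-fetch');
const jwkToPem = require('jwk-to-pem');
const jwt = require('jsonwebtoken');

let cachedPems = null; // persists while the Lambda container stays warm

async function getPems(region, userPoolId) {
  if (cachedPems) return cachedPems;
  const url = `https://cognito-idp.${region}.amazonaws.com/${userPoolId}/.well-known/jwks.json`;
  const { keys } = await (await fetch(url)).json();
  cachedPems = keys.reduce((acc, key) => {
    acc[key.kid] = jwkToPem(key);
    return acc;
  }, {});
  return cachedPems;
}

async function verifyToken(token, region, userPoolId) {
  const { kid } = jwt.decode(token, { complete: true }).header;
  const pems = await getPems(region, userPoolId);
  return jwt.verify(token, pems[kid]); // throws if the token is invalid or expired
}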

Building Slack Bots for Fun: A Serverless Release Gong

Anna Spysz | November 16, 2018

We have a running joke at Stackery regarding our tiny little gong that’s used to mark the occasion when we get a new customer.

sad tiny gong

So tiny.

And while I’m all about the sales team celebrating their successes (albeit with a far-too-small gong), I felt like the dev team needed its own way to commemorate major product releases and iterations.

Then I saw that Serverless Framework is doing its No Server November challenge, and I thought, what a perfect way to show off our multiple framework support while iterating on our Github Webhooks Tutorial to support Serverless Framework projects!

Starting from Scratch…Almost

Stackery makes it easy to import an existing stack or create a new stack based on an existing template. And, conveniently, I had already built a GitHub webhook listener just the week before as part of the webhook tutorial. However, the rules of the competition specifically state that “to qualify, the entry must use the Serverless Framework and a serverless backend” - and I was curious to see the differences when building out my app using that framework as compared to our default (AWS SAM).

So the first thing I did was create an empty Serverless Framework template I could use to build my app on. This was quite simple - I just created a serverless.yml file in a new directory and added the following:

service: serverless-gong

frameworkVersion: ">=1.4.0 <2.0.0"

provider:
  name: aws
  runtime: nodejs8.10

I initialized a new git repository, and added, committed and pushed the serverless.yml file to it.

Building in Stackery

Now it was time to import my new Serverless Framework boilerplate into Stackery so I could start adding resources. In the Stackery App, I navigated to my Stacks page, and clicked the Create New Stack button in the upper right, filling it out like so:

screenshot

Then, in the Stackery Dashboard, I created an API Gateway resource with a POST route with a /webhook path and a Function resource named handleGong, and connected them with a wire. All of this, including saving and using environment variables for your GitHub secret, is documented in the webhook tutorial, so I won’t go through it again. In the end, I had a setup very similar to that found at the end of that tutorial, with the exception of having a serverless.yml file rather than a template.yml for the configuration, and having everything in one directory (which was fine for a small project like this, but not ideal in the long run).

With the added resources, my serverless configuration now looked like this:

service: serverless-gong
frameworkVersion: '>=1.4.0 <2.0.0'
provider:
  name: aws
  runtime: nodejs8.10
functions:
  handleGong:
    handler: handler.gongHandler
    description:
      Fn::Sub:
        - 'Stackery Stack #{StackeryStackTagName} Environment #{StackeryEnvironmentTagName} Function #{ResourceName}'
        - ResourceName: handleGong
    events:
      - http:
          path: /webhook
          method: POST
    environment:
      GITHUB_WEBHOOK_SECRET:
        Ref: StackeryEnvConfiggithubSecretAsString
      SLACK_WEBHOOK_URL:
        Ref: StackeryEnvConfigslackWebhookURLAsString
resources:
  Parameters:
    StackeryStackTagName:
      Type: String
      Description: Stack Name (injected by Stackery at deployment time)
      Default: serverless-gong
    StackeryEnvironmentTagName:
      Type: String
      Description: Environment Name (injected by Stackery at deployment time)
      Default: dev
    StackeryEnvConfiggithubSecretAsString:
      Type: AWS::SSM::Parameter::Value<String>
      Default: /Stackery/Environments/<StackeryEnvId>/Config/githubSecret
    StackeryEnvConfigslackWebhookURLAsString:
      Type: AWS::SSM::Parameter::Value<String>
      Default: /Stackery/Environments/<StackeryEnvId>/Config/slackWebhookURL
  Metadata:
    StackeryEnvConfigParameters:
      StackeryEnvConfiggithubSecretAsString: githubSecret
      StackeryEnvConfigslackWebhookURLAsString: slackWebhookURL
plugins:
  - serverless-cf-vars

Look at all that yaml I didn't write!

And my Dashboard looked like so:

screenshot

Since I had already written a webhook starter function that at the moment logged to the console, it didn’t feel necessary to reinvent the wheel, so I committed in Stackery, then git pulled my code to see the updates, and created a handler.js file in the same directory as the serverless.yml. In it, I pasted the code from my previous webhook function - this was going to be my starting point:

const crypto = require('crypto');
function signRequestBody(key, body) {
  return `sha1=${crypto.createHmac('sha1', key).update(body, 'utf-8').digest('hex')}`;
}
// The webhook handler function
exports.gongHandler = async event => {
  // get the GitHub secret from the environment variables
  const token = process.env.GITHUB_WEBHOOK_SECRET;
  const calculatedSig = signRequestBody(token, event.body);
  let errMsg;
  // get the remaining variables from the GitHub event
  const headers = event.headers;
  const sig = headers['X-Hub-Signature'];
  const githubEvent = headers['X-GitHub-Event'];
  const body = JSON.parse(event.body);
  // this determines username for a push event, but lists the repo owner for other events
  const username = body.pusher ? body.pusher.name : body.repository.owner.login;
  const message = body.pusher ? `${username} pushed this awesomeness/atrocity through (delete as necessary)` : `The repo owner is ${username}.`
  // get repo variables
  const { repository } = body;
  const repo = repository.full_name;
  const url = repository.url;

  // check that a GitHub webhook secret variable exists, if not, return an error
  if (typeof token !== 'string') {
    errMsg = 'Must provide a \'GITHUB_WEBHOOK_SECRET\' env variable';
    return {
      statusCode: 401,
      headers: { 'Content-Type': 'text/plain' },
      body: errMsg,
    };
  }
  // check validity of GitHub token
  if (sig !== calculatedSig) {
    errMsg = 'X-Hub-Signature incorrect. Github webhook token doesn\'t match';
    return {
      statusCode: 401,
      headers: { 'Content-Type': 'text/plain' },
      body: errMsg,
    };
  }

  // print some messages to the CloudWatch console
  console.log('---------------------------------');
  console.log(`\nGithub-Event: "${githubEvent}" on this repo: "${repo}" at the url: ${url}.\n ${message}`);
  console.log('Contents of event.body below:');
  console.log(event.body);
  console.log('---------------------------------');

  // return a 200 response if the GitHub tokens match
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      input: event,
    }),
  };

  return response;
};

At this point, I prepared and did the initial deploy of my stack in order to get the REST API endpoint for the GitHub webhook I needed to set up. Again, the webhook tutorial runs through the deployment and webhook setup process step by step, so I won't repeat it here.

Using the REST API's /webhook URL, I created a webhook in our Stackery CLI repo that was now listening for events, and I confirmed in my CloudWatch logs that it was indeed working.

Bring on the Gong

The next step was to modify the function so it “gonged” our Slack channel when our Stackery CLI repo was updated with a new release. To do that, I had to create a custom Slack app for our channel and set up its incoming webhooks. Luckily, Slack makes that really easy to do, and I just followed the step-by-step instructions in Slack’s webhook API guide to get going.

I set up a #gong-test channel in our Slack for testing so as to not annoy my co-workers with incessant gonging, and copied the URL Slack provided (it should look something like https://hooks.slack.com/services/T00000000/B00000000/12345abcde).

Before editing the Lambda function itself, I needed a way for it to reference that URL as well as my GitHub secret without hard-coding them in my function, which would then be committed to my public repo (because that is a Very Bad Way to handle secrets). This is where Stackery Environments come in handy.

I saved my GitHub secret and Slack URL in my environment config like so:

screenshot

Then I referenced it in my function:

screenshot

I'll add them to my function code in the next step, using process.env.GITHUB_WEBHOOK_SECRET and process.env.SLACK_WEBHOOK_URL as the variables.

Final Ingredient

Since we’re automating our gong, what’s more appropriate than an automated gong? After a somewhat frustrating YouTube search, I found this specimen:

An auto-gong for our automated app? Perfect! Now let's use our function to send that gong to our Slack channel.

Here’s the code for the final gongHandler function in handler.js:

const crypto = require('crypto');
const Slack = require('slack-node');

// validate your payload from GitHub
function signRequestBody(key, body) {
  return `sha1=${crypto.createHmac('sha1', key).update(body, 'utf-8').digest('hex')}`;
}
// webhook handler function
exports.gongHandler = async event => {
  // get the GitHub secret from the environment variables
  const token = process.env.GITHUB_WEBHOOK_SECRET;
  const calculatedSig = signRequestBody(token, event.body);
  let errMsg;
  // get the remaining variables from the GitHub event
  const headers = event.headers;
  const sig = headers['X-Hub-Signature'];
  const githubEvent = headers['X-GitHub-Event'];
  const body = JSON.parse(event.body);
  // get repo variables
  const { repository, release } = body;
  const repo = repository.full_name;
  const url = repository.url;
  // set variables for a release event
  let releaseVersion, releaseUrl, author = null;
  if (githubEvent === 'release') {
    releaseVersion = release.tag_name;
    releaseUrl = release.html_url;
    author = release.author.login;
  }

  // check that a GitHub webhook secret variable exists, if not, return an error
  if (typeof token !== 'string') {
    errMsg = 'Must provide a \'GITHUB_WEBHOOK_SECRET\' env variable';
    return {
      statusCode: 401,
      headers: { 'Content-Type': 'text/plain' },
      body: errMsg,
    };
  }
  // check validity of GitHub token
  if (sig !== calculatedSig) {
    errMsg = 'X-Hub-Signature incorrect. Github webhook token doesn\'t match';
    return {
      statusCode: 401,
      headers: { 'Content-Type': 'text/plain' },
      body: errMsg,
    };
  }

  // if the event is a 'release' event, gong the Slack channel!
  const webhookUri = process.env.SLACK_WEBHOOK_URL;

  const slack = new Slack();
  slack.setWebhook(webhookUri);

  // send slack message
  if (githubEvent === 'release') {
    slack.webhook({
      channel: "#gong-test", // your desired channel here
      username: "gongbot",
      icon_emoji: ":gong:", // because Slack is for emojis
      text: `It's time to celebrate! ${author} pushed release version ${releaseVersion}. See it here: ${releaseUrl}!\n:gong:  https://youtu.be/8nBOF5sJrSE?t=11` // your message
    }, function(err, response) {
      console.log(response);
      if (err) {
        console.log('Something went wrong');
        console.log(err);
      }
    });
  }

  // (optional) print some messages to the CloudWatch console (for testing)
  console.log('---------------------------------');
  console.log(`\nGithub-Event: "${githubEvent}" on this repo: "${repo}" at the url: ${url}.`);
  console.log(event.body);
  console.log('---------------------------------');

  // return a 200 response if the GitHub tokens match
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      input: event,
    }),
  };

  return response;
};

Finally, I needed to add a package.json file so that I could use dependencies. When creating a function using an AWS SAM template, Stackery would do this for you automatically, but in this case I had to create the file and add the following myself:

{
  "private": true,
  "dependencies": {
    "aws-sdk": "~2",
    "slack-node": "0.1.8"
  }
}

I added, committed and pushed the new code, re-deployed my Serverless Framework stack, then added another GitHub webhook to a test repo. I created a GitHub release in my test repo, and waited in anticipation.

Milliseconds later, I hear the familiar click-click-click of Slack…

screenshot

Pretty awesome, if I do say so myself. 🔔

A few notes:

  • I used the slack-node NPM package to make life easier. I could have used the request package or the built-in HTTPS library (and you can if you want to avoid using external dependencies; see the sketch after these notes).
  • The GitHub API is very helpful for figuring out what kind of response to expect from your webhook. That's how I determined the values to set for releaseVersion, releaseUrl, and author.
  • When you console.log() in your serverless function, the results can be seen in the AWS CloudWatch logs. Stackery provides a convenient direct link for each function.

screenshot

  • This serverless application should fit within your AWS free tier, but keep an eye on your logs and billing just in case.
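
Here's what the dependency-free option mentioned in the first note might look like, posting to the Slack incoming webhook with Node's built-in HTTPS module (a sketch, not the code in the repo):

// Post to a Slack incoming webhook without slack-node.
const https = require('https');
const { URL } = require('url');

function postToSlack(webhookUrl, text) {
  const payload = JSON.stringify({ channel: '#gong-test', username: 'gongbot', text });
  const url = new URL(webhookUrl);
  return new Promise((resolve, reject) => {
    const req = https.request(
      { hostname: url.hostname, path: url.pathname, method: 'POST',
        headers: { 'Content-Type': 'application/json', 'Content-Length': Buffer.byteLength(payload) } },
      (res) => resolve(res.statusCode)
    );
    req.on('error', reject);
    req.write(payload);
    req.end();
  });
}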

If you'd like to make your own serverless gong, all of the configuration code is available in my Serverless Gong GitHub repository. Just create a new stack in your Stackery account (you can sign up for a free trial if you don't have one yet), choose Create New Repo as the Repo Source, and select Specify Remote Source to paste in the link to my repo as a template.

Add your GitHub and Slack environment parameters, deploy your stack, and sit back and wait for your Slack to gong!

Webhooks Made Easy with Stackery

Anna Spysz | November 08, 2018

Webhooks are about as close as you can get to the perfect serverless use case. They are event-driven and generally perform just one (stateless) function. So of course we wanted to show how easy it is to implement webhooks using Stackery.

Our newest tutorial, the Serverless Webhooks Tutorial, teaches you to create a GitHub webhook and connect it to a Lambda function through an API Gateway. Or, to put it in simple terms: when your GitHub repository does a thing, your function does another thing. What that second thing does is completely up to you.

Here are some possible use cases of a GitHub webhook:

  • Connect your webhook to the Slack API and have Slack ping your team members when someone has opened a PR
  • Have your function deploy another stack when its master branch is updated
  • Expanding on that, you can even have your function deploy to multiple environments depending on which branch has been updated
  • Write an Alexa Skill that plays a certain song when your repository has been starred - the possibilities are endless!

The best part is, GitHub allows you to be very specific in what events you subscribe to, and you can further narrow down events in your function logic.

So for example, do you want to be notified by text message every time Jim pushes a change to the master branch of your repository, because Jim has been known to push buggy code? You can set that up using webhooks and Stackery, and never have master go down again (sorry, Jim).

Check out the tutorial to see what else you can build!

Building a Single-Page App With Stackery & React

Jun Fritz | October 30, 2018

After completing this tutorial, you’ll have a serverless SPA built using Stackery and React. Stackery will be used to configure, deploy, and host our application which will be built using the React library.

The newest tutorial on our documentation site guides you through the process of building a Serverless Single-Page App using Stackery and React.

You’ll be using Stackery to set up the cloud resources needed to deploy, host, and distribute your single-page application. You’ll configure a Lambda function, an S3 Bucket, and a CloudFront CDN in this tutorial with the goal of keeping this application within AWS Free Tier limits.

By the end of this tutorial, you'll have a fully scalable backend and an organized React front-end to build on as you grow your application. Watch part one of the tutorial below to see what we're building, or follow along with the plain-text version here.

Stay tuned for more serverless tutorials from Stackery!

Building Serverless Applications with AWS Amplify

Danielle Heberling | October 24, 2018

So you want to use AWS Cognito to authenticate users, and you have your user pool, identity pool, and app client all set up in the AWS console. The next question is: how can you connect this with your React-based frontend? While there are a few ways to go about doing this, this post will give you a brief overview of how to do it via a library called AWS-Amplify.

AWS-Amplify is an open source project managed by AWS described as “a declarative JavaScript library for application development using cloud services.” I liked this particular library, because it has a client first approach and abstracts away some of the setup required in the JavaScript SDK.

My favorite features of Amplify are Authentication (via Cognito), API (via API Gateway), and Storage (via S3), but this library has a lot more to offer than just those features. This post will focus on how to authenticate users from a React-based frontend, more specifically user signup with an email address verification step.

The Setup

First you'll need to set up a config file in your /src folder to reference your already created AWS resources (in this case the user pool, identity pool, and client ID). The file will look something like this:

src/config.js

export default {
  cognito: {
    REGION: 'YOUR_COGNITO_REGION',
    USER_POOL_ID: 'YOUR_USER_POOL_ID',
    APP_CLIENT_ID: 'YOUR_APP_CLIENT_ID',
    IDENTITY_POOL_ID: 'YOUR_IDENTITY_POOL_ID'
  }
};

Then in your index.js file, where you set up your React app, you'll need to configure AWS Amplify. It'll look similar to this:

src/index.js

import React from 'react';
import ReactDOM from 'react-dom';
import { BrowserRouter as Router } from 'react-router-dom'; // <Router> below comes from React Router
import Amplify from 'aws-amplify';

import config from './config';
import App from './App';

Amplify.configure({
  Auth: {
    mandatorySignIn: true,
    region: config.cognito.REGION,
    userPoolId: config.cognito.USER_POOL_ID,
    identityPoolId: config.cognito.IDENTITY_POOL_ID,
    userPoolWebClientId: config.cognito.APP_CLIENT_ID
  }
});

ReactDOM.render(
  <Router>
    <App />
  </Router>,
  document.getElementById('root')
);

The mandatorySignIn property is optional, but is a good idea if you are using other AWS resources via Amplify and want to enforce user authentication before accessing those resources.

Also note that for now having a separate config file might seem a bit of overkill, but once you add in multiple resources (e.g. Storage, API, PubSub, etc.) you'll want that extra config file to keep things easy to manage.

Implementation Overview

The signup flow will look like this:

  1. The user submits what they’ll use for login credentials (in this case email and password) via a signup form and a second form to type in a confirmation code will appear.
  2. Behind the scenes the Amplify library will sign the user up in Cognito.
  3. Cognito will send a confirmation code email to the user’s signup email address to verify that the email address is real.
  4. The user will check their email > get the code > type the code into the confirmation form.
  5. On submit, Amplify will send the information to Cognito which then confirms the signup. On successful confirmation, Amplify will sign the user into the application.

Implementation Part 1

First in your signup form component, you’ll need to import Auth from the Amplify library like this:

import { Auth } from 'aws-amplify';

As you create your form, I'd suggest using local component state to store the form data. It'll look like your typical form, with the difference being that you use the Amplify methods in your handleSubmit function whenever the user submits the form. The handleSubmit function will look like this:

 handleSubmit = async event => {
    event.preventDefault();

    try {
      const newUser = await Auth.signUp({
        username: this.state.email,
        password: this.state.password
      });
      this.setState({
        newUser
      });
    } catch (event) {
      if (event.code === 'UsernameExistsException') {
        const tryAgain = await Auth.resendSignUp(this.state.email);
        this.setState({
          newUser: tryAgain
        });
      } else {
        alert(event.message);
      }
    }
  }

On success, Amplify returns a user object after the signUp method is called, so I’ve decided to store this object in my component local state so the component knows which form to render (the signup or the confirmation).

Before we continue let’s go over a quick edge case. So if our user refreshes the page when on the confirmation form and then tries to sign up again with the same email address, they’ll receive an error that the user already exists and will need to signup with a different email address. The catch block demonstrates one way of handling that possibility by resending the signup code to the user if that email is already present in Cognito. This will allow the user to continue using the same email address should they refresh the page or leave the site before entering the confirmation code.

Implementation Part 2

So now the user is looking at the confirmation form and has their confirmation code to type in. We’ll need to render the confirmation form. Similar to the signup form it’ll look like a typical form with the exception being the function that is called whenever the user submits the confirmation form. The handleSubmit function for the confirmation form will look similar to this when using Amplify:

 handleConfirmationSubmit = async event => {
    event.preventDefault();

    try {
      await Auth.confirmSignUp(this.state.email, this.state.confirmationCode);
      await Auth.signIn(this.state.email, this.state.password);

      this.props.isAuthenticated(true);
      this.props.history.push("/");
    } catch (event) {
      alert(event.message);
    }
  }

So it takes in the form data, uses Amplify to confirm the user's email address via the confirmation code, and signs in the user if successful. You can then verify whether a user is signed in via props at the route level if you'd like. In this case, I arbitrarily named it isAuthenticated and redirected the user to the root path.
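
If you want the app to remember that the user is signed in after a page refresh, Amplify can also check for an existing session on load. A small sketch (not part of the original flow above):

// Check for an existing Cognito session when the app mounts.
import { Auth } from 'aws-amplify';

async function checkAuth() {
  try {
    await Auth.currentSession(); // resolves if a valid session exists
    return true;
  } catch (err) {
    return false;                // rejects with 'No current user' if nobody is signed in
  }
}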

The complete docs for using the Auth feature of Amplify can be found here. We've only scratched the surface in this post, so go forth and explore all of the different features that Amplify has to offer. I've found it has a very nice declarative syntax and is very readable for folks who are new to a codebase. For building further on your React-based serverless applications, I highly recommend Stackery for managing all of your serverless infrastructure, backed up by seamless, git-based version control.

Stackery's Quickstart Just Got Quicker—and More Useful

Anna Spysz | October 15, 2018

If you’ve been over to our documentation site lately, you may have noticed some changes. We’ve got a new look and some new tutorials, but the latest upgrade is our new Quickstart tutorial.

While the first version of our Quickstart just got you up and running with Stackery, version 2.0 also has you deploying a static HTML portfolio page to an API endpoint:

Oooh, fancy!

Once you’ve followed the tutorial and deployed your static site, you can customize the HTML with your own information and links to your projects. You can then follow our serverless contact form tutorial to give the contact form on your site functionality as well.

Want a preview? This YouTube video walks you through the entire Quickstart tutorial:

And be sure to visit our docs site regularly, as we have several new tutorials in the works. Stay tuned for a React application with a serverless backend - coming soon!

Deploy GraphQL APIs with Stackery

Sam Goldstein | October 03, 2018

It’s been a busy month in Stackery engineering. Here’s a quick recap of what’s new in the product this week.

You can now use Stackery to configure and provision AWS AppSync GraphQL APIs. AppSync is a serverless, pay-per-invocation service similar to API Gateway, but for GraphQL! GraphQL resolvers can be connected to backend data sources like DynamoDB tables, Lambda functions, or HTTP proxies. You can read more about using Stackery with GraphQL in the Stackery docs.
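
As a rough illustration (not taken from the Stackery docs), a Lambda-backed resolver is just a function that receives whatever the request mapping template forwards; assuming the template passes along the field name and arguments, it might look like this:

// Hypothetical Lambda data source for an AppSync resolver. Assumes the request
// mapping template forwards { field, arguments } to the function.
exports.handler = async (event) => {
  switch (event.field) {
    case 'getItem':
      // ...fetch the item from DynamoDB or another backend here...
      return { id: event.arguments.id, name: 'example item' };
    case 'listItems':
      return [{ id: '1', name: 'example item' }];
    default:
      throw new Error(`Unknown field "${event.field}"`);
  }
};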

Trigger Lambda Function on Deploy

Does your deployment process involve multiple commands that need to be run in a certain order? Stackery now provides the ability to mark any function as “Trigger on First Deploy” or “Trigger on Every Deploy”, which provides a clean mechanism to handle database migrations, ship single-page apps, and handle custom deploy logic across all your environments. To make this work, Stackery sets up a CloudFormation Custom Resource in your project's SAM template which is used to invoke the function when the stack is deployed. Read more in the Stackery Function Docs.
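
For context, a function invoked through a CloudFormation Custom Resource has to report success or failure back to CloudFormation by PUTting a response to the pre-signed URL in the event. The sketch below shows that handshake in general terms; it is illustrative, not Stackery's actual implementation:

// Sketch of a deploy-triggered function: CloudFormation invokes it as a Custom
// Resource and waits for a response PUT to event.ResponseURL.
const https = require('https');
const { URL } = require('url');

exports.handler = async (event, context) => {
  try {
    if (event.RequestType === 'Create' || event.RequestType === 'Update') {
      // ...run migrations, upload the single-page app, or other deploy logic...
    }
    await respond(event, context, 'SUCCESS');
  } catch (err) {
    await respond(event, context, 'FAILED', err.message);
  }
};

function respond(event, context, status, reason) {
  const body = JSON.stringify({
    Status: status,
    Reason: reason || 'See CloudWatch logs',
    PhysicalResourceId: context.logStreamName,
    StackId: event.StackId,
    RequestId: event.RequestId,
    LogicalResourceId: event.LogicalResourceId,
  });
  const url = new URL(event.ResponseURL);
  return new Promise((resolve, reject) => {
    const req = https.request(
      { hostname: url.hostname, path: url.pathname + url.search, method: 'PUT',
        headers: { 'Content-Type': '', 'Content-Length': Buffer.byteLength(body) } },
      resolve
    );
    req.on('error', reject);
    req.write(body);
    req.end();
  });
}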

Reference Existing Cloud Resources

Teams are often deploying serverless stacks into existing cloud infrastructure. What happens when your function needs to subscribe to an existing DynamoDB stream or be placed in an existing VPC? Stackery provides the ability to replace resources in a stack with a pointer to an already provisioned resource. This can be specified per environment, which enables you to provision mock resources in dev/test environments but reference central infrastructure in production. Check out the “Use Existing” flag on resources like DynamoDB tables or virtual networks.

GitHub and GitLab bulk project import

No one wants to set up a bunch of AWS Serverless Application Model (SAM) projects with Stackery one by one, so we built a one-click importer which locates all your projects with a valid SAM template file (template.yaml) and sets them up to deploy and edit with Stackery. It works for both GitHub and GitLab, and you can find it on the Stackery Dashboard homepage at app.stackery.io.

Get the Serverless Development Toolkit for Teams

Get it now and get started for free. Contact one of our product experts to start building amazing serverless applications today.
