Stacks on Stacks

The Serverless Ecosystem Blog by Stackery.

Cloud-Side Development For All with Stackery's Free Tier

Nate Taggart | February 26, 2019

New Pricing & Free Tier

Today, I’m thrilled to announce our new free tier and pricing model for teams adopting modern serverless application development. I believe cloud-side development is the future and I want everyone to experience it at their own pace. We also now offer a model that can scale with either teams or workflows depending on how aggressively you decide to adopt cloud-side development.

How Software Development in the Cloud Has Changed

While we’ve been at this for a while, it’s worth reviewing where development workflows came from and what’s changing.

When “the cloud” first emerged, the prevailing development pattern was to spin up a generic (EC2) server and install software on it to customize its behavior. These customized behaviors might include event streams, data stores, database tables, APIs, or whatever else an application requires. That software, however, is now offered by the cloud providers as a pay-per-use capability that can be configured and requested with infrastructure-as-code (IaC).

Cloud providers have released hundreds of new services which, at their core, are purpose-built, use-case-driven, software-managed services. It’s no longer necessary to spin up a cluster of servers to stand up Kafka if you need a message streaming service, because it’s much faster and arguably cheaper (in terms of people, overhead and maintenance) to simply pull a streaming service like Kinesis or a managed Kafka service off the shelf of the Amazon Web Services menu.

You Can’t Replicate AWS on Your Laptop

The rise of these managed cloud services has fundamentally changed the model for modern application development.

Of course, the core advantage of this model is that it has become easy to architect at-scale systems in the cloud with very little operational overhead. The consequence of this change, however, is that the software development lifecycle has fundamentally changed. No longer can your laptop act as a development server (localhost). Localhost was a great tool for replicating and testing server-like behavior in local development when the fundamental infrastructure underpinning everything was a server. But now, rather than raw servers, applications are composed of a collection of managed cloud services. Localhost has become a poor representation of the production environment, as it is impossible to replicate all of the functionality of AWS locally on a laptop.

This is driving a shift toward cloud-side development. This doesn’t mean you need to write code through a web browser; your favorite IDE will still work for your application code. But to test and iterate on the full application stack through the development cycle, you must now stand up development instances of the managed services you’re using to compose an application. Crucially, cloud-side development is about service composition: composing your application architecture from off-the-shelf services to accelerate at-scale application development and rapidly iterating on a cloud-native implementation of your application.

What does this tell us? Cloud-side development isn’t just the future, it’s now and it’s big. How big? At re:Invent 2018, AWS executives proclaimed hundreds of thousands of developers are actively developing with AWS’s menu of managed cloud services and Lambda. That’s big.

What tooling does cloud-side development require?

Here's the good news: your IDE, code repository, and CI/CD systems don't change. What does change? How you manage stacks in the cloud and how you build and iterate on stacks with your team.

Stackery now offers easy-to-consume tooling and environment management capabilities to every organization trying to deliver faster. To build Stackery, we've thought about, experienced, and built safeguards around the ways teams can get into trouble composing applications out of managed cloud services, all while keeping every output in standard CloudFormation in case you decide to go back to doing things the hard way.

Managing Active Stacks in the Cloud

Cloud-side development tools must automate and accelerate the iterative nature of development work on top of managed cloud services. This includes rapidly configuring, deploying, sandboxing, namespacing, and managing individual instances of cloud services for each developer involved. At Stackery, we call these active stacks. Cloud-side tools will include automation around packaging and building your code, version-controlling your IaC, managing developer environments, instrumentation, governance, and automating the release process across multiple environments and cloud accounts.

Building Stacks

Until recently, cloud-side development of complex applications using managed cloud services was limited to engineers dedicated to cloud innovation (and YAML). That human investment is still useful but should be applied to setting patterns instead of troubleshooting misplaced characters. Infrastructure as code is the new assembly language: it is machine-readable and unforgiving, which means tooling needs to help developers do things like attach a handler to a resource in seconds while properly setting all the correct permissions, and more. Speaking of resources…

New! Amazon Aurora Serverless and Amazon Cognito

We owe a lot of kudos to our earliest customers who pushed us to add the most popular services needed to visually compose modern applications. Most recently, we added Amazon Aurora Serverless (database) and Amazon Cognito (user authentication). We've also just added the "Anything Resource," which enables our users to add any AWS CloudFormation resource type beyond the (now 20!) types currently available in the Stackery resource palette. We like to say it takes a serverless team to keep up with a serverless team.

The Stackery Developer & Teams Plans

And now, with the introduction of our free Developer plan, we're excited to unleash the possibilities of cloud-side development for everyone who wants to experience the power of the cloud. The Stackery Developer plan includes six free active stacks, which is plenty to get a side project or proof of concept up and running. Once you've used your first six stacks, or if you want more support or collaborators in the account, additional active stacks can be added for $10 per stack per month. More details here.

Bring your own IDE, Git repository (blank or with existing AWS SAM or serverless.yml files), AWS account, and CI/CD system if you like: Stackery will accelerate you into cloud-side development. It's time to go build.


Further Reading On Cloud-Side Development:


The Anatomy of a Serverless App

We call an application deployed into a cloud service provider an active stack. This stack has three primary components: the functions where the business logic resides, the managed cloud services that serve as the building blocks of the application, and the environmental elements that define the specific dependencies and credentials for a particular instance of the first two components. This anatomy of a serverless application post goes into full detail on what serverless teams will build and manage.

Our friends at Lumigo on the need to test cloud-side (and some slower and manual non-Stackery methods for doing so).

Corey Quinn of Last Week in AWS (sign up for the snark, stay for the news, pay for the bill reduction) sparked this conversation on Twitter.


Likewise, this “localhost is dead to me” rant by Matt Weagle, organizer of the Seattle Serverless Days, won him a shiny new Stackery account. This thread also garnered some helpful nuance and commentary from Amazon engineers James Hood, Preston Tamkin, and iRobot’s Ben Kehoe.


Lambda@Edge: Why Less is More

Nuatu Tseggai | February 21, 2019

Lambda@Edge is a compute service that allows you to write JavaScript code that executes in any of the 150+ AWS edge locations making up the Amazon CloudFront content delivery network (CDN) service.

In this post, I'll provide some background on CDN technologies and build out an application stack that serves country-specific content depending on where the user request originates. The stack utilizes a Lambda@Edge function which checks the country code of an HTTP request and modifies the URI to point to a different index.html object within an S3 bucket.

TL;DR: Less time, fewer resources, less effort

  • CDNs are ubiquitous. Modern websites and applications make extensive use of CDN technologies to increase speed and reliability.
  • Lambda@Edge has some design limitations: Node.js only, must be deployed through us-east-1, memory-size limits that differ between event types, etc.

Read on for a working example alongside tips and outside resources to inform you of key design considerations as you evaluate Lambda@Edge.

The best of both worlds: Lambda + CloudFront

  • Fully managed: no servers to manage and you never have to pay for idle
  • Reliable: built-in availability and fault-tolerance
  • Low latency: a global network of 160+ Points of Presence in 65 cities across 29 countries (as of early 2019)

A Use Case

You have a website accessed by users from around the world. For users in the United States, you want CloudFront to serve a website with US market-specific information. The same is true for users in Australia, Brazil, Europe, or Singapore and each of their respective markets. For users in any country besides those mentioned above, you want CloudFront to serve a default website.

Stackery will be used to design, deploy, and operate this stack; but the Infrastructure as Code and Lambda@Edge concepts are valid with or without Stackery.

Check out this link to explore many of the other use cases such as:

  • A/B testing
  • User authentication and authorization
  • User prioritization
  • User tracking and analytics
  • Website security and privacy
  • Dynamic web application at the edge
  • Search engine optimization (SEO)
  • Intelligently route across origins and data centers
  • Bot mitigation at the edge
  • Improved user experience (via personalized content)
  • Real-time image transformation

Background: Need for Speed

Traffic on the modern Internet has been growing at a breakneck rate over the last two decades. This growth is being fueled by nearly 4 billion humans with an Internet connection. It's estimated that more than half of the world's traffic now comes from mobile phones and that video streaming accounts for 57.69% of global online data traffic. Netflix alone is responsible for 14.97% of the total downstream volume of traffic across the entire internet! The rest comes from web browsing, gaming, file sharing, connected devices (cars, watches, speakers, TVs), industrial IoT, and back-end service-to-service communications.

To keep pace with this rate of growth, website owners and Internet providers have turned to CDN technologies to cache web content on geographically dispersed servers at edge locations around the world. Generally speaking, these CDNs serve HTTP requests by accepting the connection at an edge location in close proximity to the user (latency-wise), organizing the request into phases, and caching the response content so that the aggregate user experience is fast, secure, and reliable.

When done correctly, the result is a win-win: the end user gets faster load times, and there's a lighter load on both the origin server and the backhaul portion of the major telecommunications networks (i.e. the intermediate links between the core network, backbone network, and subnetworks at the edge of the network).

For more background, check out the What is a CDN page from Cloudflare and the Amazon CloudFront Key Features page from AWS.

Lambda@Edge

Lambda@Edge is a relatively new feature (circa 2017) of CloudFront that enables Lambda functions to be triggered by any of the following four CDN events.

Viewer Request

Edge Function is invoked when the CDN receives a request from an end user. This occurs before the CDN checks if the requested data is in its cache.

Origin Request

Edge Function is invoked only when the CDN forwards a request to your origin. If the requested data is in the CDN cache, the Edge Function assigned to this event does not execute.

Origin Response

Edge Function is invoked when the CDN receives a response from your origin. This occurs before the CDN caches the origin’s response data. An Edge Function assigned to this event is triggered even if the origin returns an error.

Viewer Response

Edge Function is invoked when the CDN returns the requested data to the end user. An Edge Function assigned to this event is triggered regardless of whether the data is already present in the CDN’s cache.

When deciding which CDN event should trigger your Edge Function, consider these questions from the AWS Developer Guide, as well as additional clarifying questions from this helpful AWS blog post in the “Choose the Right Trigger” section.

Sample Source

The source code for this project is available from my GitHub.

Template Generation

I used the Stackery editor to lay out the components and generate a template:

The template is available in the Git repo as template.yaml.

This application stack is pretty straightforward: a CDN is configured to serve a default index.html from an S3 bucket, and the CDN is also configured to trigger a Lambda@Edge function upon any Origin Request event. Origin Requests are only made when there is a cache miss, but in the context of this application stack, cache misses will be rare. The default TTL for files in CloudFront is 24 hours; depending on your needs, you can reduce the duration to serve dynamic content or increase it to get better performance. The latter also lowers cost, because your file is more likely to be served from an edge cache, reducing load on your origin.

Pay special attention to lines 11-12 within the infrastructure as code template. These lines configure the CDN to cache based on the CloudFront-Viewer-Country header which is added by CloudFront after the viewer request event.

Also note line 23, which specifies "Price Class 200" for the CDN, enabling content to be delivered from all AWS edge locations except South America. Price Class All is the most expensive and enables content to be delivered from all edge locations (this is the default if no other price class is specified). Price Class 100 is the cheapest and only delivers content from the United States, Canada, and Europe. For more information on pricing, check out this link.

Lambda@Edge Function

The Lambda@Edge function checks whether the country code of the request is AU, BR, EU, SG, or US. If it is, the URI of the HTTP request is modified to point to a country-specific index.html object (such as au/index.html or us/index.html) within the S3 bucket. The default index.html object is served from the S3 bucket if the country code is NOT one of the above five.

Here’s the complete function code: index.js

'use strict'

// Origin Request handler: runs on cache misses, before CloudFront forwards
// the request to the S3 origin
exports.handler = async (event) => {
    const request = event.Records[0].cf.request
    const headers = request.headers

    console.log(JSON.stringify(request))
    console.log(JSON.stringify(request.uri))

    // Path prefixes for the country-specific index.html objects in the bucket
    const auPath = '/au'
    const brPath = '/br'
    const euPath = '/eu'
    const sgPath = '/sg'
    const usPath = '/us'

    // CloudFront adds this header after the viewer request event because the
    // distribution is configured to cache based on it
    if (headers['cloudfront-viewer-country']) {
        const countryCode = headers['cloudfront-viewer-country'][0].value
        if (countryCode === 'AU') {
          request.uri = auPath + request.uri
        } else if (countryCode === 'BR') {
          request.uri = brPath + request.uri
        } else if (countryCode === 'EU') {
          request.uri = euPath + request.uri
        } else if (countryCode === 'SG') {
          request.uri = sgPath + request.uri
        } else if (countryCode === 'US') {
          request.uri = usPath + request.uri
        }
    }
    console.log(`Request uri set to "${request.uri}"`)

    // Returning the (possibly modified) request tells CloudFront which
    // object to fetch from the origin
    return request
}

Deployment

Of course, Stackery makes it simple to deploy this application into AWS, but it should be pretty easy to give the template directly to CloudFormation. You may want to go through and whack the parameters like ‘StackTagName’ that are added by the Stackery runtime.

Once the deployment is complete, the provisioned CDN distribution will have a DNS address. I deployed this application to several different environments, one of which I have defined as staging. Here’s the DNS address of that distribution: https://d315q2a48nys0i.cloudfront.net/

Lastly, go to the newly created S3 bucket and add the default index.html file to the root of the bucket. Then create the following five "folders" in the bucket: au, br, eu, sg, us. I put folders in quotes because they aren't technically folders, but the S3 UI refers to them as folders and allows them to be created as such. Once each folder is created, add the respective index.html that I have saved in the /html directory of this GitHub project (i.e. for the au folder, copy over the index.html saved at /html/au/index.html). The AWS CLI is convenient for this type of copying/syncing; check out this link for tips on managing S3 buckets and objects from the command line.
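
If you'd rather script those uploads with the Node.js SDK than the CLI, a sketch might look like this (the bucket name is illustrative; use the one from your deployed stack):

const fs = require('fs')
const AWS = require('aws-sdk')

const s3 = new AWS.S3()
const BUCKET = 'my-cdn-origin-bucket' // illustrative; use your stack's bucket

// Uploads html/<country>/index.html to <country>/index.html in the bucket
async function uploadCountryPages () {
    for (const country of ['au', 'br', 'eu', 'sg', 'us']) {
        await s3.putObject({
            Bucket: BUCKET,
            Key: `${country}/index.html`,
            Body: fs.readFileSync(`html/${country}/index.html`),
            ContentType: 'text/html'
        }).promise()
    }
}

uploadCountryPages().catch(console.error)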

If I hit the DNS address of the CDN distribution from Portland, Oregon, I'm served the US-specific index.html.

See How it Appears to the Rest of the World

GeoPeeker is a pretty nifty tool that allows you to see how a site appears to the rest of the world. Just go to this link and GeoPeeker will show the site I've deployed as it appears to users in Singapore, Brazil, Virginia, California, Ireland, and Australia.

Conclusion

I encourage you to explore the shape of the request event object as well as the response event object, both of which can be found at this link. At one point, prior to finding this page, I was getting my wires crossed about which values are available on each object. Once I found it, I was able to get back on track instantly and home in on the URI value that I wanted to modify.

An alternative to changing the URI is to change the host. In that scenario, I could have created a separate S3 bucket for the default site and separate S3 buckets holding the index.html for each of the five countries, then, upon each Origin Request, modified the host instead of the URI when I found a match. Perhaps I'll do that in a follow-on post to show the difference in the resulting Infrastructure as Code template and Lambda@Edge function.

The use case I covered is relatively approachable. More advanced use cases, such as securing sites and applications from bots or DDoS attacks, would be really interesting and fun to implement using Lambda@Edge. It would be great to see more blog posts and/or reference implementations based on reproducible Infrastructure as Code samples showing Lambda@Edge solutions that target A/B testing, analytics, and user authentication and authorization. Let me know on Twitter or in the comments which use cases you're interested in, and I'll work to put them together or coordinate with various serverless experts to bring the solutions to life.

Using Curiosity To Find Your Best Self

Farrah Campbell | February 13, 2019

As Stackery's Ecosystems Manager, a huge part of my work revolves around meeting new people and developing relationships with them for the good of our company. I love this work not only because I'm passionate about people and serverless, but also because it keeps my curiosity muscle strong. To be good at my job, I need to do right by my personal connection to curiosity and learning, but sometimes I get off-track.

Did you know that the average person spends just 20% of their day engaged in meaningful activities that make them feel fulfilled and joyful? The rest of our day is spent sleeping, working, doing chores, and mindless decompression activities like watching TV. If you're not mindful, you could even lose some of that precious 20% by letting unfulfilling activities consume more of your day. For example, we spend much more time working than reading to our children. We spend more time doing mindless activities than we do learning or growing. For those who are juggling higher education and a full-time job, have more than one employer, or are the caretaker of a sick family member, this time for self-motivated learning becomes even rarer and more precious.

Like many, I recently found myself in this very situation. Even as a person who constantly seeks self-improvement, I was beginning to fall into old habits, spending too much time on things that didn’t bring me joy. I could feel resentment flooding back into my life and my shield against life’s stressors was thinning. I wasn’t being true to myself and was no longer focused on personal growth. I knew something needed to change but was struggling to identify what that was.

The Talk That Changed Everything

I reached out to Andrew Clay Shafer (someone I consider a mentor) and asked what talk he was most proud of. He immediately mentioned his keynote at O'Reilly's Velocity NYC 2013 called There is No Talent Shortage. It's largely about company culture but many aspects can apply to your personal life as well. It touches on the practices of purpose-driven organizations and was just what I needed to hear. My biggest takeaway was the importance of finding a way to be better each day and, crucially, that talent attracts talent.

As Andrew says, “success isn’t about finding the right people, it’s about being the right people.” What can you do, each day, that will lead to new skills, new understanding or other forms of personal growth? How much of your day will you spend on things that truly bring you joy or fulfillment? Continued learning and growth are competitive advantages in the world and you need to seize them.

To change yourself, you have to first figure out what moves your soul. We tend to focus on things we think make us happy, without stepping back and figuring out what happiness really means to us. This can be really difficult when we're balancing children, our work commute, putting food on the table, and nurturing others. But it's extremely important in the long run not to beat yourself up about personal growth; that kind of judgment is the last thing you need on top of everything else! If you're curious about a subject or an area of your life to improve, that's enough of a seed to start.

Using Curiosity To Grow

My research into personal growth and finding my authentic self led to a life-altering article by Todd Kashdan called The Power of Curiosity. I want to share a little bit of what I learned from this article and how you can apply it to your own life:

Curiosity creates openness to unfamiliar experiences which can lead to discovery and joy. Perhaps more approachable is the fact that a curious mind can be nurtured and developed. Like any skill, the more you use it, the better you become at it. Soon enough, that skill becomes part of who you are.

Studies by Gallup show that employee engagement comes mostly from relationships and connecting with a higher purpose. People are born wanting to think, learn, and grow but oftentimes responsibilities get in the way. Listen to urges to explore: as our curiosity deepens, more opportunities emerge.

Curiosity also helps us meet new people and develop interpersonal relationships. If you are filled with earnest questions, you’ll listen more and show genuine interest in others. The best part is that the people you meet have a basic level of wanting to be heard. When they sense an authentic level of caring, they will respond by opening up and sharing even more. This leads to tighter bonds and lasting relationships in work and at home.

How To Practice Curiosity

You can invite curiosity into your life by practicing, nurturing, and cultivating it. The first step is building knowledge; seek to learn one new thing each day and that knowledge will feed on itself. Essentially, the more you learn, the more you will want to know.

Curiosity can also enter your life when you become more playful and learn to thrive on uncertainty. Think about how boring life would be if we already knew exactly what was going to happen. What if you knew the results of every football game before you watched it? Would you even watch it? What if you knew for certain what grade you would get on a final exam? Would you need to study? The uncertainty is actually what drives us most of the time, even if we are not aware of it.

However, living a curious life is not always easy or free of risk. Those of us who have a predictable, guaranteed amount of free time (and are adequately rested, hydrated, and energized) are probably in the minority; the rest of us should be patient with ourselves. Just start by doing your best to locate sources of freedom in between responsibilities: call into a motivating webinar on your commute home to decompress, or subscribe to a new, interesting podcast and listen to it while you clean the house. Even just taking a short walk to clear your head and map out creative time down the line can help. Anything to satiate your interest and invest in yourself. This is what I did when I sought out Andrew's advice and the aforementioned article, and both were huge stress-relievers when I needed them most.

Human beings have been makers and community members since the beginning of time. I think we often lose track of this in the throes of modern life, which leads us to a cog-in-the-machine mentality. This is one of the fantastic things about working at Stackery: I'm surrounded by a team that not only works together on tough problems every day but is actually building a solution to help other engineers do the same! It's very inspiring.

Take small risks, try new things, try looking at an old “truth” with fresh eyes, and see where that takes you. I am happy to be doing that at Stackery and look forward to every adventure along the way.

The Anatomy of a Serverless App

Toby Fee | February 11, 2019

Serverless has, for the last year or so, felt like an easy term to define: code run in a highly managed environment with (almost) no configuration of the underlying compute layer done by your team. Fair enough, but what is a serverless application? A Lambda isn't an app by itself; heck, it can't even communicate with the world outside of Amazon Web Services (AWS) by itself, so there must be more to a serverless app than that. Let's explore a serverless app's anatomy: the features that should be shared by all the serverless apps you'll build.

Serverless applications have three components:

  • Business logic: the function (Lambda) that defines the business logic
  • Building blocks: resources such as databases, API gateways, authentication services, IoT, machine learning, container tasks, and other cloud services that support a function
  • Workflow phase dependencies: environment configuration and secrets that respectively define and enable access to the dependencies unique to each phase of the development workflow

Taken together, these three components create a single ‘Active Stack’ when running within an AWS region.

Review: What’s a Lambda?

I could write this piece in a generic tone and call Lambdas 'serverless functions'; after all, both Microsoft and Google have similar offerings. But Lambdas have fast become the dominant form of serverless functions, with features like Lambda Layers showing how they are maturing into an offering both the weekend tinkerer and the enterprise team can use effectively.

But what are Lambdas again? They're blobs of code that AWS will run for you in a virtualized environment without you having to do any configuration. It might make more sense to describe how Lambdas get used (a minimal handler sketch follows the list):

  • You write a blob of Node, Ruby, or several other languages, all in the general mode of ‘take in a triggering event, kick off whatever side effects you need to, then return something’
  • Upload your code blob to AWS Lambda
  • Send your Lambda requests
  • AWS starts up your code in a virtual environment, complete with whatever software packages you required
  • Look at the response!
  • Send your Lambda 10,000 requests in a minute
  • AWS starts up a bunch of instances of your code, each one handling several requests
  • Look at all these responses!
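
As a concrete sketch, here's what a minimal handler in that 'take in a triggering event, kick off side effects, return something' mold might look like (the event shape and response format here are illustrative):

'use strict'

// A minimal handler: read the triggering event, kick off any side effects,
// return something
exports.handler = async (event) => {
    const name = event.name || 'world'
    // ...side effects (database writes, queue messages, etc.) go here...
    return {
        statusCode: 200,
        body: JSON.stringify({ message: `Hello, ${name}!` })
    }
}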

Are Lambdas like containers? Sort of, in that you don’t manage storage or the file system directly, it should all be set in configuration. But you don’t manage Lambda startup, responses, or routing directly; you leave all of that to AWS.

Note that Lambdas do not handle any part of their communication with the outside world. They can be triggered by events from other AWS services but not by direct HTTP requests; for that, a Lambda needs to be connected to an API gateway, or more indirectly to another AWS service (e.g. a Lambda can respond to events off an S3 bucket, which could be HTTP uploads).

What supports our Lambdas?

We've already implied the need for at least one 'service' outside of just a Lambda: an API gateway. But that's not all we need. With a virtualized operating system layer, we can't store anything on our Lambdas between runs, so we need some kind of storage. Lambdas shouldn't be used for extremely long-running tasks, so we need a service for that. Finally, it's possible that we want to make decisions about which Lambda should respond based on the type of request, so we might need to connect Lambdas to other Lambdas.

In general, we could say that every function will have a resource architecture around it that lets it operate like a fully featured application. The capabilities and palette of offerings of this resource architecture continue to expand rapidly, both in terms of the breadth of offerings for IoT, AI, machine learning, security, databases, containers, and more, as well as services to improve performance, connectivity, and cost profiles.

With all these necessary pieces to make a Lambda do any actual work, AWS has a service that lets us treat all these pieces as a unit. CloudFormation can treat a complete serverless 'stack' as a configuration file that can be moved and deployed in different environments. With Stackery, you can build stacks on an easy graphical canvas, and the files it produces are the same YAML that CloudFormation uses natively!

Secrets

Lambdas are blobs of code that should be managed through normal code-sharing platforms like GitHub. Two problems present themselves right away: How do we tell our Lambda where it's running, and how do we give it the secrets that it needs to interact with other services?

The most common example of this will be accessing a database.

Note: If we’re using an AWS-hosted serverless database like DynamoDB the following steps should not be necessary since we can handle giving permissions to the Lambda for our DB within the Lambda’s settings. Using Stackery to connect Lambdas to AWS databases makes this part as easy as drawing a line!

We need secrets to authenticate to our DB, but we also need our Lambda to know whether it’s running on staging so that it doesn’t try to update the production database during our test runs.
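
As a sketch of what that looks like (the variable names here are illustrative), the function can read its stage and connection details from environment variables that are set differently for each environment at deploy time:

'use strict'

// STAGE and DB_HOST are illustrative names for per-environment configuration
const stage = process.env.STAGE || 'dev'
const dbHost = process.env.DB_HOST

exports.handler = async (event) => {
    // Make it obvious which environment this invocation targets
    if (stage !== 'production') {
        console.log(`Running against ${dbHost} in ${stage}`)
    }
    // ...connect to dbHost using credentials scoped to this stage...
    return { statusCode: 200, body: JSON.stringify({ stage }) }
}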

So we can identify three key sections of our serverless app: our function, its resources, and the secrets/configuration that make up its environment.

The Wider World

In a highly virtualized environment, it's counter-intuitive to ask 'where is my code running?' While you can't put a pin in a map, you must spread your app across AWS availability zones to ensure true reliability. We should therefore draw a box around our 'environment' with our stack, its configuration, and secrets. This entire system can exist across multiple zones or even in services other than AWS (if you really enjoy the headache of writing code and config for multiple clouds).

How many ‘Active Stacks’ is your team running?

An active stack is a complete set of functions, resources, and environment. If you have the same function code and resources running on three environments (e.g. dev, test, and prod) you have three active stacks. If you take your production stack and distribute it to three different AWS regions, you again have three active stacks.

How this anatomy can help your team

Identifying unifying features is not, in itself, useful for your team, but it is an essential step in planning. We cannot adopt a serverless model for part of our architecture without a plan to build and manage all these features. You must have:

  • Programmers to write your functions and manage their source code
  • Cloud professionals to assign and control the resources those functions need
  • Operations and security to deploy these stacks in the right environments

You also need a plan for how these people will interact and coordinate on releases, updates, and emergencies (I won’t say outages since spreading your app across availability zones should make that vanishingly rare).

Later articles will use this understanding of the essential parts of a serverless app to explore the key decisions you must make as you plan your app.

How Stackery Can Help

Now that we've defined these three basic structures, it would be nice if they were truly modular within AWS. While Lambda code can easily be re-used and deployed in different contexts, it's more difficult to use a set of resources or an environment like a module that you can move about with ease.

Stackery makes this extremely easy: you can mix and match ‘stacks’ and their environments, and easily define complete applications and re-deploy them in different AWS regions.

A Greater Gatsby: Modern, Static-Site Generation

Toby Fee | February 04, 2019

Gatsby is currently generating a ton of buzz as the new hot thing for generating static sites. This has led to a number of frequently asked questions like:

  • A static…what now?
  • How is GraphQL involved? Do I need to set up a GraphQL server?
  • What if I'm not a great React developer, really more of a bad React developer?
  • Does this mean our copywriter won't have to push to a GitHub repo to add a new page?

I had a few of these questions myself and decided to get firsthand experience by creating a few sites using Stackery and Gatsby. While I am good with JavaScript and the general mechanics of websites, I am neither a React nor a GraphQL expert. I've used both, but as I enter my mid-30s I find my mind is like a closet in Manhattan: it has room for only one season's worth of interests.

What does Gatsby do exactly?

Gatsby is a static site generator: it builds all the HTML you need from data you create, data that is presumably simpler to manage than straight HTML.

I don’t use a static site generator now. Should I?

A static site generator probably doesn’t make sense if your site is truly static (e.g. a marketing site that gets updated twice a year.) Static site generators make more sense if you are building something that updates every 1-10 days, like a professional blog.

If you’ve ever used Jekyll, another static site generator, the basic concepts are similar: turn a folder of plain markdown files, or the like, into a complete site with just a bit of config.

But the limitations of Jekyll and other older generators are myriad:

  • Complex and endemic config formatting.
  • The lack of some particular feature that you need and they don't have (it generates blog posts just great and interprets your 'author' field, but now you want posts with two authors, for instance).
  • There's no great way to get new files from all team members. Teams using Jekyll often end up using a GitHub repository to store their source text files (again, Markdown or what have you), meaning your marketing copywriter needs to learn git to add a new blog post, or, more often, emails a developer an attached file for her to add.
  • You can’t make a Jekyll site in React.

Gatsby offers significant improvements in these areas since, along with bare-text files, Gatsby can import data from almost any data source and generate a clean React site based on the data it queries.

GraphQL is even cooler than React, but I have concerns.

GraphQL offers a tantalizing dream: a system that can connect multiple data sources with a simple query language. Suddenly your hip NoSQL databases and creaky SQL databases can all be displayed through a single app.

But building and running GraphQL are not always simple, and fundamentally this new engine requires learning a new query structure. It’s cool to see that Gatsby is ‘powered by GraphQL’ but doesn’t that mean there will be some massive friction in getting this deployed?

But remember: Gatsby generates a static site. It uses GraphQL to access databases and generate your site, but once that process is over, GraphQL doesn't need to be running for your site to work.
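
For a sense of what that looks like in practice, here's roughly the shape of a Gatsby page query, sketched from Gatsby's standard markdown setup (the field names will vary with your data sources):

// src/pages/index.js: a page whose data is gathered at build time
import React from 'react'
import { graphql } from 'gatsby'

export default ({ data }) => (
  <ul>
    {data.allMarkdownRemark.edges.map(({ node }) => (
      <li key={node.id}>{node.frontmatter.title}</li>
    ))}
  </ul>
)

// Gatsby runs this query during the build; GraphQL isn't running
// on the finished site
export const query = graphql`
  query {
    allMarkdownRemark {
      edges {
        node {
          id
          frontmatter {
            title
          }
        }
      }
    }
  }
`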

But isn’t there still a learning curve?

Yeah, that part is inescapable. While many common setups have templates to start with, your team's weird content database from 1997 is going to require some custom config. But there's no way this could be harder than importing the data yourself, and once it's configured you only need to update your database and re-run the Gatsby build.

What if I’m not a great React Developer?

If it took a React expert to use Gatsby, it wouldn’t really have a market and that’s clearly not the case. Ultimately, if we could make awesome React sites from scratch, what use would there really be for Gatsby?

Fortunately, the tutorial for Gatsby is also a great way to gain knowledge of React in general. If you only use Gatsby for your first React app, a few things like auto-link formatting will seem like they’re core React tools when they’re really part of Gatsby. But that shouldn’t be a dealbreaker for anyone.

Having written zero React in the last year, I found my first few sites a cinch with Gatsby after spending a couple of hours in their tutorials. If I can do it, anyone can (yes, a logical fallacy on the SAT, but true here).

Can we get the team away from Git?

Here we get to the real potential of Gatsby: for our non-git-savvy team members, Gatsby can consume the database of another Content Management System (CMS), seamlessly importing articles into your main site. A CMS can present an input and editor for articles without taking responsibility for displaying the content it stores. Used this way, it's (somewhat ineptly) called a 'headless CMS.' The good news is, your content contributors can now publish content on their own, without needing to do a code push.

Gatsby is Different

I’m not aware of any other static site generator that has this kind of functionality out of the box. Problems like scheduling posts ahead of time, editing posts (without needing to edit HTML) and content with multiple links and embedded files can all be handled by a tried-and-true editor like WordPress, while Gatsby generates a high-performance React app from the data. Gatsby is worth the plunge. Let me know how it goes for your team!

Creating Cognito User Pools with CloudFormation

Matthew Bradburn | January 31, 2019

I’ve been working on creating AWS Cognito User Pools in CloudFormation, and thought this would be a good time to share some of what I’ve learned.

As an overview of this project:

  • For sign-up, I’m creating Cognito users directly from my server app. It’s also possible to have users create their own accounts in Cognito, but that’s not what I want.
  • I want to use email addresses as the user names, rather than having user names with separate associated email addresses.
  • I don’t want the users to have to mess around with temporary passwords. This is part of the ordinary Cognito workflow, but I set the initial password in my server-side code and then immediately reset the password to the same value. So there is a temporary password, but the users don’t notice it.
  • Sign-in is a transaction directly between the client-side app and Cognito; the client gets a JWT (JSON Web Token) from Cognito, which is validated by my AuthenticatedApi function on the back-end.
  • The Cognito User Pool, Lambda functions, etc., are created by CloudFormation with a SAM (Serverless Application Model) template.

Sample Source

The source code for this project is available from my GitHub. The disclaimer is that the source is pretty rough and should be tidied before being used in production.

Template Generation

I used the Stackery editor to lay out the components and generate a template:

The template is available in the Git repo as template.yaml.

This is a simple application: I have an API Gateway that my client app will hit, with one endpoint to effect sign-up and one to demonstrate an authenticated API. Each of these endpoints invokes a separate Lambda function. Those functions have access to my User Pool.

I've wired up the User Pool's triggered functions just as an experiment. Currently all the triggers invoke my CognitoTriggered function, which logs the input messages but nothing more. According to my understanding, these functions work by modifying the input message and returning it; my function returns the input message unmolested.
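
In other words, each trigger currently amounts to a pass-through along these lines (a sketch, not the repo's exact code):

// Log the trigger event and hand it back unchanged; Cognito triggers
// customize behavior by modifying this object before returning it
exports.handler = async (event) => {
    console.log(JSON.stringify(event))
    return event
}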

I’ve hand-edited the SAM template to add the user pool client:

  UserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      ClientName: my-app
      GenerateSecret: false
      UserPoolId: !Ref UserPool
      ExplicitAuthFlows:
        - ADMIN_NO_SRP_AUTH

I've set GenerateSecret to false because in a web app it's hard to keep a secret of this type. We use ADMIN_NO_SRP_AUTH during the admin-driven user-creation process. I've also added an environment variable to each of my functions so they'll get the user pool client ID.
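
Here's a sketch of that server-side sign-up flow using the AWS SDK for Node.js (error handling omitted; the environment variable names are illustrative). The user is created with the invitation email suppressed, and the NEW_PASSWORD_REQUIRED challenge is answered immediately with the same password, so the user never sees the temporary-password step:

const AWS = require('aws-sdk')

const cognito = new AWS.CognitoIdentityServiceProvider()
const { USER_POOL_ID, USER_POOL_CLIENT_ID } = process.env

async function signUp (email, password) {
    // Create the user with a temporary password and no invitation email
    await cognito.adminCreateUser({
        UserPoolId: USER_POOL_ID,
        Username: email,
        TemporaryPassword: password,
        MessageAction: 'SUPPRESS',
        UserAttributes: [{ Name: 'email', Value: email }]
    }).promise()

    // Start an admin auth flow, which comes back with NEW_PASSWORD_REQUIRED
    const auth = await cognito.adminInitiateAuth({
        UserPoolId: USER_POOL_ID,
        ClientId: USER_POOL_CLIENT_ID,
        AuthFlow: 'ADMIN_NO_SRP_AUTH',
        AuthParameters: { USERNAME: email, PASSWORD: password }
    }).promise()

    // Answer the challenge with the same password to make it permanent
    await cognito.adminRespondToAuthChallenge({
        UserPoolId: USER_POOL_ID,
        ClientId: USER_POOL_CLIENT_ID,
        ChallengeName: 'NEW_PASSWORD_REQUIRED',
        ChallengeResponses: { USERNAME: email, NEW_PASSWORD: password },
        Session: auth.Session
    }).promise()
}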

Deployment

Of course Stackery makes it simple to deploy this application into AWS, but it should be pretty easy to give the template directly to CloudFormation. You may want to go through and whack the parameters like ‘StackTagName’ that are added by the Stackery runtime.

Client Tester App

Once you've deployed the app, there are a couple of parameters from the running app to be copied to the client. These go in the source code near the top. For instance, the URI of the API Gateway is needed by the client but isn't available until after the app is deployed.

This may not be an issue for you if you're writing a web client app instead of a Node.js app, but in my case I'm using the NPM package named amazon-cognito-identity-js to talk to Cognito for authentication. That package depends on the fetch() API, which browsers have but Node.js does not. I've included the package source directly in my repo, and added a use of node-fetch-polyfill in amazon-cognito-identity-js/lib/Client.js.

Run ./client-app.js --sign-up --email <email> --password <pass> to create a new user in your Cognito pool. In real apps you should never accept passwords on the command line like this.

Once you’ve created a user, run ./client-app.js --sign-in --email <email> --password <pass>, giving it the new user’s email and password, to get a JWT for the user.

Assuming sign-in succeeds, that command prints the JWT created by Cognito. You can then test the authenticated API with ./client-app.js --fetch --token <JWT>.

Areas for Improvement

This is rather marginal sample code, as I mentioned, and there are several obvious areas for improvement:

  • The amazon-cognito-identity-js package isn’t meant for Node.js. I wonder if it makes sense to use the AWS SDK directly.

  • The AuthenticatedApi function gets public keys from Cognito on every request; they should be cached (see the sketch after this list).

  • The client-app uses the access token, but a real client app would have to be prepared to use the refresh token to generate a new access token periodically.
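
On the caching point, a common pattern is to construct the key client once at module scope so fetched keys survive across warm invocations. Here's a sketch using the jsonwebtoken and jwks-rsa packages (not the repo's actual code; the pool ID variable is illustrative):

const jwt = require('jsonwebtoken')
const jwksClient = require('jwks-rsa')

// Constructed once per container, so keys are cached across warm invocations
const client = jwksClient({
    cache: true,
    jwksUri: `https://cognito-idp.${process.env.AWS_REGION}.amazonaws.com/${process.env.USER_POOL_ID}/.well-known/jwks.json`
})

// Look up the signing key that matches the token's key ID
function getKey (header, callback) {
    client.getSigningKey(header.kid, (err, key) => {
        if (err) return callback(err)
        callback(null, key.getPublicKey())
    })
}

// Resolves with the token's claims, or rejects if verification fails
const verifyToken = (token) =>
    new Promise((resolve, reject) => {
        jwt.verify(token, getKey, {}, (err, decoded) =>
            err ? reject(err) : resolve(decoded))
    })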

Chaos Engineering Ideas for Serverless

Danielle Heberling | January 24, 2019

The Principles of Chaos Engineering define chaos engineering as:

The discipline of experimenting on a distributed system in order to build confidence in the system’s capability to withstand turbulent conditions in production.

The high-level steps for implementing chaos experiments are: define your application's steady state, hypothesize about that steady state in both the control and experimental groups, inject realistic failures, observe the results, and make changes to your code base/infrastructure as necessary based on the results.

Chaos experiments are not meant to replace unit and integration tests. They’re intended to work with those existing tests in order to assure the system is reliable. A great real-world analogy is that chaos experiments are like vaccines: A vaccine contains a small amount of the live virus that gets injected into the body in order to prompt the body to build up immunity to prevent illness. With chaos experiments, we’re injecting things like latency and errors into our application to see if the application handles them gracefully. If it does not, then we can adjust accordingly in order to prevent incidents from happening.

Sometimes chaos engineering gets a bad reputation as “breaking things for fun.” I believe the problem is that there’s too much emphasis on breaking things while the focus should be on why the experiments are being run. In order to minimize your blast radius, it’s recommended to begin with some experiments in non-production environments during the workday while everyone is around to monitor the system. On the people side of things, make sure you communicate with your entire team what you’re doing, so they aren’t caught by surprise. Once you have experience and confidence running experiments, you can then move onto running them in production. The end goal is to run experiments in production since it is difficult to have an environment that matches production exactly.

Traditionally, chaos engineering at a high level means running experiments that often involve shutting off servers, but if you are in a serverless environment with managed servers, this poses a new challenge. Serverless environments typically have smaller units of deployment, but more of them. This means that for someone who wants to run chaos experiments, there are more boundaries to harden around in your applications.

If you’re thinking about running some chaos experiments of your own in a serverless environment, some ideas of things to look out for are:

  • Performance/latency (most common)
  • Improperly tuned timeouts
  • Missing error handling
  • Missing fallbacks
  • Missing regional failover (if using multiple regions)

For serverless, the most common experiments involve latency injection or error injection into functions.

Some examples of errors you could inject are:

  • Errors common in your application
  • HTTP 5xx
  • Amazon DynamoDB throughput exceeded
  • Throttled AWS Lambda invocations

A pretty neat trend I’ve seen is folks writing their own libraries to inject latency and/or errors into a Lambda function using the new Lambda layers feature. Stackery makes it easy to add layers to your function. Another idea is to implement chaos experiments as part of your CI/CD pipeline. If you don’t want to write your own library there are a lot of open source projects and also some companies that offer “chaos-as-a-service.”
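
A hand-rolled version of that idea fits in a few lines. Here's a sketch (the delay, failure rate, and module path are all illustrative; purpose-built libraries offer much finer control):

'use strict'

// Wraps a handler so each invocation may be delayed or fail on purpose
const injectChaos = (handler, { latencyMs = 0, errorRate = 0 } = {}) =>
    async (event, context) => {
        if (latencyMs > 0) {
            await new Promise((resolve) => setTimeout(resolve, latencyMs))
        }
        if (Math.random() < errorRate) {
            throw new Error('Injected chaos failure')
        }
        return handler(event, context)
    }

// Adds 300ms of latency and fails 5% of invocations
exports.handler = injectChaos(require('./app').handler, { latencyMs: 300, errorRate: 0.05 })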

If you’d like to go into more detail on implementation, I’d suggest checking out this article to see some code. This GitHub repo also has some great resources on the overall topic of Chaos Engineering. I hope this post gave you some ideas and inspiration on different ways you can test your serverless environment to ensure system reliability for your customers!

How I Got Comfortable Building with Serverless

Jun Fritz | January 17, 2019

A few months back, I blogged about my experience arriving at Stackery after code school. Months later, each day is still interesting and challenging, and I'm so glad to have decided to pursue serverless as my concentration. I credit my AWS certifications for narrowing my focus enough to lead me to this point. The serverless community puts so much emphasis on exploration and on starting your work or experiments today that, with some exposure to AWS, you can get started right away. Here's a breakdown of how I went from serverless novice to software engineer at Stackery:

Gaining Confidence

After graduating from code bootcamp, I was eager to dive into the job search and get placed somewhere awesome right away. What I discovered was that many of the listings required some level of AWS experience. I was interested in cloud computing, but I had very little knowledge of AWS, so I opted for what felt like the next best thing: certifications. I wasn’t sure which to choose, so I decided to start with three associate-level certs offered by AWS: Solution Architect, Developer, and SysOps. Each covered different roles, which forced me to adopt a distinct mindset for each, deepening my understanding of cloud services and their use cases. Taking the exams gave me enough confidence to begin building cloud-based applications and showcasing them to potential employers. The ability to discuss different cloud infrastructures and build my own serverless web apps helped me earn a spot on the Stackery engineering team, and I was excited to start gaining more real-world experience.

Real-World Experience

In my current role at Stackery I get to work with AWS resources every day, but discovering what problems other serverless developers are working on has influenced me a lot. There are developers out there that have had different experiences with the tools I use, and it’s important for me to be aware of them so that I’m not limited to my own way of thinking.

The online forums and webinars about serverless development provide me with tons of useful content, but I find a lot of value in the use cases and questions I get from others. For example, attending any of the live 300-400 level online webinars from AWS provides a deep dive into various serverless topics, along with a pointed Q&A session to address common concerns in the community. So, even if your day-to-day work doesn't consist of serverless or AWS, you can still engage with these topics by being open to the real-world experiences of other developers.

Resources like these online forums and AWS webinars can help keep you in the know about new advancements in serverless and AWS.

Learn by Building

Before I was introduced to Stackery, serverless development was definitely a challenge. Building my own serverless applications helped solidify what I’d learned from my certifications, but I’d get frustrated with the amount of configuration and guesswork required to write out my own CloudFormation templates.

With Stackery, it doesn’t take long for me to quickly define cloud resources and integrate them with other services; I’m able to build a stack that confirms my understanding of a specific serverless workflow rapidly. Using Stackery has helped me build serverless applications and thus learn more about serverless development.

If any part of you is compelled to learn more about serverless right out of school, I can’t recommend the above strategies highly enough. It really just comes down to curiosity, research, experimentation and, if you want a boost in confidence, perhaps a certification or two.

Get the Serverless Development Toolkit for Teams now and get started for free. Contact one of our product experts to get started building amazing serverless applications today.
