Stacks on Stacks

The Serverless Ecosystem Blog by Stackery.

Using Curiosity To Find Your Best Self

Farrah Campbell | February 13, 2019

As Stackery’s Ecosystems Manager, a huge part of my work revolves around meeting new people and developing relationships with them for the good of our company. I love this work not only because I’m passionate about people and serverless, but also because it keeps my curiosity muscle strong. To be good at my job, I need to do right by my personal connection to curiosity and learning, but sometimes I get off-track.

Did you know that the average person spends just 20% of their day engaged in meaningful activities that make them feel fulfilled and joyful? The rest of our day is spent sleeping, working, doing chores, and on mindless decompression activities like watching TV. If you’re not mindful, you can lose some of that precious 20% by letting unfulfilling activities consume even more of your day. For example, we spend much more time working than reading to our children. We spend more time on mindless activities than we do learning or growing. For those who are juggling higher education and a full-time job, have more than one employer, or are the caretaker of a sick family member, this time for self-motivated learning becomes even rarer and more precious.

Like many, I recently found myself in this very situation. Even as a person who constantly seeks self-improvement, I was beginning to fall into old habits, spending too much time on things that didn’t bring me joy. I could feel resentment flooding back into my life and my shield against life’s stressors was thinning. I wasn’t being true to myself and was no longer focused on personal growth. I knew something needed to change but was struggling to identify what that was.

The Talk That Changed Everything

I reached out to Andrew Clay Shafer (someone I consider a mentor) and asked what talk he was most proud of. He immediately mentioned his keynote at O’Reilly’s Velocity NYC 2013 called There is No Talent Shortage. It’s largely about company culture, but many aspects apply to your personal life as well. It touches on the practices of purpose-driven organizations and was just what I needed to hear. My biggest takeaway was the importance of finding a way to be better each day and, crucially, that talent attracts talent.

As Andrew says, “success isn’t about finding the right people, it’s about being the right people.” What can you do, each day, that will lead to new skills, new understanding or other forms of personal growth? How much of your day will you spend on things that truly bring you joy or fulfillment? Continued learning and growth are competitive advantages in the world and you need to seize them.

To change yourself, you have to first figure out what moves your soul. We tend to focus on things we think make us happy, without stepping back and figuring out what happiness really means to us. This can be really difficult when we’re balancing children, our work commute, putting food on the table, and nurturing others. But in the long run it’s extremely important not to beat yourself up about personal growth; that kind of judgment is the last thing you need on top of everything else! If you are curious about a subject or an area of your life to improve upon, that’s enough of a seed to start.

Using Curiosity To Grow

This research into personal growth and finding my authentic self led me to a life-altering article by Todd Kashdan called The Power of Curiosity. I want to share a little bit of what I learned from this article and how you can apply it to your own life:

Curiosity creates openness to unfamiliar experiences which can lead to discovery and joy. Perhaps more approachable is the fact that a curious mind can be nurtured and developed. Like any skill, the more you use it, the better you become at it. Soon enough, that skill becomes part of who you are.

Studies by Gallup show that employee engagement comes mostly from relationships and connecting with a higher purpose. People are born wanting to think, learn, and grow but oftentimes responsibilities get in the way. Listen to urges to explore: as our curiosity deepens, more opportunities emerge.

Curiosity also helps us meet new people and develop interpersonal relationships. If you are filled with earnest questions, you’ll listen more and show genuine interest in others. The best part is that the people you meet have a basic level of wanting to be heard. When they sense an authentic level of caring, they will respond by opening up and sharing even more. This leads to tighter bonds and lasting relationships in work and at home.

How To Practice Curiosity

You can invite curiosity into your life by practicing, nurturing, and cultivating it. The first step is building knowledge; seek to learn one new thing each day and that knowledge will feed on itself. Essentially, the more you learn, the more you will want to know.

Curiosity can also enter your life when you become more playful and learn to thrive on uncertainty. Think about how boring life would be if we already knew exactly what was going to happen. What if you knew the results of every football game before you watched it? Would you even watch it? What if you knew for certain what grade you would get on a final exam? Would you need to study? The uncertainty is actually what drives us most of the time, even if we are not aware of it.

However, living a curious life is not always easy or free of risk. Those of us who have a predictable, guaranteed amount of free time (in which we are adequately rested, hydrated, and energized) are probably in the minority, and the rest should be patient with themselves. Just start by doing your best to locate sources of freedom in between responsibilities: call into a motivating webinar on your commute home to decompress, or subscribe to a new, interesting podcast and listen to it while you clean the house. Even just taking a short walk to clear your head and map out creative time down the line can help. Anything to satiate your interest and invest in yourself. This is what I did when I sought out Andrew’s advice and the aforementioned article, and both were huge stress-relievers when I needed them most.

Human beings have been makers and community members since the beginning of time. I think we often lose track of this in the throes of modern life, which leads us to a cog-in-the-machine mentality. This is one of the fantastic things about working at Stackery: I’m surrounded by a team that not only works together on tough problems every day but is actually building a solution to help other engineers do the same! It’s very inspiring.

Take small risks, try new things, try looking at an old “truth” with fresh eyes, and see where that takes you. I am happy to be doing that at Stackery and look forward to every adventure along the way.

The Anatomy of a Serverless App

Toby Fee | February 11, 2019

Serverless has, for the last year or so, felt like an easy term to define: code run in a highly managed environment with (almost) no configuration of the underlying computer layer done by your team. Fair enough, but what is a serverless application? A Lambda isn’t an app by itself; heck, it can’t even communicate with the world outside of Amazon Web Services (AWS) by itself, so there must be more to a serverless app than that. Let’s explore a serverless app’s anatomy and the features that should be shared by all the serverless apps you’ll build.

Serverless applications have three components:

  • Business logic: the function (Lambda) that defines the business logic
  • Building blocks: resources such as databases, API gateways, authentication services, IoT, machine learning, container tasks, and other cloud services that support a function
  • Workflow phase dependencies: environment configuration and secrets that respectively define and enable access to the dependencies unique to each phase of the development workflow

Taken together, these three components create a single ‘Active Stack’ when running within an AWS region.

Review: What’s a Lambda?

I could write this piece in a generic tone and call Lambdas ‘Serverless Functions,’ after all, both Microsoft and Google have similar offerings. But Lambdas have fast become the dominant form of serverless functions, with features like Lambda Layers showing how Lambdas are fast maturing into an offering both the weekend tinkerer and the enterprise team can use effectively.

But what are Lambdas again? They’re blobs of code that AWS will run for you in a virtualized environment without you having to do any configuration. It might make more sense to describe how Lambdas get used:

  • You write a blob of Node, Ruby, or several other languages, all in the general mode of ‘take in a triggering event, kick off whatever side effects you need to, then return something’
  • Upload your code blob to AWS Lambda
  • Send your Lambda requests
  • AWS starts up your code in a virtual environment, complete with whatever software packages you required
  • Look at the response!
  • Send your Lambda 10,000 requests in a minute
  • AWS starts up a bunch of instances of your code, each one handling several requests
  • Look at all these responses!
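
In code, that ‘blob’ is just an exported handler. Here’s a minimal sketch in Node (the event’s exact shape depends on what triggers the function):

  // A minimal Lambda handler: take in a triggering event,
  // kick off whatever side effects you need to, then return something.
  exports.handler = async (event) => {
    console.log('received event:', JSON.stringify(event)); // our only side effect
    return {
      statusCode: 200,
      body: JSON.stringify({ message: 'hello from Lambda' }),
    };
  };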

Are Lambdas like containers? Sort of, in that you don’t manage storage or the file system directly; it should all be set in configuration. But you don’t manage Lambda startup, responses, or routing directly; you leave all of that to AWS.

Note that Lambdas do not handle any part of their communication with the outside world. They can be triggered by events from other AWS services but not by direct HTTP requests; for that, a Lambda needs to be connected to an API gateway, or more indirectly to another AWS service (e.g. a Lambda can respond to events from an S3 bucket, which could be HTTP uploads).

What supports our Lambdas?

We’ve already implied the need for at least one ‘service’ outside of just a Lambda: an API gateway. But that’s not all we need: with a virtualized operating system layer, we can’t store anything on our Lambdas between runs, so we need some kind of storage. Lambdas shouldn’t be used for extremely long-running tasks, so we need a service for that too. Finally, it’s possible that we want to make decisions about which Lambda should respond based on the type of request, so we might need to connect Lambdas to other Lambdas.

In general, we could say that every function will have a resource architecture around it that lets it operate like a fully featured application. The capabilities and palette of offerings of this resource architecture continue to expand rapidly, both in terms of the breadth of offerings for IoT, AI, machine learning, security, databases, containers, and more, as well as services to improve performance, connectivity, and cost profiles.

With all these pieces necessary to make a Lambda do any actual work, AWS has a service that lets us treat them as a unit. CloudFormation can treat a complete serverless ‘stack’ as a configuration file that can be moved and deployed in different environments. With Stackery you can build stacks on an easy graphical canvas, and the files it produces are the same YAML that CloudFormation uses natively!

Secrets

Lambdas are blobs of code that should be managed through normal code-sharing platforms like GitHub. Two problems present themselves right away: How do we tell our Lambda where it’s running, and how do we give it the secrets it needs to interact with other services?

The most common example of this will be accessing a database.

Note: If we’re using an AWS-hosted serverless database like DynamoDB, the following steps should not be necessary, since we can grant the Lambda permissions on our DB within the Lambda’s settings. Using Stackery to connect Lambdas to AWS databases makes this part as easy as drawing a line!

We need secrets to authenticate to our DB, but we also need our Lambda to know whether it’s running on staging so that it doesn’t try to update the production database during our test runs.
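
As a sketch of how that can look inside a function (the variable names here are hypothetical, not from any particular framework), both the stage and the secrets arrive as environment variables set per environment:

  // Hypothetical environment variables, injected per deployment environment.
  const stage = process.env.STAGE || 'dev';

  const dbConfig = {
    host: process.env.DB_HOST,
    user: process.env.DB_USER,
    password: process.env.DB_PASSWORD, // a secret, never hard-coded
    database: `myapp_${stage}`,        // staging code talks to the staging DB
  };

  exports.handler = async (event) => {
    // ...connect with dbConfig and do work; test runs never touch production...
    return { statusCode: 200, body: `running on ${stage}` };
  };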

So we can identify three key sections of our serverless app: our function, its resources, and the secrets/configuration that make up its environment.

The Wider World

In a highly virtualized environment, it’s counter-intuitive to ask ‘where is my code running?’ But while you can’t put a pin in a map, you must spread your app across AWS availability zones to ensure true reliability. We should therefore draw a box around our ‘environment’: our stack, its configuration, and its secrets. This entire system can exist across multiple zones, or even in services other than AWS (if you really enjoy the headache of writing code and config for multiple clouds).

How many ‘Active Stacks’ is your team running?

An active stack is a complete set of functions, resources, and environment. If you have the same function code and resources running on three environments (e.g. dev, test, and prod) you have three active stacks. If you take your production stack and distribute it to three different AWS regions, you again have three active stacks.

How this anatomy can help your team

Identifying unifying features is not, in itself, useful for your team, but it is an essential step in planning. We cannot adopt a serverless model for part of our architecture without a plan to build and manage all these features. You must have:

  • Programmers to write your functions and manage their source code
  • Cloud professionals to assign and control the resources those functions need
  • Operations and security to deploy these stacks in the right environments

You also need a plan for how these people will interact and coordinate on releases, updates, and emergencies (I won’t say outages since spreading your app across availability zones should make that vanishingly rare).

Later articles will use this understanding of the essential parts of a serverless app to explore the key decisions you must make as you plan your app.

How Stackery Can Help

Now that we’ve defined these three basic structures, it would be nice if they were truly modular within AWS. While Lambda code can easily be re-used and deployed in different contexts, it’s more difficult to use a set of resources or an environment like a module that you can move about with ease.

Stackery makes this extremely easy: you can mix and match ‘stacks’ and their environments, and easily define complete applications and re-deploy them in different AWS regions.

A Greater Gatsby: Modern, Static-Site Generation

Toby Fee | February 04, 2019

Gatsby is currently generating a ton of buzz as the new hot thing for generating static sites. This has led to a number of frequent questions like:

  • A static…what now?
  • How is GraphQL involved? Do I need to set up a GraphQL server?
  • What if I’m not a great React developer, really more of a bad React developer? Does this mean our copywriter won’t have to push to a GitHub repo to add a new page?

I had a few of these questions myself and decided to get firsthand experience by creating a few sites using Stackery and Gatsby. While I am good with JavaScript and the general mechanics of websites, I am neither a React nor a GraphQL expert. I’ve used both, but as I enter my mid-30s I find my mind is like a closet in Manhattan: it has room for only one season’s worth of interests.

What does Gatsby do exactly?

Gatsby is a static site generator, intended to build all the HTML you need based on data you’ve created. Presumably, that data is simpler to manage than straight HTML.

I don’t use a static site generator now. Should I?

A static site generator probably doesn’t make sense if your site is truly static (e.g. a marketing site that gets updated twice a year). Static site generators make more sense if you are building something that updates every 1-10 days, like a professional blog.

If you’ve ever used Jekyll, another static site generator, the basic concepts are similar: turn a folder of plain markdown files, or the like, into a complete site with just a bit of config.
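
For a folder of Markdown files, that ‘bit of config’ in Gatsby might look like this sketch of a gatsby-config.js (assuming the gatsby-source-filesystem and gatsby-transformer-remark plugins, with a hypothetical src/posts folder):

  // gatsby-config.js
  module.exports = {
    siteMetadata: { title: 'My Blog' },
    plugins: [
      // Read files from a local folder of Markdown posts...
      {
        resolve: 'gatsby-source-filesystem',
        options: { name: 'posts', path: `${__dirname}/src/posts` },
      },
      // ...and transform each one into a node you can query with GraphQL.
      'gatsby-transformer-remark',
    ],
  };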

But the limitations of Jekyll and other older generators are myriad:

  • Complex and endemic config formatting.
  • The lack of some particular feature which you need and they don’t have (it generates blog posts just great and interprets your ‘author’ field, but now you want posts with two authors, for instance).
  • There’s no great way to get new files from all team members. Teams using Jekyll often end up using a GitHub repository to store their source text files (again, Markdown or what have you), meaning your marketing copywriter needs to learn git to add a new blog post or, more often, email a developer with an attached file for her to add.
  • You can’t make a Jekyll site in React.

Gatsby offers significant improvements in these areas since, along with bare-text files, Gatsby can import data from almost any data source and generate a clean React site based on the data it queries.

GraphQL is even cooler than React, but I have concerns.

GraphQL offers a tantalizing dream: a system that can connect multiple data sources with a simple query language. Suddenly your hip NoSQL databases and creaky SQL databases can all be displayed through a single app.

But building and running GraphQL are not always simple, and fundamentally this new engine requires learning a new query structure. It’s cool to see that Gatsby is ‘powered by GraphQL’ but doesn’t that mean there will be some massive friction in getting this deployed?

But remember: Gatsby generates a static site. It uses GraphQL to access databases and generate your site, but once that process is over, GraphQL doesn’t need to be running for your site to work.
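
For instance, a page component can export a GraphQL query that Gatsby runs once at build time (a sketch assuming the Markdown setup above); the result is baked into the generated HTML, and no GraphQL server ships with the site:

  import React from 'react';
  import { graphql } from 'gatsby';

  // Gatsby runs this query during `gatsby build`, not in production.
  export const query = graphql`
    query {
      allMarkdownRemark {
        edges {
          node {
            frontmatter { title }
          }
        }
      }
    }
  `;

  // The build-time query result arrives as the `data` prop.
  export default ({ data }) => (
    <ul>
      {data.allMarkdownRemark.edges.map(({ node }) => (
        <li key={node.frontmatter.title}>{node.frontmatter.title}</li>
      ))}
    </ul>
  );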

But isn’t there still a learning curve?

Yeah, that part is inescapable. While many common setups have templates to start with, your team’s weird content database from 1997 is going to require some custom config. Still, there’s no way this could be harder than importing the data yourself, and once it’s configured you only need to update your database and re-run the Gatsby build.

What if I’m not a great React Developer?

If it took a React expert to use Gatsby, it wouldn’t really have a market and that’s clearly not the case. Ultimately, if we could make awesome React sites from scratch, what use would there really be for Gatsby?

Fortunately, the tutorial for Gatsby is also a great way to gain knowledge of React in general. If you only use Gatsby for your first React app, a few things like auto-link formatting will seem like they’re core React tools when they’re really part of Gatsby. But that shouldn’t be a dealbreaker for anyone.

Having written zero React in the last year, I found my first few sites a cinch with Gatsby after spending a couple hours in their tutorials. If I can do it, anyone can—yes, a logical fallacy on the SAT, but true here.

Can we get the team away from Git?

Here we get to the real potential of Gatsby: for our non-git-savvy team members, Gatsby can consume the database of another Content Management System (CMS), seamlessly importing articles into your main site. A CMS can present an input and editor for articles without taking responsibility for displaying the content it stores. Used this way, it’s ineptly called a ‘headless CMS.’ The good news is, your content contributors can now publish content on their own, without needing to do a code push.
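
As a sketch, pointing Gatsby at a WordPress install (a hypothetical blog.example.com, using the gatsby-source-wordpress plugin) is just another entry in gatsby-config.js:

  // gatsby-config.js (excerpt)
  module.exports = {
    plugins: [
      {
        resolve: 'gatsby-source-wordpress',
        options: {
          baseUrl: 'blog.example.com', // the headless CMS your writers use
          protocol: 'https',
          hostingWPCOM: false, // self-hosted WordPress
          useACF: false,       // not using Advanced Custom Fields
        },
      },
    ],
  };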

Gatsby is Different

I’m not aware of any other static site generator that has this kind of functionality out of the box. Problems like scheduling posts ahead of time, editing posts (without needing to edit HTML) and content with multiple links and embedded files can all be handled by a tried-and-true editor like WordPress, while Gatsby generates a high-performance React app from the data. Gatsby is worth the plunge. Let me know how it goes for your team!

Creating Cognito User Pools with CloudFormation

Matthew Bradburn | January 31, 2019

I’ve been working on creating AWS Cognito User Pools in CloudFormation, and thought this would be a good time to share some of what I’ve learned.

As an overview of this project:

  • For sign-up, I’m creating Cognito users directly from my server app. It’s also possible to have users create their own accounts in Cognito, but that’s not what I want.
  • I want to use email addresses as the user names, rather than having user names with separate associated email addresses.
  • I don’t want the users to have to mess around with temporary passwords. This is part of the ordinary Cognito workflow, but I set the initial password in my server-side code and then immediately reset the password to the same value. So there is a temporary password, but the users don’t notice it.
  • Sign-in is a transaction directly between the client-side app and Cognito; the client gets a JWT (JSON Web Token) from Cognito, which is validated by my AuthenticatedApi function on the back-end.
  • The Cognito User Pool, Lambda functions, etc., are created by CloudFormation with a SAM (Serverless Application Model) template.

Sample Source

The source code for this project is available from my GitHub. The disclaimer is that the source is pretty rough and should be tidied before being used in production.

Template Generation

I used the Stackery editor to lay out the components and generate a template.

The template is available in the Git repo as template.yaml.

This is a simple application; I have an API Gateway that my client app will hit, with one endpoint to effect sign-up and one to demonstrate an authenticated API. Each of these endpoints invokes a separate Lambda function. Those functions have access to my User Pool.

I’ve wired up the User Pool’s triggered functions just as an experiment. All the triggers currently invoke my CognitoTriggered function, which logs the input messages but does nothing else – according to my understanding, these functions work by modifying the input message and returning it, but my function returns the input message unmolested.

I’ve hand-edited the SAM template to add the user pool client:

  UserPoolClient:
    Type: AWS::Cognito::UserPoolClient
    Properties:
      ClientName: my-app
      GenerateSecret: false
      UserPoolId: !Ref UserPool
      ExplicitAuthFlows:
        - ADMIN_NO_SRP_AUTH

I’ve set GenerateSecret to false because in a web app it’s hard to keep a secret of this type. We use ADMIN_NO_SRP_AUTH because we create users as an admin during the sign-up process. I’ve also added an environment variable to each of my functions so they’ll get the user pool client ID.
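
Here’s a sketch of that server-side sign-up flow with the Node AWS SDK (a sketch, not the repo’s exact code; the environment variable names are my own): create the user with a suppressed temporary password, then immediately answer the NEW_PASSWORD_REQUIRED challenge with the same value so the user never notices it.

  const AWS = require('aws-sdk');
  const cognito = new AWS.CognitoIdentityServiceProvider();

  async function signUp(email, password) {
    // Create the user with email as the user name; suppress the invite email.
    await cognito.adminCreateUser({
      UserPoolId: process.env.USER_POOL_ID,
      Username: email,
      TemporaryPassword: password,
      MessageAction: 'SUPPRESS',
    }).promise();

    // Start an admin auth flow to receive the NEW_PASSWORD_REQUIRED challenge.
    const auth = await cognito.adminInitiateAuth({
      UserPoolId: process.env.USER_POOL_ID,
      ClientId: process.env.USER_POOL_CLIENT_ID,
      AuthFlow: 'ADMIN_NO_SRP_AUTH',
      AuthParameters: { USERNAME: email, PASSWORD: password },
    }).promise();

    // 'Reset' the password to the same value, making it permanent.
    await cognito.adminRespondToAuthChallenge({
      UserPoolId: process.env.USER_POOL_ID,
      ClientId: process.env.USER_POOL_CLIENT_ID,
      ChallengeName: 'NEW_PASSWORD_REQUIRED',
      ChallengeResponses: { USERNAME: email, NEW_PASSWORD: password },
      Session: auth.Session,
    }).promise();
  }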

Deployment

Of course Stackery makes it simple to deploy this application into AWS, but it should be pretty easy to give the template directly to CloudFormation. You may want to go through and whack the parameters like ‘StackTagName’ that are added by the Stackery runtime.

Client Tester App

Once you’ve deployed the app, there are a couple of parameters from the running app that need to be copied into the client. These go in the source code near the top. For instance, the URI of the API Gateway is needed by the client but isn’t available until after the app is deployed.

This may not be an issue for you if you’re doing a web client app instead of a Node.js app, but in my case I’m using the NPM package named amazon-cognito-identity-js to talk to Cognito for authentication. That package depends on the fetch() API, which browsers have but Node.js does not. I’ve included the package source directly in my repo and added a use of node-fetch-polyfill in amazon-cognito-identity-js/lib/Client.js.

Run ./client-app.js --sign-up --email <email> --password <pass> to create a new user in your Cognito pool. In real apps you should never accept passwords on the command line like this.

Once you’ve created a user, run ./client-app.js --sign-in --email <email> --password <pass>, giving it the new user’s email and password, to get a JWT for the user.

Assuming sign-in succeeds, that command prints the JWT created by Cognito. You can then test the authenticated API with ./client-app.js --fetch --token <JWT>.

Areas for Improvement

This is rather marginal sample code, as I mentioned, and there are several obvious areas for improvement:

  • The amazon-cognito-identity-js package isn’t meant for Node.js. I wonder if it makes sense to use the AWS SDK directly.

  • The AuthenticatedApi function gets public keys from Cognito on every request; they should be cached (see the sketch after this list).

  • The client-app uses the access token, but a real client app would have to be prepared to use the refresh token to generate a new access token periodically.
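
On the second point, a common pattern (a sketch assuming the jsonwebtoken and jwks-rsa packages; the environment variable names are hypothetical) is to create the JWKS client outside the handler so warm invocations reuse cached keys:

  const jwksClient = require('jwks-rsa');
  const jwt = require('jsonwebtoken');

  // Created once per container: warm invocations reuse the cached keys
  // instead of fetching Cognito's public keys on every request.
  const client = jwksClient({
    cache: true,
    jwksUri: `https://cognito-idp.${process.env.AWS_REGION}.amazonaws.com/` +
      `${process.env.USER_POOL_ID}/.well-known/jwks.json`,
  });

  function getKey(header, callback) {
    client.getSigningKey(header.kid, (err, key) => {
      callback(err, key && (key.publicKey || key.rsaPublicKey));
    });
  }

  exports.handler = (event, context, callback) => {
    const token = (event.headers.Authorization || '').replace('Bearer ', '');
    jwt.verify(token, getKey, (err, claims) => {
      if (err) {
        return callback(null, { statusCode: 401, body: 'Unauthorized' });
      }
      callback(null, { statusCode: 200, body: JSON.stringify(claims) });
    });
  };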

Chaos Engineering Ideas for Serverless

Danielle Heberling | January 24, 2019

The Principles of Chaos Engineering define chaos engineering as:

The discipline of experimenting on a distributed system in order to build confidence in the system’s capability to withstand turbulent conditions in production.

The high-level steps for implementing chaos experiments are: define your application’s steady state, hypothesize about that steady state in both a control and an experimental group, inject realistic failures, observe the results, and make changes to your code base/infrastructure as necessary based on the results.

Chaos experiments are not meant to replace unit and integration tests. They’re intended to work alongside those existing tests in order to ensure the system is reliable. A great real-world analogy is that chaos experiments are like vaccines: a vaccine contains a small amount of the live virus that gets injected into the body in order to prompt the body to build up immunity and prevent illness. With chaos experiments, we’re injecting things like latency and errors into our application to see if the application handles them gracefully. If it does not, then we can adjust accordingly in order to prevent incidents from happening.

Sometimes chaos engineering gets a bad reputation as “breaking things for fun.” I believe the problem is that there’s too much emphasis on breaking things, while the focus should be on why the experiments are being run. In order to minimize your blast radius, it’s recommended to begin with some experiments in non-production environments during the workday, while everyone is around to monitor the system. On the people side of things, make sure you communicate what you’re doing to your entire team so they aren’t caught by surprise. Once you have experience and confidence running experiments, you can move on to running them in production. The end goal is to run experiments in production, since it is difficult to have an environment that matches production exactly.

Traditionally, chaos engineering at a high level means running experiments that often involve shutting off servers, but if you are in a serverless environment with managed servers, this poses a new challenge. Serverless environments typically have smaller units of deployment, but more of them. For someone who wants to run chaos experiments, this means there are more boundaries to harden in your applications.

If you’re thinking about running some chaos experiments of your own in a serverless environment, some ideas of things to look out for are:

  • Performance/latency (most common)
  • Improperly tuned timeouts
  • Missing error handling
  • Missing fallbacks
  • Missing regional failover (if using multiple regions)

For serverless, the most common experiments involve latency injection or error injection into functions.

Some examples of errors you could inject are:

  • Errors common in your application
  • HTTP 5xx
  • Amazon DynamoDB throughput exceeded
  • Throttled AWS Lambda invocations

A pretty neat trend I’ve seen is folks writing their own libraries to inject latency and/or errors into a Lambda function using the new Lambda Layers feature. Stackery makes it easy to add layers to your function. Another idea is to implement chaos experiments as part of your CI/CD pipeline. If you don’t want to write your own library, there are a lot of open-source projects and also some companies that offer “chaos-as-a-service.”
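
As a minimal sketch of the idea (hypothetical variable names, not one of those libraries), a wrapper can inject latency and errors into any handler, toggled by environment variables so an experiment can be switched off quickly:

  // Chaos settings, controlled per environment.
  const chaos = {
    enabled: process.env.CHAOS_ENABLED === 'true',
    delayMs: Number(process.env.CHAOS_DELAY_MS || 300),      // injected latency
    errorRate: Number(process.env.CHAOS_ERROR_RATE || 0.05), // failure fraction
  };

  // Wrap a handler so some invocations are slowed down or failed outright.
  const injectChaos = (handler) => async (event, context) => {
    if (chaos.enabled) {
      await new Promise((resolve) => setTimeout(resolve, chaos.delayMs));
      if (Math.random() < chaos.errorRate) {
        throw new Error('chaos experiment: injected failure');
      }
    }
    return handler(event, context);
  };

  exports.handler = injectChaos(async () => {
    return { statusCode: 200, body: 'ok' };
  });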

If you’d like to go into more detail on implementation, I’d suggest checking out this article to see some code. This GitHub repo also has some great resources on the overall topic of Chaos Engineering. I hope this post gave you some ideas and inspiration on different ways you can test your serverless environment to ensure system reliability for your customers!

How I Got Comfortable Building with Serverless

Jun Fritz | January 17, 2019

A few months back, I blogged about my experience arriving at Stackery after code school. Months later, each day is still interesting and challenging, and I’m so glad to have decided to pursue serverless as my concentration. I credit my AWS certifications for narrowing my focus enough to lead me to this point. The serverless community puts so much emphasis on exploration and getting started on your work or experiments today that, with some exposure to AWS, you can get started right away. Here’s a breakdown of how I went from serverless novice to software engineer at Stackery:

Gaining Confidence

After graduating from code bootcamp, I was eager to dive into the job search and get placed somewhere awesome right away. What I discovered was that many of the listings required some level of AWS experience. I was interested in cloud computing, but I had very little knowledge of AWS, so I opted for what felt like the next best thing: certifications. I wasn’t sure which to choose, so I decided to start with the three associate-level certs offered by AWS: Solutions Architect, Developer, and SysOps Administrator. Each covered a different role, which forced me to adopt a distinct mindset for each, deepening my understanding of cloud services and their use cases. Taking the exams gave me enough confidence to begin building cloud-based applications and showcasing them to potential employers. The ability to discuss different cloud infrastructures and build my own serverless web apps helped me earn a spot on the Stackery engineering team, and I was excited to start gaining more real-world experience.

Real-World Experience

In my current role at Stackery I get to work with AWS resources every day, but discovering what problems other serverless developers are working on has influenced me a lot. There are developers out there that have had different experiences with the tools I use, and it’s important for me to be aware of them so that I’m not limited to my own way of thinking.

The online forums and webinars about serverless development provide me with tons of useful content, but I find a lot of value in the use-cases and questions I get from others. For example, attending any of the live 300-400 level online webinars from AWS provides a deep dive into various serverless topics, along with a pointed Q&A session to address common concerns in the community. So, even if your day-to-day work doesn’t consist of serverless or AWS, you can still engage with these topics by being open to the real-world experiences of other developers.

Resources like these can help keep you in the know about new advancements for serverless and AWS.

Learn by Building

Before I was introduced to Stackery, serverless development was definitely a challenge. Building my own serverless applications helped solidify what I’d learned from my certifications, but I’d get frustrated with the amount of configuration and guesswork required to write out my own CloudFormation templates.

With Stackery, it doesn’t take long for me to define cloud resources and integrate them with other services; I’m able to rapidly build a stack that confirms my understanding of a specific serverless workflow. Using Stackery has helped me build serverless applications and thus learn more about serverless development.

If any part of you is compelled to learn more about serverless right out of school, I can’t recommend the above strategies highly enough. It really just comes down to curiosity, research, experimentation and, if you want a boost in confidence, perhaps a certification or two.

The Journey to Serverless: How Did We Get Here? [Infographic]

Gracie Gregory | January 08, 2019

It’s the beginning of a new year, and when it comes to computing, going serverless is the resolution of many engineering teams. At Stackery, this excites us because we know how significant the positive impacts of serverless are and will be. So much so, in fact, that we’re already thinking about its applications for next year and beyond.

But while Stackery is toasting to serverless just as much as the headlines are, it’s crucial at this juncture to ensure that there is a wider foundational understanding. Our team is thrilled that so many others are anxious to rethink how they approach computing, save money with a pay-per-use model, and build without limits using serverless. However, we’re also proponents of knowing your serverless strategy inside and out, thereby having an airtight business use-case that anyone on the team can explain. After all, serverless didn’t rise to the top of Gartner’s top 10 infrastructure and operations trends overnight; its (figurative) source code was being drafted decades ago and this is why it’s much more than a trend. Just as we learned in history class, what’s past is prologue; the developments of yesteryear are the stage directions for today’s innovation. In other words, understanding the origins of serverless will give you a competitive advantage.

So, how exactly did we get to the edge of widespread serverless adoption? What historical developments make all of this more than a temporary buzzword? Why have the conversations about serverless been growing among your peers and leadership team, not dying down? To answer these questions, let’s interrupt our regularly-scheduled New Year celebrations with a trip back in time to 1995…

At Stackery, we’re helping engineering teams build amazing serverless applications with limitless scalability. The best part? The stage for the next decade of software development is being set now. Join us in shaping serverless computing for the next generation. Get started with Stackery today.

Serverless in 2019: From 'Hello World' to 'Hello Production'

Nate Taggart | January 04, 2019

A Look Ahead

As the CEO of Stackery, I have had a unique, inside view of serverless since we launched in 2016. I get to work alongside the world’s leading serverless experts, our customers, and our partners and learn from their discoveries. It’s a new year: the perfect time to take stock of professional progress, accomplishments, and goals. The Stackery team has been in this mindset for months, focusing on what 2019 means for this market. After two-and-a-half years of building serverless applications, speaking at serverless conferences, and running the world’s leading serverless company, I have a few ideas of what’s in store for this technology.

1) Serverless will be “managed cloud services,” not “FaaS”

As recently as a year ago, every serverless conference talk had an obligatory “what is serverless” slide. Everyone seemed to have a different understanding of what it all meant. There were some new concepts, like FaaS and “events,” and a lot of confusion on the side. By now, this perplexity has been quelled and the verdict is in: serverless is all about composing software systems from a collection of cloud services. With serverless, you can lean on off-the-shelf cloud services for your application architecture and focus on business logic and application needs, while (mostly) ignoring infrastructure capacity and management.

In 2019, this understanding will reach the mainstream. Sure, some will continue to fixate on functions-as-a-service while ignoring all the other services needed to operate an application. Others will attempt to slap the name onto whatever they are pitching to developers. But, for the most part, people will realize that serverless is more than functions because applications are more than code.

I predict that the winners in serverless will continue to be the users capturing velocity gains to build great applications. By eschewing the burden of self-managed infrastructure and instead empowering their engineers to pull ready-to-use services off the shelf, software leaders will quickly stand up production-grade infrastructure. They’ll come to realize that this exciting movement is not really “serverless” so much as it is “service-full”: applications full of building blocks as a service. Alas, we’re probably stuck with the name. Misnomers happen when a shift is born out of necessity, without time to be fine-tuned by marketing copywriters. I’ll take it.

2) The IT Industrial Complex will throw shade

The IT Industrial Complex has billions of dollars and tens of thousands of jobs reliant on the old server model. And while these vendors are cloud-washing their businesses, the move to serverless renders them much less excited about the cloud-native disruption.

So get ready for even more fear, uncertainty, and doubt that the infrastructure old-guard is going to bring. It won’t be subtle. You’ll hear about the limitations of serverless (“you can’t run long-lived jobs!”), the difficulty in adoption (“there’s no lift-and-shift!”), and the use cases that don’t fit (“with that latency, you can’t do high-frequency trading!”). They’ll shout about vendor lock-in — of course they’d be much happier if you were still locked-in with their physical boxes. They’ll rail against costs (“At 100% utilization, it’s cheaper to run our hardware”), and they’ll scream about how dumb the name “serverless” is (you’ve probably gathered that I actually agree with this one).

The reality? The offerings and capabilities of the serverless ecosystem are improving at a velocity unlike anything the IT infrastructure market has ever delivered. By the end of 2019, we’ll have more languages, more memory, longer run times, lower latency, and better developer ergonomics. The naysayers will ignore the operational cost of actually running servers — and patching, and scaling, and load-balancing, and orchestrating, and deploying, and… the list goes on! Crucially, they’ll ignore the fact that every company invested in serverless is able to do more things faster and with less. Serverless means lower spend, less hassle, more productive and focused engineers, apps with business value, and more fun. I’d rather write software than patch infrastructure any day.

Recognize these objections for what they are: the death throes of an out-of-touch generation of technology dinosaurs. And, as much as I like dinosaurs, I don’t take engineering advice from them.

3) Executives will accelerate pioneering serverless heroes

Depending on how far your desk is from the CEO of your company, this will be more or less obvious to you, but: your company doesn’t want to invest in technology because it’s interesting. Good technology investments are fundamentally business investments, designed to drive profits by cutting costs, driving innovation, or both.

Serverless delivers on both cost efficiency and innovation. Its pay-per-use model is substantially cheaper than the alternatives and its dramatically improved velocity means more business value delivery and less time toiling on thankless tasks. The people who bring this to your organization will be heroes.

So far, most organizations have been adopting serverless from the bottom-up. Individual developers and small teams have brought serverless in to solve a problem and it worked. But in 2019 a shift will happen. Project milestones will start getting hit early, developers will be more connected to customer and business needs, and IT spend will come in a little lower than budgeted… And the executive team is going to try to find out why, so they can do more of it.

So my prediction is that in 2019, serverless adoption will begin to win executive buy-in and be targeted as a core technology initiative. Serverless expertise will be a very good look for your team in 2019.

4) The great monolith to serverless refactoring begins

While greenfield apps led the way in serverless development, this year, word will get out that serverless is the fastest path to refactoring monoliths into microservices. In fact, because serverless teams obtain significant velocity from relying largely on standard infrastructure services, many will experience a cultural reset around what it means to refactor a monolith. It’s easier than ever before.

While “you can’t lift and shift to serverless” was a knock in 2018, 2019 will show the enterprise that it’s faster to refactor in serverless than migrate. They will see how refactoring in serverless takes a fraction of the time we thought it would take for a growing number of applications. Check out the Strangler Pattern to see how our customers are doing this today. When you combine this method with Lambda Layers and the rapid march of service innovations, the options for evolving legacy applications and code continue to broaden the realm of where serverless shines.

5) Serverless-only apps will transition to serverless-first apps

“Hello World” applications in tutorials are good fun, and their initial functions deliver value rapidly without an operations team. They are great wins for serverless.

However, when it comes to building serverless business applications, every software team will need to incorporate existing resources into their applications: production databases and tables, networks, containers, EC2 instances, DNS services, and more. Today, complex YAML combined with the art of managing parameters across dev, test, staging, and production environments holds many teams back from effectively building on what already exists. A note: Stackery makes using existing resources across multiple environments easy.

In 2019, we’ll see enormous growth in applications that are serverless-first, but not serverless-only. The “best service for the job” mantra is already driving teams under pressure to deliver results toward serverless. We believe teams who want to move fast will turn to serverless for most of what they need, but won’t live in a serverless silo.

To conclude: In 2019, serverless will serve you more.

All of these predictions add up to one obvious conclusion from my perspective: Serverless is finally mainstream and it’s here to stay. Stackery already helps serverless teams accelerate delivery from “Hello World” to “Hello Production”. We’d love to help your team, too.

Get the Serverless Development Toolkit for Teams now and get started for free. Contact one of our product experts to get started building amazing serverless applications today.
