Stacks on Stacks

The Serverless Ecosystem Blog by Stackery.

Serverless in 2019: From 'Hello World' to 'Hello Production'

Nate Taggart | January 04, 2019

A Look Ahead

As the CEO of Stackery, I have had a unique, inside view of serverless since we launched in 2016. I get to work alongside the world’s leading serverless experts, our customers, and our partners and learn from their discoveries. It’s a new year: the perfect time to take stock of professional progress, accomplishments, and goals. The Stackery team has been in this mindset for months, focusing on what 2019 means for this market. After two-and-a-half years of building serverless applications, speaking at serverless conferences, and running the world’s leading serverless company, I have a few ideas of what’s in store for this technology.

1) Serverless will be “managed cloud services,” not “FaaS”

As recently as a year ago, every serverless conference talk had an obligatory “what is serverless” slide. Everyone seemed to have a different understanding of what it all meant. There were some new concepts, like FaaS and “events,” and a lot of confusion on the side. By now, this perplexity has been quelled and the verdict is in: serverless is all about composing software systems from a collection of cloud services. With serverless, you can lean on off-the-shelf cloud services for your application architecture and focus on business logic and application needs, while (mostly) ignoring infrastructure capacity and management.

In 2019, this understanding will reach the mainstream. Sure, some will continue to fixate on functions-as-a-service while ignoring all the other services needed to operate an application. Others will attempt to slap the name onto whatever they are pitching to developers. But, for the most part, people will realize that serverless is more than functions because applications are more than code.

I predict that the winners in serverless will continue to be the users capturing velocity gains to build great applications. By eschewing the burden of self-managed infrastructure and instead empowering their engineers to pull ready-to-use services off the shelf, software leaders will quickly stand up production-grade infrastructure. They’ll come to realize that this exciting movement is not really “serverless” so much as it is “service-full” - as in applications full of building blocks as a service. Alas, we’re probably stuck with the name. Misnomers happen when a shift is born out of necessity, without time to be fine-tuned by marketing copywriters. I’ll take it.

2) The IT Industrial Complex will throw shade

The IT Industrial Complex has billions of dollars and tens of thousands of jobs reliant on the old server model. And while these vendors are busy cloud-washing their businesses, the move to serverless leaves them far less enthusiastic about cloud-native disruption.

So get ready for even more fear, uncertainty, and doubt that the infrastructure old-guard is going to bring. It won’t be subtle. You’ll hear about the limitations of serverless (“you can’t run long-lived jobs!”), the difficulty in adoption (“there’s no lift-and-shift!”), and the use cases that don’t fit (“with that latency, you can’t do high-frequency trading!”). They’ll shout about vendor lock-in — of course they’d be much happier if you were still locked-in with their physical boxes. They’ll rail against costs (“At 100% utilization, it’s cheaper to run our hardware”), and they’ll scream about how dumb the name “serverless” is (you’ve probably gathered that I actually agree with this one).

I’d rather write software than patch infrastructure any day.

The reality? The offerings and capabilities of the serverless ecosystem are improving at a velocity unlike anything the IT infrastructure market has ever delivered. By the end of 2019, we’ll have more languages, more memory, longer run times, lower latency, and better developer ergonomics. The old guard will ignore the operational cost of actually running servers — and patching, and scaling, and load-balancing, and orchestrating, and deploying, and… the list goes on! Crucially, they’ll ignore the fact that every company invested in serverless is able to do more things faster and with less. Serverless means lower spend, less hassle, more productive and focused engineers, apps with business value, and more fun. I’d rather write software than patch infrastructure any day.

Recognize these objections for what they are: the death throes of an out-of-touch generation of technology dinosaurs. And, as much as I like dinosaurs, I don’t take engineering advice from them.

3) Executives will accelerate pioneering serverless heroes

Depending on how far your desk is from the CEO of your company, this will be more or less obvious to you, but: your company doesn’t want to invest in technology because it’s interesting. Good technology investments are fundamentally business investments, designed to drive profits by cutting costs, driving innovation, or both.

Serverless delivers on both cost efficiency and innovation. Its pay-per-use model is substantially cheaper than the alternatives and its dramatically improved velocity means more business value delivery and less time toiling on thankless tasks. The people who bring this to your organization will be heroes.

So far, most organizations have been adopting serverless from the bottom-up. Individual developers and small teams have brought serverless in to solve a problem and it worked. But in 2019 a shift will happen. Project milestones will start getting hit early, developers will be more connected to customer and business needs, and IT spend will come in a little lower than budgeted… And the executive team is going to try to find out why, so they can do more of it.

So my prediction is that in 2019, serverless adoption will begin to win executive buy-in and be targeted as a core technology initiative. Serverless expertise will be a very good look for your team in 2019.

4) The great monolith to serverless refactoring begins

While greenfield apps led the way in serverless development, this year, word will get out that serverless is the fastest path to refactoring monoliths into microservices. In fact, because serverless teams obtain significant velocity from relying largely on standard infrastructure services, many will experience a cultural reset around what it means to refactor a monolith. It’s easier than ever before.

While “you can’t lift and shift to serverless” was a knock in 2018, 2019 will show the enterprise that it’s faster to refactor in serverless than to migrate. They will see how refactoring in serverless takes a fraction of the time we thought it would for a growing number of applications. Check out the Strangler Pattern to see how our customers are doing this today. When you combine this method with Lambda Layers and the rapid march of service innovations, the options for evolving legacy applications and code continue to broaden the realm where serverless shines.

5) Serverless-only apps will transition to serverless-first apps

“Hello World” applications in tutorials are good fun, and their first functions deliver real value quickly, without an operations team. They are great wins for serverless.

However, when it comes to building serverless business applications, every software team will need to incorporate existing resources into their applications: production databases and tables, networks, containers, EC2 instances, DNS services, and more. Today, complex YAML combined with the art of managing parameters across dev, test, staging, and production environments holds many teams back from effectively building on what already exists. A note: Stackery makes using existing resources across multiple environments easy.

In 2019, serverless will serve you more.

In 2019, we’ll see enormous growth in applications that are serverless-first, but not serverless only. The “best service for the job” mantra is already driving teams under pressure to deliver results to serverless. We believe teams who want to move fast will turn to serverless for most of what they need, but won’t live in a serverless silo.

To conclude: In 2019, serverless will serve you more.

All of these predictions add up to one obvious conclusion from my perspective: Serverless is finally mainstream and it’s here to stay. Stackery already helps serverless teams accelerate delivery from “Hello World” to “Hello Production”. We’d love to help your team, too.

Conquering a Double-Barrel Webpack Upgrade

Anna Yovandich | December 20, 2018

Over the last couple of weeks, we’ve prioritized some sustaining product goals to polish the codebase and update some big-ticket dependencies. Among those updates were React, Redux, and Webpack - the biggies. The first two were pretty painless and inspired the confidence to approach updating Webpack from v2 to v4 like maybe no big deal! Though confidence was high, I felt a slight chill and a twinge of doubt at the prospect of making changes to our build configs.

Enter Webpack 4

The latest version of Webpack has the lowest barrier to entry of any version yet. Its new mode parameter comes with default environment configs and enables built-in optimizations. This “no config” option is ideal for a new project and/or a newcomer to Webpack who wants to get started quickly. Migrating an existing config is a little trickier, but following the migration guide got our development environment in pretty good shape. I was pleasantly shocked by the Webpack documentation. It’s thorough, well organized, and has improved significantly from the early days of v1.

Development Mode

To begin migrating our development config, I added the new mode property, removed some deprecated plugins, and replaced autoprefixer with postcss-preset-env in the postcss-loader plugin config. Starting the dev server (npm start) at this point led to the first snag: this.htmlWebpackPlugin.getHooks is not a function. Hunting that error landed me in an issue thread suggesting a fix - which did the trick. Development mode: good to go. Confidence mode: strong.

Production Mode

Continuing migration with the production config was a similar process. We have a fairly standard setup to compile the static build directory: transpile (ES6 and JSX) and minify JS; transform, externalize, and minify CSS; then generate an index.html file to tie it all together. However, running the production build (npm run build) was a different story.


The first issue was harsh: FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory. Ooof! Lots of searching and skimming repeatedly offered the same suggestion: pass the node process the argument --max_old_space_size=<value>, which increases the heap memory allocation. It felt like slapping some tape on a shiny new toy, but it enabled the build process to complete successfully.

Feeling unsatisfied with band-aiding an ominous failure, I investigated why the build was consistently choking on source map generation, and here is where I discovered a two-alarm fire:

  1. Our main (and only) bundle is 1.6MB
  2. That one giant bundle is accompanied by a behemoth source map…19MB to be exact. 😱 Not ok.

Code Splitting

First, the bundle needs to be split by configuring optimization.splitChunks. Then, the vendor source maps need to be excluded via the SourceMapDevToolPlugin exclude option. An important step when using SourceMapDevToolPlugin is setting devtool: false. Otherwise, its configuration (with exclude rules) will get trampled by Webpack’s devtool handling, which will output another monster source map (mapping the entire build again).

devtool: false,
optimization: {
  splitChunks: {
    chunks: 'all',
    name: true,
    cacheGroups: {
      vendors: {
        test: /[\\/]node_modules[\\/].*\.js$/,
        filename: 'static/js/vendors.[chunkhash:8].js',
        priority: -10
      },
      default: {
        minChunks: 2,
        priority: -20,
        reuseExistingChunk: true
      }
    }
  }
},
plugins: [
  new webpack.SourceMapDevToolPlugin({
    filename: 'static/js/[name].[chunkhash:8]',
    exclude: /static\/js\/vendors*(.+?).js/
  })
]

With the build output in much better shape (though the vendors bundle should be further split into smaller chunks), I tried removing the node argument band-aid and re-running the build command, sans gargantuan source map. Success! The fatal error was almost exclusively due to source-mapping one enormous build.
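As for further splitting that vendors bundle, one common follow-up is to give each npm package its own chunk via a cacheGroups name function. A sketch of that idea (the helper name is mine, and the regex assumes a standard node_modules layout):

```javascript
// Derive an npm package name from a module's context path, e.g.
// "/app/node_modules/lodash/fp" -> "lodash". Returns null for non-vendor modules.
function packageNameFromContext(context) {
  const match = context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/);
  return match ? match[1].replace('@', '') : null;
}

// Usage inside optimization.splitChunks.cacheGroups: one chunk per package,
// so a single dependency upgrade only invalidates that package's chunk.
const perPackageVendors = {
  test: /[\\/]node_modules[\\/]/,
  name(module) {
    return `vendor.${packageNameFromContext(module.context)}`;
  },
  priority: -10
};
```

The per-package names also make it much easier to spot which dependency is bloating the build.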

Minify CSS

Now the build succeeds and I’m cookin’ with gas. However, the CSS file is much bigger than it used to be…it’s no longer minified. One of the changes in this upgrade was replacing ExtractTextPlugin with MiniCssExtractPlugin (which extracts all CSS modules into a separate file). However, MiniCssExtractPlugin does not handle minification (like ExtractTextPlugin did). To minify CSS, the OptimizeCSSAssetsWebpackPlugin (aka OCAWP) is necessary.

To include OCAWP, add optimization.minimizer configuration to the module:

optimization: {
  minimizer: [
    new OptimizeCSSAssetsWebpackPlugin({
      cssProcessorOptions: {
        parser: require('postcss-safe-parser'),
        map: {
          inline: false,
          annotation: true
        }
      },
      cssProcessorPluginOptions: {
        preset: ['default', {
          discardComments: {
            removeAll: true
          }
        }]
      }
    })
  ]
}

Now, CSS is minified but…JavaScript is not. 😑 Hoo boy.

Minify JS

By default, Webpack uses UglifyJs to minify JavaScript. When optimization.minimizer is customized (in this case for CSS minification), JS minification needs to be explicitly handled as well. Now the optimization.minimizer config contains OCAWP and UglifyJs, but the build script fails again, citing an Unexpected token: keyword (const) error from UglifyJs. Siiigh.

It turns out, uglify-js (the parser used by UglifyJsWebpackPlugin) does not support ES6 uglification. The maintainer of UglifyJsWebpackPlugin, as well as the Webpack docs urge the adoption of TerserWebpackPlugin instead. This works out great, since the next version of Webpack will use Terser as its default minifier. Thank you, next!

optimization: {
  minimizer: [
    new OptimizeCSSAssetsWebpackPlugin({...}),
    new TerserWebpackPlugin({
      sourceMap: true,
      parallel: true,
      terserOptions: {
        parse: {
          ecma: 8
        },
        compress: {
          ecma: 5,
          warnings: false,
          comparisons: false,
          inline: 2
        },
        output: {
          ecma: 5,
          comments: false,
          ascii_only: true
        }
      }
    })
  ]
}

The production build is finally compiling as expected. There are still improvements to be made but I will rest easier knowing that this configuration isn’t exploding CPUs and that I have a better grip on optimizations going forward.

It’s been a tough and humbling week. Configuring Webpack’s loaders and plugins correctly can feel overwhelming - there are countless options and optimizations to understand. If you or someone you love is going through frontend dependency hardships, just know: it gets better and you are not alone. Hang in there!

Infrastructure-as-Code Is The New Assembly Language For The Cloud

Chase Douglas | December 13, 2018

My career as a software engineer started in 2007 at Purdue University. I was working in the Linux kernel and researching how data was shuffled between the kernel and the user application layers. This was happening in huge clusters of machines that all talked to each other using OpenMPI — how supercomputers, like those at Los Alamos National Labs, operate to perform their enormous calculations around meteorology, physics, chemistry, etc.

It was an exciting time, but I had to learn a ton about how to debug the kernel. I’d only started programming in C over the previous year, so it really stretched my knowledge and experience. A big component of this was figuring out how to navigate a gigantic code base, which hit 6 million lines of code that year (again, in 2007!) There were times when I felt helpless trying to make sense of it all, but I will be forever grateful for the experience.

Being thrown in the deep-end meant that I was exposed to the way real-world code can be modularized. I learned how to quickly dissect a large codebase and how to debug in some of the toughest environments. But over time I also realized that I had learned a lot of skills that are largely irrelevant to how the vast majority of people build business value into software today. I now build business value that solves more abstracted problems than how bits are shuffled through a networking stack.

It’s these higher order abstractions that help engineering teams realize pivotal business results.

The main drivers of software-engineering productivity are the abstractions used to reach development goals. You can write software using CPU assembly languages or modern scripting languages. In a theoretical sense, you can achieve the same software goals with either approach. But realistically, productivity will be higher with modern scripting languages than with assembly languages.

Yet, everything we write today compiles down to assembly language in some form, even if it’s through Just-In-Time compilation. That’s because assembly language is the core medium we use to communicate intent to hardware, which ultimately carries out the operations. But now, we no longer directly write software with it; we have better abstractions.

Infrastructure-as-Code (IaC) fulfills the same foundational mechanism for cloud computing. It informs the cloud provider with raw data about our intentions: create a function here with these permissions and create a topic over there with this name.

Just as with assembly language, we have been writing IaC templates by hand because there have not been any better methods.

Just as with assembly language, we have been writing IaC templates by hand because there have not been any better methods. Serverless frameworks are ever-so-slightly better abstractions; however, many adopters of these frameworks have yet to achieve meaningful business-productivity gains. This is largely because, once off the beaten path, you end up writing bare CloudFormation. The whole process leaves you back at square one for some of your most complicated infrastructure, like VPCs and databases.

IaC is the only sane way to provision cloud infrastructure. That means it’s time for us to find abstractions on top of IaC that provide us with meaningful productivity gains. This is where Stackery comes in. Stackery provides you with an easy drag-and-drop interface to configure your serverless IaC templates. Crucially, you can also import your existing IaC templates (AWS SAM and others) and use Stackery to extend your applications without worrying that Stackery will delete or modify unrelated infrastructure configuration.

My career could have taken a number of different paths, but I’m glad to be in serverless today. The industry is moving steadily in this direction and my team creates solutions that make it more manageable for everyone. Notably, the “deep-end” of serverless is much more navigable than the technology I was working with in 2007. Unlike certain aspects of what I learned in the bowels of the Linux kernel, serverless and the tools that manage our IaC templates are the new assembly language for the cloud. Stackery and IaC are significant when considering how the majority of developers will be building business value into software going forward.

Hi! I’m Gracie Gregory, Stackery’s New Copywriter

Gracie Gregory | December 05, 2018

I’ve worked in various sectors of tech since graduating college in 2014 with a Russian literature degree and an appetite for something entirely new post-graduation. After meeting with a handful of Portlanders in various sectors of business, I landed a PR and branding role at The Linux Foundation where I stayed for years. At the risk of using a platitude, joining the open source community was like “drinking from the firehose” for someone used to reading novels all day.

Since then, my career has taken other unexpected turns but always within technology. Because I am primarily a writer, I’ve often lacked the hands-on experience that would make new concepts like cloud-native, Node.js, and yes, serverless, come naturally. While my right-brain sometimes limits my ability to follow along in this particular realm without asking 10 million questions, I do believe an outsider’s perspective is an asset to a tech company’s communication strategy. Since I approach most technological concepts as an outsider, the content I produce is positioned for a more general audience. If you enjoy learning, technical writing from a non-technical background is really a dream job.

I applied to work at Stackery in Fall 2018 for that reason: Serverless is a fascinating new corner of computing and much of the landscape is still burgeoning. Working at Stackery would mean I’d be challenged every day and surrounded by pioneers in the field. I thought it would be a humbling opportunity and indeed it has been. Every day is a crash course in modern software development, tech history, and the variegated nature of startup life.

Throughout the interview process, everyone was kind enough to assure me that it was ok if I didn’t fully “get” serverless that day. They all told me that the space itself was relatively new and that, if I were hired, I’d have lots of resources to call upon. While I was grateful for the team’s reassurance, it didn’t quell my anxious desire to better understand serverless computing right that second. I had created an account with Stackery and played around in the demo, which really helped me frame things. But I still had fundamental questions. It was clear I had to lay some major groundwork to be a worthwhile candidate. I did, however, come up with a few serverless comparisons while I was researching the company. This made the concepts easier for me to digest before interviewing with the team.

“I wouldn’t risk throwing any of those out there,” my friend said the eve of my final interview. “What if you’re way off-base? You’d look like an idiot.”

Since trying to avoid looking like an idiot is the soulful principle that guides my life’s path, I was planning to take this advice to heart. But when I actually met my interviewers, I quickly understood that this was an experimental culture that encouraged trying things before judging them. When I met with Stackery’s VP of Product and Engineering, Sam Goldstein, I actually felt empowered to test out a few of my serverless metaphors to see whether or not I was on track to understanding. I was pleasantly surprised that he said they were (at the most general level) apt.

If you’re an expert, do not take this too seriously. What I am about to say will, best case scenario, make me look like a newb. Worst case scenario, it will make me look like a n00b. For anyone non-technical who might have found our blog without a drop of serverless understanding, you have permission to use my Cliff’s Notes below. I hope this will clarify serverless computing and get you started with this amazing technology!

Serverless is Like Dropshipping

At the risk of defining a theoretically new concept with another theoretically new concept…dropshipping!

Dropshipping uses platforms like Shopify to allow hopeful online sellers to only tackle the parts of eCommerce they want. In most cases, this means curating and designing the layout of their store. They pick from a vast library of products that appeal to them/gel with their brand vision and get to outsource the headache of managing inventory to a warehouse. People have been doing this in eCommerce for a while but new platforms make it accessible to more people or at least help them get it up and running faster. Serverless is similar in that engineers are able to focus on their application rather than infrastructure. Like dropshippers, serverless engineers don’t have to worry about their “inventory” (i.e. implementing and maintaining infrastructure.) Both are something of a jackpot for those who want to focus more on the art and science of their work instead of the logistics or administration.

Serverless is Like WiFi

This comparison is for those who don’t understand what precisely the “less” in serverless means. Imagine you are an average American in 2003: right around when WiFi was solidified as the home internet solution. You want faster internet in your home and to access it easily and without complications. You’ve known about WiFi for a while and finally decide to hook your home up, but can’t quite conceptualize how the wireless part works. Will you still need a router? Will you need to become a sysadmin to use it? We now know the answers to be a vehement yes and no, respectively. Yes, you still need a router, but it won’t take up space; you’ll basically never interact with it. It’s upstairs in a spare bedroom or hidden in your TV stand. Out of sight, but still enabling you to check your email and watch Ebaum’s World videos (it’s 2003, after all). Serverless is the same. There is still a server, it’s simply elsewhere and not your problem as an engineer on a daily basis.

Serverless is Like Modern Car Insurance

Stay with me here. Let me say upfront that serverless is obviously more interesting than car insurance but the latter is creating relevant shockwaves in the industry. Ever heard of pay-as-you-go car insurance? Essentially, the provider sends you a small device to implant in your car. This allows them to track how much you drive and you only pay for the miles you use. This differs from traditional insurance because a) it’s cheaper and b) it’s a more lightweight solution. What I mean by this is, it’s there when you need it and not your problem when you don’t. Serverless is similar. You never pay for idle time, however, the tools are reliable and available when in use. Both are also beneficial in inconsistent traffic scenarios (… you promised to humor me.)

What’s the point of publishing all of the above, besides indulgently breaking down how my brain works? Well, the undergraduate class of 2019 gets their diploma in just six months and I can guarantee you that serverless will have expanded even further by then. I believe it to be the future of software development and writers are, of course, needed in this space. It doesn’t serve people like me to hear terms like “serverless” and write it off as a buzzword that’s above our paygrade; to do so would mean missing out on a fascinating subject to write about. So, if you work in marketing at any kind of company, I encourage you to start a dialogue with your engineering team. Learn from them and ask questions, no matter the beat you decide to cover.

It’s time for all of us to get involved in new technology as it develops. Serverless is a great place to start.

If you manage a software team and are interested in Stackery, set up a call with our serverless engineers today.

Lambda Layers & Runtime API: More Modular, Flexible Functions

Sam Goldstein | November 29, 2018

Lambda layers and the runtime API are two new features of AWS Lambda which open up fun possibilities for customizing the Lambda runtime and decrease duplication of code across Lambda functions. Layers lets you package up a set of files and include them in multiple functions. The runtime API provides an API for interacting with the Lambda service’s function lifecycle events, which lets you be much more flexible about what you run in your Lambda.

Layers is aimed at a common pain point teams hit as the number of Lambdas in their application grows. Today, we see customers performing gymnastics in order to compile binaries or package reusable libraries inside functions. One downside of this behavior is that it is difficult to ensure all functions have the latest version of the dependency, leading to inconsistencies across environments or over-complicated, error-prone packaging processes. For example, at Stackery we compile git and package it into some of our functions to enable integration with GitHub, GitLab, and CodeCommit. Prior to layers, upgrading that dependency meant each developer responsible for a function repackaging those files in each related function. With layers, it’s much easier to standardize those technical and human dependencies, and the combination of layers and the runtime API enables a cleaner separation of concerns between business-logic function code and cross-cutting runtime concerns. In fact, in Stackery, adding a layer to a function is just a dropdown box. That feels like a little thing, but it opens up several interesting use cases:

1. Bring Your Own Runtime

AWS Lambda provides 6 different language runtimes (Python, Node, Java, C#, Go, and Ruby). Along with layers comes the ability to customize specific files that are hooked into the Lambda runtime. This means you can (gasp!) run any language you want in AWS Lambda. We’ve known there is no serverless “lock-in” for some time now, but with these new capabilities you are able to fully customize the Lambda runtime.

To implement your own runtime you create a file called bootstrap in either a layer or directly in your function. It must have executable permissions (chmod +x).

Your bootstrap custom runtime implementation must perform these steps:

  1. Load the function handler using the Lambda handler configuration. This is passed to bootstrap through the _HANDLER environment variable.

  2. Request the next event over http: curl "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/next"

  3. Invoke the function handler and capture the result

  4. Send the response to the Lambda service over http:

curl -X POST "http://${AWS_LAMBDA_RUNTIME_API}/2018-06-01/runtime/invocation/$INVOCATION_ID/response" -d "$RESPONSE"

It’s pretty much guaranteed there will be a bunch of new languages for you to deploy any minute through layers. At Stackery we’re debating whether a PHP or Haskell layer would be of greater benefit.

2. Shared Binaries and Libraries

Serverless apps often rely on reusable libraries and commands which the business logic code calls into. For example, our engineering team runs git inside some of our functions, which we package alongside our node.js function code. Scientific libraries, shell scripts, and compiled binaries are a few other common examples. While it’s nice to be able to package any files along with our code, when these dependencies are used across many functions, need to be compiled, or are updated frequently, you end up with ever-increasing function build complexity and team distractions.

With layers you can extract these shared dependencies and register that package within the account. In Stackery’s function editor you’ll see a list of all the layers in your account and can apply them to that function. This simplifies the management and versioning of reusable libraries used by your functions.

The layers approach has the added benefit that it’s easier to keep dependencies in sync across all your functions and to upgrade these dependencies across your microservices. Layers provides a way to reduce duplication in your function code and shared libraries in layers are only counted once against AWS storage limits regardless of how many functions use the layer. Layers can also be made public so it’s likely we’ll see open source communities and companies publish Lambda layers to make it easier for developers to run software in Lambda.

Serverless Cross-Cutting Concerns

By now it should be clear that layers unlock some exciting possibilities. Let’s take a step back and note this is one aspect of a broader set of good operational hygiene. Microservices have major benefits over monolithic architecture. The pieces of your system get simpler. They can be developed, deployed, and scaled independently. On the other hand, your system consists of many pieces, making it more challenging to keep the things that need to be consistent in sync. These cross-cutting concerns, such as security, quality, change management, error reporting, observability, configuration management, continuous delivery, and environment management (to name a few) are critical to success, but addressing them often feels at odds with the serverless team’s desire to focus on core business value and avoid doing undifferentiated infrastructure work.

Addressing cross-cutting concerns for engineering teams is something I’m passionate about, since throughout my career I’ve seen the huge impact (both positive and negative) they have on an engineering org’s ability to deliver. Stackery accelerates serverless teams by addressing the cross-cutting concerns that are inherent in serverless development. This drives technical consistency, increases engineering focus, and multiplies velocity. It’s the reason I’m excited to integrate Lambda layers into Stackery: now improving the consistency of your Lambda runtime environments is as easy as selecting the right layers from a dropdown. It’s the same reason we’re regularly adding new cross-cutting capabilities, such as Secrets Management, GraphQL API definition, and visual editing of existing serverless projects.

There’s a saying in software that if something hurts, you should do it more often, and this typically applies to cross-cutting problems. Best practices such as automated testing, continuous integration, and continuous delivery all spring from this line of thought. Solving these “hard” cross-cutting problems is the key to unlocking high-velocity engineering: moving with greater confidence toward your goals.

PHP on Lambda? Layers Makes it Possible!
Nuatu Tseggai

Nuatu Tseggai | November 29, 2018

PHP on Lambda? Layers Makes it Possible!

AWS’s announcement of Lambda Layers means big things for those of us running serverless in production. Layers let you define shared components that can be included with any number of Lambdas, so you no longer have to zip up your application code and all its dependencies each time you deploy a serverless stack. It also allows you to include dependencies that are much more bespoke to your particular serverless environment.

To enable Stackery customers to use Layers at launch, we took a look at common Lambda Layers use cases. I also decided to go a bit further and publish a layer that enables you to write a Lambda in PHP. Keep in mind that this is an early iteration of the PHP runtime layer, which is not yet ready for production. Feel free to use this layer to learn about the new Lambda Layers feature, begin experimenting with PHP functions, and send us any feedback; we expect it will evolve as proof-of-concept activity expands.

What does PHP do?

PHP is a general-purpose language, and you could use it to emulate the event-processing style of a typical Lambda. But really, PHP is used to create websites, so Chase’s implementation maintains that model: your Lambda accepts API Gateway events and processes them through a PHP web server.

How do you use it?

Configure your function as follows:

  1. Set the Runtime to provided
  2. Determine the latest version of the layer: aws lambda list-layer-versions --layer-name arn:aws:lambda:<your region>:887080169480:layer:php71
  3. Add the following Lambda Layer: arn:aws:lambda:<your region>:887080169480:layer:php71:<latest version>

If you are using AWS SAM it’s even easier! Update your function:

    Type: AWS::Serverless::Function
    Properties:
      Runtime: provided
      Layers:
        - !Sub arn:aws:lambda:${AWS::Region}:887080169480:layer:php71

Now let’s write some Lambda code!

<?php
header('Foo: bar');
print('Request Headers:');
print_r($_SERVER);   // the HTTP_* entries are the headers forwarded by API Gateway
print('Query String Params:');
print_r($_GET);


The response you get from this code isn’t very well formatted, but it does contain the header information passed by API Gateway.

If you try any path other than the configured API endpoint, you’ll get an error response that the sharp-eyed will recognize as coming from the PHP web server, which, as mentioned above, is processing all requests.

Implementation Details

Layers can be shared between AWS accounts, which is why the instructions above for adding a layer work: you don’t have to create a new layer for each Lambda. Some key points to remember:

  • A layer must be published in your region
  • You must specify the version number for a layer
  • For the layer publisher, the version number is an integer that increments each time you deploy the layer

How Stackery Makes an Easy Process Easier

Stackery can improve every part of the serverless deployment process, and Lambda Layers are no exception. Stackery makes it easy to configure components like your API gateway.

Stackery also has integrated features in the Stackery Operations Console, which lets you add layers to your Lambda:


Lambda Layers opens the door to more complex serverless applications that make use of a deep library of components, both internal to your team and shared publicly. Try adding some data files or a few npm modules as a layer today!

Stackery Welcomes Abner Germanow
Nate Taggart

Nate Taggart | November 21, 2018

Stackery Welcomes Abner Germanow

Today, I’m proud to announce that Abner Germanow is joining us as the company’s first chief marketing officer (CMO). Like Chase Douglas, Sam Goldstein, and I, Abner hails from the halls of New Relic, where we all contributed in our various roles to making New Relic the market leader it is today. Abner has more than 20 years of experience in global, product, and solution marketing. He has an uncanny ease in advocating and evangelizing technology, as well as in analyzing customer adoption of new technologies. Perhaps because of his years of experience as an IDC analyst, he has a way of engaging customers, helping them pinpoint issues, and then producing education and marketing campaigns that reach new customers.

I’m delighted to have Abner join the team. He assumes responsibility for building up the company’s brand and marketing the tools we have worked so hard to bring to customers. His experience reaching the early adopters of new tech solutions and expanding and engaging partners in the AWS ecosystem maps directly to our goals for Stackery.

We’ve come a long way since we launched at the Serverless Conference one year ago in October. Then, I promised that we would keep building, refining, and polishing, making serverless better and easier to use. Now that Abner has joined us, he will help get the word out that Stackery + AWS helps customers ship and iterate on new applications faster than ever before.

Please give a shout out and welcome Abner. He’s @abnerg on Twitter, and we’ll all be at re:Invent next week.

How Benefit Cosmetics Uses Serverless
Guest Author - Jason Collingwood

Guest Author - Jason Collingwood | November 21, 2018

How Benefit Cosmetics Uses Serverless

Founded by twin sisters in San Francisco well before the city became the focal point of tech, Benefit has been a refreshing and innovative answer to cosmetics customers for over 40 years. The company is a major player in this competitive industry, with a presence at over 2,000 counters in more than 30 countries and online. In recent years, Benefit has undergone a swift digital transformation, with a popular eCommerce site in addition to their brick-and-mortar stores.

When I started with Benefit, the dev team’s priority was to resolve performance issues across our stack. After some quick successes, the scope opened up to include exploring how we could improve offline business processes, as well. We started with our product scorecard, which involved measuring:

  • In-site search result ranking.
  • Product placement and mentions across home and landing pages.
  • How high we appeared within a given category.

We needed to capture all this information on several different sites and in a dozen different markets. If you can believe it, we’d been living in a chaotic, manually updated spreadsheet and wasting thousands of hours per year gathering this information. There had to be a better way.
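To make the scorecard concrete, here is a hypothetical sketch of what one of these checks boils down to: finding where a product appears in a scraped list of search results. The function and product names here are illustrative, not Benefit’s actual code:

```python
# Hypothetical sketch of the "in-site search result ranking" check.
def search_rank(result_titles, product_name):
    """Return the 1-based position of product_name within the scraped
    result titles, or None if it never appears."""
    for position, title in enumerate(result_titles, start=1):
        if product_name.lower() in title.lower():
            return position
    return None

scraped = ["Brow Setter Gel", "Hoola Bronzer", "They're Real! Mascara"]
print(search_rank(scraped, "Hoola Bronzer"))  # -> 2
```

Once a check like this runs automatically against every market’s site, the spreadsheet becomes a report that writes itself.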

Automating Applications

To monitor a large number of sites in real time, a few SaaS options exist, but their costs can be hard to justify. Moreover, most solutions are aimed at end-to-end testing and don’t offer the kind of customization we needed. With our needs so well defined, it wasn’t much work to write our own web scraper and determine the direction we needed to take.

The huge number of pages to load, though, meant that scaling horizontally was a must. Checking thousands of pages synchronously could take multiple days, which just wasn’t going to cut it when we needed daily reports!
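The fan-out idea can be sketched in a few lines: check pages concurrently instead of one at a time. In the real system each check would be a separate Lambda invocation; a thread pool and a stubbed check function stand in here purely for illustration:

```python
# Minimal fan-out sketch: check many pages concurrently.
from concurrent.futures import ThreadPoolExecutor

def check_page(url):
    # Placeholder for scraping a single page and scoring it.
    return {"url": url, "ok": True}

urls = [f"https://example.com/page/{i}" for i in range(100)]
with ThreadPoolExecutor(max_workers=20) as pool:
    # map() preserves input order, so results line up with urls.
    results = list(pool.map(check_page, urls))

print(len(results))  # -> 100
```

With Lambda doing the fan-out, the wall-clock time for thousands of pages is bounded by the slowest page, not the sum of all of them.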

“Well, let’s look into this serverless thing.”

Web monitors and testers are a classic case for serverless. The service needs to be independent of our other infrastructure, run regularly, and NOT be anyone’s full-time job to manage! We didn’t have the time or the people to spend countless hours configuring resources, and we really didn’t want to be patching servers to keep it running a year into the future.

How it Works

We use Selenium and a headless Chrome driver to load our pages and write the results to a DynamoDB table. Initially, we tried to use PhantomJS but ran into problems when some of the sites we needed to measure couldn’t connect correctly. Unfortunately, we found ourselves confronted with a lot of “SSL Handshake Failed” and other common connection timeout/connection refused errors.

The hardest part of switching to ChromeDriver from PhantomJS is that it’s a larger package, and the max size for an AWS Lambda code package is 50 MB. We had to do quite a bit of work to get our function, with all its dependencies, under the size limit.

The Trouble of Complexity

At this point, even though we now had a working Lambda, we weren’t completely out of the woods. Hooking up all the other services proved to be a real challenge. We needed our Lambdas to connect to DynamoDB, multiple S3 buckets, Kinesis streams, and an API Gateway endpoint. Then, in order to scale we needed to be able to build the same stack multiple times.

The Serverless Application Model (SAM) offers some relief from rebuilding and configuring stacks by hand in the AWS console, but the YAML syntax and the specifics of the AWS implementation make it pretty difficult to use freehand. For example, a timer to periodically trigger a Lambda is not a top-level element nor is it a direct child of the Lambda. Rather, it’s a ‘rule’ on a Lambda. There are no examples of this in the AWS SAM documentation.
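For reference, a scheduled trigger in SAM ends up nested under the function as an event, roughly like this (the resource names and schedule below are illustrative, not our actual template):

```yaml
ScraperFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs8.10
    Events:
      DailyScan:
        Type: Schedule        # backed by a CloudWatch Events rule under the hood
        Properties:
          Schedule: rate(1 day)
```

That three-levels-deep nesting under `Events` is exactly the kind of structure that’s hard to guess when writing the YAML freehand.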

At one point, we were so frustrated that we gave up and manually zipped up the package and uploaded it via the AWS Console UI… at every change to our Lambdas! Scaling AWS services themselves is simple, but we needed help coming up with a deployment and management process that could scale too.

How Stackery Helps

It’s no surprise that when people first see the Stackery Operations Console, they assume it’s just a tool for diagramming AWS stacks. Connecting a Lambda to DynamoDB involves half a dozen menus on the AWS console, but Stackery makes it as easy as drawing a line.

Stackery outputs SAM YAML, meaning we don’t have to write it ourselves, and the changes show up as commits to our code repository so we can learn from the edits that Stackery makes.

It was very difficult to run a service even as simple as ours from scratch, and now it’s hard to imagine ever doing it without Stackery. But if we ever did stop using the service, it’s nice to know that all of our stacks are stored in our repositories, along with the SAM YAML I would need to deploy those stacks via CloudFormation.


With the headaches of managing the infrastructure out of the way, we could focus our efforts on the product and new features. Within a few months, we were able to offload maintenance of the tool to a contractor. A simple request a few times a day starts the scanning/scraping process, and the only updates needed are to the CSS selectors used to find pertinent elements.

Lastly, since we’re using all of these services on AWS, there’s no need to set up extra monitoring tools, update them every few months, or generate special reports on their costs. The whole toolkit is rolled into AWS and, best of all, upkeep is minimal!

Get the Serverless Development Toolkit for Teams now and get started for free. Contact one of our product experts to get started building amazing serverless applications today.
