Stacks on Stacks

A serverless/FaaS technology blog by Stackery

Introducing the CDN node

Author | Apurva Jantrania

With Stackery, you can use the Object Store node to serve files to your users - it provides a simple way to host everything from static websites to large video files. Because it is backed by Amazon S3, you can be assured that files in the Object Store node are stored with high durability. However, users today demand near-instantaneous access and are likely to leave if your site takes too long to load.

This is where a CDN (Content Delivery Network) comes into play. A CDN provides a large number of geographically distributed servers, so each user's request is routed to the nearest one for a quick turnaround rather than traveling halfway around the world to the server where your site is hosted, which can add seconds to the response time.

Today, we are happy to announce the CDN node, which makes setting up a CDN in front of your Object Store trivially easy. We take care of all the work needed to configure CloudFront, connect it to the S3 bucket along with all the permissions, and set up SSL for you.

Just put the CDN node in your stack, connect it to an Object Store node, tell us what domain to run on, and deploy your stack. After deployment, your site admin will get an email to approve the SSL cert; within 10-20 minutes, your CDN will be fully up and running. The only remaining step is for you to create a DNS record for the CDN.
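If your DNS is hosted in Route 53, that last step can be scripted. The sketch below builds the ChangeBatch payload you would pass to boto3's `change_resource_record_sets`; the domain and CloudFront hostname are placeholders, not values Stackery generates:

```python
# Sketch: point a custom domain at a CloudFront distribution via Route 53.
# Domain and distribution hostname below are hypothetical examples.
def cname_change_batch(domain, cdn_hostname):
    """Build a Route 53 ChangeBatch mapping `domain` to the CDN hostname."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "CNAME",
                "TTL": 300,
                "ResourceRecords": [{"Value": cdn_hostname}],
            },
        }]
    }

batch = cname_change_batch("cdn.example.com", "d111111abcdef8.cloudfront.net")
```

You would pass `batch` as the `ChangeBatch` argument to `boto3.client("route53").change_resource_record_sets`, along with your hosted zone ID.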

The people side of serverless adoption

Author | Nate Taggart

Making the technology transition to serverless architecture is a frequent topic these days. Less frequently discussed is the human component of this change, but it’s an equally important topic. Aligning your organization’s people to your technology transition is a critical component in a successful serverless strategy.

Preparing for change

Let’s face it, change can be uncomfortable. With any technology change, in particular, there’s an underlying expectation that your teams learn new skills, step out of their comfort zones, risk failure and embarrassment, and change their existing workflows and dynamics. That’s a lot to ask.

While the technology transition may be well-justified by significant organizational benefits – dramatic infrastructure cost savings, improved scalability and operational efficiencies – these benefits may mean very little in the day-to-day work of individual developers on your staff.

Ideally, the human transition to serverless begins before the technological shift, by preparing your team for the change. Some companies who have been especially successful in their transition started by creating a serverless advocacy group in advance of any implementation. The serverless advocacy group is generally composed of front-line practitioners (developers, ops, security, etc) from across the organization. Their role is to 1) identify low-risk opportunities for serverless usage across the technology stack, 2) become serverless advocates in their respective teams (educating and building enthusiasm across the company), and 3) collect feedback and concerns from their teams, working with the rest of the advocacy group to create org-wide best practices that make the transition easier for everyone.

Best (people) practices

Serverless technology is still emerging. Of course, this means that the best practices, processes, patterns, and tools are also still emerging. Surfacing the concerns from your organization is one of the most effective ways to ensure that your transition to serverless goes smoothly. Various stakeholders across the enterprise are going to be able to identify a variety of risks and techniques to improve your serverless adoption. The serverless advocacy group can be a hub where risks can be addressed and best practices can be shared quickly throughout the organization.

A commonly identified concern is that moving to serverless means giving up a lot of visibility into and control over your applications. For example, many teams may be relying on New Relic for application performance management, and will lose this tool in serverless applications. Instead of ignoring this need, introducing a replacement APM solution (like IOPipe) can alleviate a lot of the reluctance to embrace a serverless architecture, while simultaneously shortening the learning curve across the company.

Related concerns may be a lack of understanding of the new architecture (Stackery’s architecture design canvas can help!), uncertainty around building automated release pipelines for serverless (another key advantage of Stackery), finding application testing methodologies, unfamiliar security models, and how operational responsibility for these applications should be organized. Each of these can be addressed (and the solutions may vary from company to company), but they will be solved much more quickly when there are people specifically identified and assigned with managing these risks.

Process iteration

As with all new processes, expect an ongoing need to learn and iterate. Over time the serverless ecosystem will evolve and patterns will emerge, but until then be prepared to embrace iteration during your serverless adoption.

By identifying people responsible for shepherding the changes and spreading awareness of the risks and techniques to address them, you’ll be well positioned for a successful transition toward serverless technology. But more importantly, you’ll be including the success of your people as a component measurement of the success of your change strategy.

Stackery Support For NoSQL Tables

Author | Chase Douglas

One piece of technology that goes hand-in-hand with serverless tech is NoSQL databases. They tend to be more granularly scalable, as their scaling mechanism is some form of “just throw more shards into it.” And with recent improvements, services like AWS DynamoDB even have auto-scaling support.

Today, I’m pleased to announce Stackery support for NoSQL databases. You can now drag Table nodes into your stacks to provision AWS DynamoDB tables in your account. Even better is the fact that we set up autoscaling for you!

Stackery Tables can also take commands from Function nodes. You can send insert, put, select, update, and delete messages with easy-to-use structures that allow for conditions, atomic operations, limits, and ordering. And as with all our resource nodes, if you’re a DynamoDB power user you can get a reference to the table as the value of an environment variable. Then you can use whatever library you prefer to interact with the table.
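For the power-user path, here is a minimal sketch: the table name arrives via an environment variable (the variable name below is illustrative, not Stackery's actual convention), and you build requests with your own DynamoDB client. The resulting dict would be passed to `boto3.client("dynamodb").put_item(**params)`:

```python
import os

# Sketch of the "power user" path. TABLE_NAME is a hypothetical environment
# variable; in a real stack it would be injected at deploy time.
os.environ.setdefault("TABLE_NAME", "my-table")

def conditional_put_params(item_id, payload):
    """Build DynamoDB PutItem parameters that only insert if the id is new."""
    return {
        "TableName": os.environ["TABLE_NAME"],
        "Item": {"id": {"S": item_id}, "payload": {"S": payload}},
        # Atomic insert-if-absent: the put fails if a record with this id exists
        "ConditionExpression": "attribute_not_exists(id)",
    }

params = conditional_put_params("cert-123", "pending")
```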

At Stackery we use a traditional SQL database for most of our data, but we’ve already found a great use for our Table node. When we recently launched support for easier custom domain provisioning for Rest Api nodes, we needed a way to keep track of SSL certificate requests issued by AWS Certificate Manager. When a new cert is requested via a CloudFormation Custom Resource in our customers’ stacks, we insert a record into a Table. Once a minute, we run a checker Function that checks on the status of certificates while we wait for customers to approve them. That Function selects all the records from the table and checks them in parallel.
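That checker Function might be sketched like this, with a stubbed `fetch_status` standing in for the real call to AWS Certificate Manager:

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch of the once-a-minute certificate checker described above.
# `fetch_status` is a stand-in for a call to AWS Certificate Manager.
def fetch_status(record):
    status = "ISSUED" if record["approved"] else "PENDING_VALIDATION"
    return {**record, "status": status}

def check_all(records):
    """Check the status of every pending certificate request in parallel."""
    with ThreadPoolExecutor(max_workers=8) as pool:
        return list(pool.map(fetch_status, records))

results = check_all([
    {"cert_arn": "arn:aws:acm:us-east-1:111111111111:certificate/1", "approved": True},
    {"cert_arn": "arn:aws:acm:us-east-1:111111111111:certificate/2", "approved": False},
])
```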

One last interesting feature of the Table node is the output port. Table nodes send messages to Function nodes every time a record is inserted, updated, or deleted. This can be an excellent way to trigger side effects like sending an email after a transaction is processed.
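A Function attached to the output port might look like the sketch below. The event shape mirrors DynamoDB Streams, and `send_email` is a hypothetical helper:

```python
# Sketch of a Function wired to a Table node's output port. The event shape
# mirrors DynamoDB Streams; `send_email` is a hypothetical helper.
def send_email(new_image):
    return f"emailed receipt for {new_image['id']['S']}"

def handler(event):
    """React only to freshly inserted records, e.g. to send a receipt email."""
    sent = []
    for record in event["Records"]:
        if record["eventName"] == "INSERT":
            sent.append(send_email(record["dynamodb"]["NewImage"]))
    return sent

out = handler({"Records": [
    {"eventName": "INSERT", "dynamodb": {"NewImage": {"id": {"S": "txn-1"}}}},
    {"eventName": "MODIFY", "dynamodb": {"NewImage": {"id": {"S": "txn-2"}}}},
]})
```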

We’ve found a lot of good use cases for the Table node at Stackery. Give it a spin yourself to see how easy it can make building your next serverless app!


Porting to serverless

Author | Nate Taggart

Unless you have the luxury of building your application from scratch, chances are you’ll face a point where you’ll have to decide on a strategy for migrating your application to a serverless/functions-as-a-service (FaaS) architecture. While some considerations, like support for your application’s language and runtime version, are fairly straightforward, your overall strategy will probably depend on a number of factors including your application’s maturity, architecture, and framework.

Service Oriented Architectures

If you’ve already servicified your application, including decoupling independent functionality, you’re likely in a good position to begin a serverless migration. Typically, you’ll identify a relatively low-risk service to migrate and build confidence and experience with your new FaaS infrastructure. Good candidates for testing serverless architectures are background task queues and internal analytics services which store data for retrieval at a later time.

While this is a relatively simple starting point, you may need to put some consideration into how your service will handle state and how service discovery will work within your distributed system. As you build success with individual services, the task of bringing over the entirety of the application should get easier.


APIs

If your application is predominantly an API, you’re in luck – FaaS architecture is well-suited to API use cases. You have at least two reasonable paths forward for an API migration: decompose your API into individual functions or port your API framework.

Decomposing your API is easy enough: define your routes and connect them to individual functions. This works well if your functions are reasonably transactional and don’t require much interdependency, but it can be time consuming to break apart your application this way, and it increases the complexity of deploying and updating your API.
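A decomposed API boils down to a route table where each entry becomes its own function behind an API Gateway route. A minimal sketch, with illustrative route names and handlers:

```python
# Sketch of route-per-function decomposition. Each handler would be deployed
# as its own Lambda function; names and routes are illustrative.
def get_user(event):
    return {"statusCode": 200, "body": f"user {event['pathParameters']['id']}"}

def create_user(event):
    return {"statusCode": 201, "body": "created"}

# In API Gateway, each entry becomes a separate route -> function integration.
ROUTES = {
    ("GET", "/users/{id}"): get_user,
    ("POST", "/users"): create_user,
}

resp = ROUTES[("GET", "/users/{id}")]({"pathParameters": {"id": "42"}})
```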

Alternatively, depending on your API framework, it may be possible to load the entire API into a single AWS Lambda function, like we did with the Hapi framework. This technique is often easier to maintain and deploy, but does require a bit of handler logic to parse your API request.
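We used Hapi in Node.js, but the shape of the approach is the same in any language: one handler translates the API Gateway proxy event into a request your framework's router understands. Here is a sketch with a stand-in `Router` class in place of a real framework:

```python
# Sketch of the single-function approach: one Lambda handler parses the API
# Gateway proxy event and hands it to an in-process router. `Router` is a
# stand-in for a real framework's dispatcher (the post above used Hapi).
class Router:
    def __init__(self):
        self.routes = {}

    def add(self, method, path, fn):
        self.routes[(method, path)] = fn

    def dispatch(self, method, path):
        fn = self.routes.get((method, path))
        return fn() if fn else {"statusCode": 404, "body": "not found"}

app = Router()
app.add("GET", "/health", lambda: {"statusCode": 200, "body": "ok"})

def handler(event, context=None):
    """Translate the API Gateway proxy event into a framework request."""
    return app.dispatch(event["httpMethod"], event["path"])

resp = handler({"httpMethod": "GET", "path": "/health"})
```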

Monolithic and Legacy Applications

If your goal is to bring a legacy or monolithic application to a FaaS provider, you’re likely in for a bit of a challenge. While it may be theoretically possible to load certain applications in Lambda, it’s likely that the resource requirements, cold boot times, and architectural differences of FaaS infrastructure will make this approach all but impossible.

This could be your opportunity to start to break apart your application into a service oriented architecture. This, of course, takes some effort and planning but brings all of the SOA benefits of increased maintainability, decreased developer ramp-up time, and service reusability. As you break services out of your monolith you can build them onto serverless architecture natively.

On the other hand, if servicification of your application is untenable for any number of reasons, you may find significant benefits in a hybrid infrastructure approach. Using services like AWS ECS or Kubernetes can allow you to run cloud-native applications with higher infrastructure utilization rates, and can help to consolidate your operations and infrastructure strategy across both containers and FaaS.


There are a number of pragmatic paths to migrating your applications onto serverless infrastructure. Each comes with its own considerations, limitations, and level of effort. Organizations making this switch often find that their applications become easier to maintain, quicker to update, and have significantly better infrastructure cost models. In general, the transition is easier than you might expect and gets faster after a little initial experience.

Bastion Nodes For Your Virtual Network

Author | Apurva Jantrania

So you’ve got a Virtual Network set up to secure your resources, fantastic! But sometimes, your users or developers will need access to those secured resources from outside the Virtual Network. Maybe they need to make a quick update to a database, or an unexpected debug session requires a peek into your tables. That’s exactly what a Bastion node is there to do.

The Bastion node allows you to easily grant specific users SSH access to a server inside the Virtual Network that will then let them access your private resources. We also make it easy for you to manage which users have access - all you need is their SSH public key and username, and we will do all the work to create an account on the Bastion server and grant them SSH access. No pesky passwords needed. Users and their keys can even be specified in Configuration Stores, making it easy to manage access from a central location.
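Under the hood, granting a user access amounts to creating an account and installing their public key. A sketch of the commands involved (emitted as strings here for illustration rather than executed; the username and key are placeholders, not Stackery's actual provisioning logic):

```python
# Sketch of the per-user provisioning a Bastion performs: create an account
# and install the user's SSH public key. Values below are placeholders.
def provision_commands(username, ssh_public_key):
    """Return the shell commands that grant `username` key-based SSH access."""
    home = f"/home/{username}"
    return [
        f"useradd --create-home --shell /bin/bash {username}",
        f"mkdir -p {home}/.ssh",
        f"echo '{ssh_public_key}' >> {home}/.ssh/authorized_keys",
        f"chown -R {username}:{username} {home}/.ssh",
        f"chmod 700 {home}/.ssh && chmod 600 {home}/.ssh/authorized_keys",
    ]

cmds = provision_commands("adeveloper", "ssh-ed25519 AAAAC3NzExampleKey adev@laptop")
```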

We are releasing the new Bastion node today. If you need easy access to your Databases and Docker Services hosted inside a Virtual Network, be sure to check it out!

Using Virtual Networks To Secure Your Resources

Author | Chase Douglas

In this post, I’m going to highlight one of Stackery’s more interesting nodes. The Virtual Network node provides a powerful mechanism for securely deploying resources inside a private network.

Why Do We Need Private Networks?

As an example of what a private network enables, let’s take a look at how to secure a database. When you connect to most databases, you provide a username and password to gain access. But some databases are easy to set up without requiring user credentials to gain access. As an example, hundreds of millions of passwords were recently leaked via an unprotected database accessible on the internet. It is unfortunately too easy to misconfigure database security settings when initially setting up the database or when updating or changing settings.

But let’s say you have set up a database with proper credential-based access controls. This sounds like a good amount of security by itself. If you don’t have the proper credentials, you won’t be able to access the database. What could go wrong?

Unfortunately, relying only on credentials for database security presents many problems. If your database is accessible on the internet, people will find ways to break into it or cause other nasty problems. Many databases do not have effective countermeasures against brute force password attacks. You can easily find articles, like this one, demonstrating how to use common tools to perform brute force password attacks on databases.

But even if you use a strong password with a database that uses proper password hashing and salting techniques to prevent brute force attacks from being successful, you can still end up overloaded via a denial of service attack where many malicious clients attempt to connect to your database simultaneously and exhaust available resources.

The solution to these problems is simple: put your databases inside private networks that only your services can access.

Stackery Virtual Network Node

Stackery’s Virtual Network node makes it easy to place your databases and services inside private networks. When you add a Virtual Network node, Stackery creates a Virtual Network with private and public subnets. Resources placed in public subnets can be accessed from the internet, while resources placed in private subnets can only be accessed by other resources within the same Virtual Network.

As an example, when you place a Database node in the Virtual Network, the Database is provisioned inside a private subnet. The same is true of Docker Cluster nodes. But when you place a Load Balancer in the Virtual Network, the Load Balancer is provisioned inside a public subnet. This allows internet traffic to reach the Load Balancer, which then routes traffic to Docker Services running in private subnets of the same Virtual Network. For serverless use cases, Function nodes can also be placed inside Virtual Network nodes to ensure they execute within a private subnet of the Virtual Network.
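Conceptually, the layout looks something like the fragment below, expressed as a CloudFormation-style sketch. The CIDR blocks and names are illustrative, not what Stackery actually provisions:

```python
# Sketch of the subnet layout a Virtual Network node sets up. A real Virtual
# Network involves over a dozen AWS resources; values here are illustrative.
virtual_network = {
    "VPC": {"CidrBlock": "10.0.0.0/16"},
    "Subnets": {
        # Reachable from the internet via an Internet Gateway route
        "Public": {"CidrBlock": "10.0.0.0/24", "MapPublicIpOnLaunch": True},
        # No inbound route from the internet; only in-VPC traffic reaches it
        "Private": {"CidrBlock": "10.0.1.0/24", "MapPublicIpOnLaunch": False},
    },
    # Where each node type lands when placed inside the Virtual Network
    "Placement": {
        "LoadBalancer": "Public",
        "Database": "Private",
        "DockerCluster": "Private",
        "Function": "Private",
    },
}
```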

Stackery Best Practices

The Virtual Network node is another example of how Stackery helps engineers go from concept to implementation using industry best practices. Under the covers, a Virtual Network node is implemented using over a dozen AWS resources. But the magic of Stackery ensures the Virtual Network and all the resources placed inside it are properly networked to provide the right level of security.

