Stacks on Stacks

The Serverless Ecosystem Blog by Stackery.

Stackery 2018 Product Updates

Sam Goldstein | May 16, 2018

Our product engineering team ships every single day.

That means Stackery’s product gets better every single day. Stackery engineers commit code into git, which marches into our continuous delivery pipeline. We promote each version of our microservices, frontend, and CLI through multiple testing environments, rolling shiny new features into production or notifying the team of failures. This is the best way we know to develop modern software and explains why our team is able to ship so much functionality so rapidly.

However, because we’re constantly shipping, we need to pause periodically to take note of new features and improvements. In this post I’ll summarize some of the most significant features and changes from our product team over the past few months. For a more detailed list of changes, you can read and/or follow Stackery’s Release Notes.

Referenced Resource

One of the best things about microservice architecture is the degree to which you can encapsulate and reuse functionality. For example, if you need to check if a user is authorized to perform a certain action, there’s no need to scatter permissioning code throughout your services. Put it all in one place (an AuthorizationService perhaps), and call out to that in each service that needs to check permissions.

Stackery’s Referenced Resource nodes let you reference existing infrastructure resources (be they Lambda functions, S3 buckets, VPCs, you name it) by their AWS ARN and seamlessly integrate these into your other services.

One of the best uses I’ve seen for Referenced Resources is as the mechanism to implement centralized error reporting for serverless architectures. Write one central Lambda function that forwards exceptions into your primary error reporting and alerting tool. Configure every other stack to send error events to this central handler. Voila! Complete visibility into all serverless application errors.

Support for Multiple AWS Accounts

Every company we work with uses multiple AWS accounts. Sometimes there’s one for production and one for everything else. On Stackery’s engineering team, each engineer has multiple accounts for development and testing, as well as shared access to accounts for integration testing, staging, and production. Splitting your infrastructure across multiple accounts has major benefits: you can isolate permissions and account-wide limits, minimizing risk to critical accounts (e.g. production).

However, managing deployment of serverless architectures across multiple accounts is often a major PITA. This is why working across multiple accounts is now treated as a first class concern across all of Stackery’s functionality. Multiple AWS accounts can be registered within a Stackery account using our CLI tool. Each Stackery environment is tied to an AWS account, which maps flexibly onto the vast majority of AWS account usage patterns.

Managing multiple AWS accounts is a key part of most organizations’ cloud security strategy. Stackery supports this by relying on your existing AWS IAM policies and roles when executing changes. If the individual executing the change doesn’t have permission in that AWS account, the action will fail. This makes it straightforward to set up workflows where engineers have full control to make changes in development and testing environments, but can only propose changes in the production account, which are then reviewed and executed by an authorized individual or automation tool.

You can read more in our knowledge base article about working with multiple AWS accounts in Stackery.

CloudFormation Resource Nodes

Sometimes you need to do something a little different, which is why we built custom CloudFormation Resource nodes. You can use these to provision any AWS resource and take advantage of the full power and flexibility of CloudFormation, for situations when that’s required or desirable.

What’s been coolest about rolling this feature out is the variety of creative uses we’ve seen. For example, you can use CloudFormation Resource nodes to automatically configure and seed a database the first time you deploy to a new environment. You can also use them to automatically deploy an HTML front end to CloudFront each time you deploy your backend serverless app. The possibilities are endless.
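As one hedged sketch of the database-seeding idea (the resource and function names here are hypothetical), a CloudFormation custom resource can invoke a seeding Lambda when the stack is first created:

```yaml
# Hypothetical snippet: invoke a seeding Lambda once on stack creation
# (and again only if a property like SchemaVersion changes).
SeedDatabase:
  Type: AWS::CloudFormation::CustomResource
  Properties:
    ServiceToken: !GetAtt SeedFunction.Arn  # Lambda that runs the seed script
    SchemaVersion: 1                        # bump to re-run the seed later
```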

AWS Resource Tagging

Resource Tagging may not be the most glamorous of features, but it’s a critical part of most organizations’ strategies for tracking cost, compliance, and ownership across their infrastructure. Stackery now boasts first class support for tagging provisioned resources. We also provide the ability to require specific tags prior to deployment, making it orders of magnitude easier to get everyone on the same page about how to correctly tag resources.

Always Shipping

Our goal is to always be shipping. We aim to push out valuable changes every day. Customers gain more control and visibility over their serverless applications each day, so they can ship faster and more frequently too. Look out for more great changes rolling out each day in the product, and watch this blog for regular announcements summarizing our progress. We also love to hear what you think, so if you have wants or needs managing your serverless infrastructure, don’t hesitate to let us know.

Building a Reddit Bot with Stackery

Stephanie Baum | January 18, 2018

I’ve always wanted to build a Reddit bot; however, I didn’t want to go through the hassle of actually setting up cloud-based hosting for it to run on. One of the most powerful aspects of serverless architectures is how simple it is to implement a task pipeline. In this case, I created a fully live Reddit bot in about an hour that scrapes the top posts from /r/cooking and emails them to me. It’s easy to see how these atomic tasks can be chained together to create powerful applications. For example, with a bit more work, instead of an AWS SNS topic we could feed the Reddit posts into an AWS Kinesis stream, then attach consumer Lambda functions to the stream to perform analytics. One can see how this applies to a CI/CD pipeline, and in fact we use similar processes in our own serverless continuous integration (CI) and continuous delivery (CD) pipeline. Read more about Stackery’s CI/CD here.

Overview of Components

  • “Timer” node to ping a function, triggering the reddit bot to work once a day
  • “RedditBot” node, a lambda function that once triggered, authenticates with reddit using the snoowrap library and scrapes the hot /r/cooking posts, sending along the good ones via SNS
  • “HotCookingPosts” SNS topic node, an SNS topic that forwards all messages to my email address
Implementation Details

Create a Reddit account for your bot. Then navigate to and select Create App, making sure you select “script” in the radio buttons underneath the name. Note down the client id and client secret; these will go into the function configuration along with the Reddit username and password for your bot account.

Configure a stack using the Stackery dashboard with 3 nodes:

Timer -> Function -> SNS Topic

Attached to the function are some configuration values that are necessary for Reddit’s authentication mechanism. Stackery automatically includes certain information about a function based on what it’s attached to (in this case, the SNS topic). Read more about the output port data here. We can leverage this when specifying the topic node ARN for forwarding on the selected posts, implemented in this file.

Configuration Environment Variables:

Fill in your saved client id, secret, Reddit bot username, and Reddit bot password and store them under environment variable names in the function editor panel. For more information on how to create a deployment environment configuration, visit the Environment Configuration Docs. It’s important not to add these sensitive variables directly, as they would then be committed to GitHub and (depending on your repository settings) exposed to the public. When added via an environment configuration, these key-value pairs are automatically encrypted and stored in an S3 bucket on your AWS account.
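Inside the function, those values can then be read from process.env. The variable names below are assumptions; use whatever names you chose in your environment configuration:

```javascript
// Sketch: build the snoowrap credentials object from environment variables
// configured through Stackery (variable names are hypothetical).
function redditCredentials(env) {
  return {
    userAgent: 'hot-cooking-posts-bot',
    clientId: env.REDDIT_CLIENT_ID,
    clientSecret: env.REDDIT_CLIENT_SECRET,
    username: env.REDDIT_USERNAME,
    password: env.REDDIT_PASSWORD,
  };
}

// In the Lambda, pass process.env:
// const snoowrap = require('snoowrap');
// const reddit = new snoowrap(redditCredentials(process.env));
```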

The “bot” function will receive a timer event which then triggers it to scrape /r/cooking. The timer can be set to trigger every minute while testing functionality; after that, I’d recommend changing it to a saner interval.

The function looks through the hot submissions, and any with more than 50 comments get forwarded to the SNS topic. See the code for this here.
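The selection step itself can be sketched roughly as follows. The field names match snoowrap’s Submission objects; the publishing step is left in comments since it needs live credentials, and `reddit`, `sns`, and `topicArn` are assumed to be set up elsewhere in the function:

```javascript
// Sketch: pick hot submissions worth emailing (the post's threshold: more
// than 50 comments) and shape the SNS message payload.
function selectHotPosts(submissions, minComments = 50) {
  return submissions
    .filter((post) => post.num_comments > minComments)
    .map((post) => ({
      title: post.title,
      url: post.url,
      comments: post.num_comments,
    }));
}

// In the Lambda:
// const posts = await reddit.getSubreddit('cooking').getHot();
// const picked = selectHotPosts(posts);
// await sns.publish({ TopicArn: topicArn, Message: JSON.stringify(picked) }).promise();
```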

You can also insert log statements in your own code to debug the Lambda function via CloudWatch Logs (which you can easily get to from the function’s metrics tab in the Stackery deployments view).

Currently, the code sends a JSON object directly to email. To set this up, navigate to the SNS service in your AWS account, open the topic that Stackery automatically provisioned, and click the Create Subscription button with the Protocol field set to Email and the endpoint set to your email address. For more on the capabilities of SNS, visit Amazon’s SNS Docs.

As you can see, it’s really straightforward to build a Reddit bot (and many other types of bots) using serverless resources and Stackery’s cloud management capabilities. Bots are functionally lightweight by nature, and fit easily into serverless architectures. With the composability of AWS Lambda, they can be orchestrated and chained together to perform a variety of tasks, ranging from emailing scraped posts off Reddit, to managing CI/CD.

Previewable Pull Requests

Anna Yovandich | December 03, 2017

Reviewing changes in a UI as the result of a pull request is a common occurrence in a development team. This typically involves switching the local working branch to the PR branch, compiling a build, viewing it on localhost, then giving functional/behavioral/visual feedback. There are certainly many solutions to alleviate this context and code switch. One we have built and adopted recently uses Stackery as a CI tool to clone, compile, and preview a pull request.

Check out our guide that details how we built it with step-by-step instruction and sample code.

Deploying with the Stackery CLI

Apurva Jantrania | November 16, 2017

Developing functions for serverless can get pretty tedious. While there are some solutions for developing a function locally, at this time they are generally limited in scope and capability, and all too often you will find yourself needing to iterate in a real stack. This can end up being quite slow, involving a number of steps that each take time, though not enough time to let you switch to another task: develop, commit, prepare, deploy, test, repeat. Some of these steps take time that is somewhat unavoidable, but since this is a problem we face often at Stackery, we’ve worked to reduce it with our Stackery CLI tooling.

Stackery enables you to easily design, connect and deploy complicated architectures with an intuitive UI console. However, we recognize that requiring users to switch back and forth from a code editor to a browser to develop, deploy and monitor can be jarring and distracting. That’s why users also have the ability to deploy using Stackery CLI.

Many users have probably only ever used the Stackery CLI when first getting started with Stackery (if this is you, or if you haven’t updated the Stackery CLI in a while, you should first run stackery update). However, the Stackery CLI also lets you easily deploy your stack - just provide it the stack name, environment to deploy into, and a branch name, tag, or commit SHA.

For example, stackery deploy flightTracker dev advancedSearch will deploy the head of the advancedSearch branch of your stack called flightTracker into your dev environment. If you use an editor that has an inline terminal, you can edit, commit, push, deploy and monitor the deployment without ever needing to switch apps. The Stackery CLI also provides the status of the deployment so there’s no longer any need to keep refreshing the AWS CloudFormation console to monitor the status of the deployment.

We’ve been using this capability internally for a while and we’re hoping that you find it as useful as we have. The Stackery CLI also has a number of flags that enable deploying stacks without user input, so we’ve integrated it into our internal CI/CD pipeline, but that’s a topic for another post.

Easy Slack Integration

Chase Douglas | October 09, 2017

Slack is an indispensable tool for many organizations. People have found many uses beyond simple chat communication. One prime example is ChatOps, which enables teams of developers to collaboratively communicate and perform ops actions at the same time in a shared tool.

We have a new guide demonstrating how to build a SlackBot that receives webhook events from Slack, stores data in AWS DynamoDB, and publishes a daily report in a Slack channel. While it’s not hard to set up a trivial Slack integration via other means, Stackery makes it easy to build integrations that involve data stores and the ability to access other resources in your existing cloud services account. Here’s an example stack from the guide showing how Stackery helps you piece together a real-world use case for a Slack integration:

Check out our guide here!

Why Stackery is Launching at ServerlessConf

Nate Taggart | October 03, 2017

A little over a year ago, Chase Douglas (the brains behind New Relic Browser, and a friend and former colleague) reached out to me. He had grown interested in AWS Lambda (as had I), but wondered if it was really ready for production use (as had I). So he pitched me an idea: what if we built operations tools, similar to what we relied on at New Relic, but entirely focused on serverless architectures? He wanted serverless in production.

In some revisionist history, maybe I jumped on board immediately. I didn’t. I had also been tinkering with AWS Lambda and was definitely interested, but I had a great job at GitHub with the Data Science team and was a little reluctant to walk away from the interesting work we were doing. So Chase built a proof-of-concept.

You’d probably have to know Chase to understand, but he’s a genius. He doesn’t do half-measures. When he has an idea, he pursues it relentlessly and he’s built things that I would’ve sworn were impossible. So again, in some revisionist history, I quit and joined him immediately. But again, I didn’t.

Instead, I made Chase a deal: we’d apply to Y Combinator and Techstars. If we got in, I’d go all in with him. When YC flew us down to interview, and Techstars invited us up, I knew I was in trouble. To my credit, I finally jumped in with both feet and together we founded Stackery.

This is when I learned about the serverless community. I had just quit a great job at GitHub to go all in on a startup. I had just given up two-thirds of my salary and doubled my hours and I was scared. So I did something I’ve never done before: I emailed strangers on the internet for help.

And, whoa, did they help. I got on Hangouts with organizers from ServerlessConf. They connected me to some of their previous speakers and attendees. And then they connected me to more people. This community welcomed me in, and it’s been my home ever since.

Which is why six months ago we sponsored ServerlessConf in Austin. We didn’t have a product ready (and so we didn’t have a booth), but we did want to give back to this community in a small way. It was a big part of our budget, but we were so proud to show our support to this community.

This time around, we’re sponsoring again. In fact, we’re increasing our sponsorship because now we can afford to, and we’ll have a booth because we have some cool stuff to show you. We’ve spent the last year working our asses off to build the first Serverless Operations Console, and we’d love to help you and your team run serverless in production.

This community has been good to us. We’re so glad to be a part of it. That's why we're launching at ServerlessConf.

Yes, there’s more work for us to do. Sure, we could’ve kept building and refining and polishing. We know we can keep making this better. And, believe me, we will. But we want you to see it and try it, and so we’re launching now.

Chase and I are betting big on serverless. We’re betting our time and our careers and our livelihoods on this. We hope you’ll bet on us, too, and give Stackery a try.

Introducing the CDN node

Apurva Jantrania | September 01, 2017

With Stackery, you can use the Object Store node to serve files to your users - it provides a simple way to host everything from static websites to large video files. Hosted on Amazon’s S3, you can be assured that the files in the Object Store node will have high reliability. However, users today demand instantaneous access and are more likely than not to leave if your site takes too long to load.

This is where a CDN (Content Delivery Network) comes into play. A CDN provides a large number of geographically distributed servers; your user’s request is routed to the nearest one for a quick turnaround, rather than traveling halfway around the world to the server hosting your site, which can add seconds to the response time.

Today, we are happy to announce the CDN node, which makes setting up a CDN in front of your Object Store trivially easy. We take care of all the work needed to configure CloudFront, connect it to the S3 bucket along with all the permissions, and set up SSL for you.

Just put the CDN node in your stack, connect it to an Object Store node, tell us what domain to run on, and deploy your stack. Once deployed, your site admin will get an email to approve the SSL cert; 10-20 minutes later, your CDN will be fully up and running. The only step remaining is for you to create a DNS record for the CDN.

Introducing Redis Cache Cluster Support

Chase Douglas | March 01, 2017

We are proud to release Cache Cluster support using Redis! Redis is an advanced in-memory key-value store that can be used for caching content and data. For example, Stackery itself uses Redis to cache authentication and account data to deliver fast responses from our REST API service.

The addition of our Cache Cluster node continues our focus on infrastructure best practices. The official Redis security guidelines recommend running Redis only within private networks where trusted clients can connect. Although Redis supports authentication and encrypted connections using TLS, DDoS and brute-force authentication attacks are still possible. Luckily, Stackery makes it super easy to run a Redis Cache Cluster inside the private subnets of a Virtual Network node. This provides access to Function and Docker Service nodes within the same Virtual Network node, while preventing connections from outside the Virtual Network node.

On top of the base functionality of provisioning a Redis node, we also added support for Function nodes to run commands by outputting messages to a connected Cache Cluster:

To run a command, output an Array message with the command and arguments as elements of the array:

output(['set', 'foo', 'bar'])
  .then(() => output(['get', 'foo']))
  .then((responses) => console.log(responses[0])); // Will log the string 'bar'
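Building on that, a common read-through caching pattern can be sketched as follows. Here `output` is Stackery’s message-port function described above (assumed to resolve with an array of responses), and `loadValue` is a hypothetical loader you supply for cache misses:

```javascript
// Sketch: read-through cache for a Function node connected to a Cache Cluster.
// `output` sends a Redis command array and resolves with an array of responses.
async function getCached(output, key, loadValue, ttlSeconds) {
  const [cached] = await output(['get', key]);
  if (cached !== null && cached !== undefined) {
    return cached; // cache hit
  }
  const fresh = await loadValue(key); // cache miss: compute or fetch the value
  // SETEX stores the value with an expiry so stale entries age out on their own.
  await output(['setex', key, ttlSeconds, fresh]);
  return fresh;
}
```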

Give the new Cache Cluster a spin today to improve the performance of your app!

Ready to Get Started?

Contact one of our product experts to get started building amazing serverless applications quickly with Stackery.
