Stacks on Stacks

A serverless/FaaS technology blog by Stackery


Fargate and Cucumber-js: A Review


Author | Stephanie Baum

Lately, here at Stackery, as we’ve begun shipping features into the product more rapidly, we’ve also been shifting some of our focus towards reliability and integration testing. I decided to try out AWS Fargate for UI integration testing using BDD and Cucumber-js in a day-long experimental POC. Cucumber is a behavior-driven development testing framework with test cases written in a language called Gherkin that focuses specifically on user features. AWS Fargate is a recently released abstraction on top of ECS services that gets rid of managing EC2 instances. These are my conclusions:

1. Fargate is awesome. Why would you not use Fargate?

If you’re configuring a Fargate task via the AWS UI, it’s somewhat confusing and clumsy. With Stackery, you can configure Fargate while avoiding the pain of the AWS UI entirely. Communication from AWS Lambda to a Fargate task is the same as it would be for a normal ECS service, so moving existing ECS clusters/services to Fargate is straightforward from an application-logic perspective. Here’s a simplified code snippet; dockerTaskPort refers to the conveniently provided Stackery Port environment variable. See our docs for the Docker Task node for more information.

  const AWS = require('aws-sdk');

  // `token` and `dockerTaskPort` come from configuration (elided in this simplified snippet)
  const repoName = `cross-region-us-east`;
  const browserCiRepo = `https://${token}@github.com/sbaum1994/${repoName}.git`;

  const dockerCommands = [
    `echo 'Running node index.js'`,
    `node index.js`
  ];

  const env = {
    ENV_VAR: 'value'
  };

  let dockerCommand = ['/bin/bash', '-c', dockerCommands.join('; ')];

  const params = {
    taskDefinition: dockerTaskPort.taskDefinitionId,
    overrides: {
      containerOverrides: [
        {
          name: '0' // the container name from the task definition
        }
      ]
    },
    launchType: 'FARGATE'
  };

  // Fargate tasks require awsvpc networking, so subnets must be supplied
  params.networkConfiguration = {
    awsvpcConfiguration: {
      subnets: dockerTaskPort.vpcSubnets.split(','),
      assignPublicIp: (dockerTaskPort.assignPublicIPAddress ? 'ENABLED' : 'DISABLED')
    }
  };

  params.overrides.containerOverrides[0].command = dockerCommand;

  // Pass per-run environment variables to the container
  params.overrides.containerOverrides[0].environment = Object.keys(env).map((name) => {
    return {name, value: env[name]};
  });

  const ecs = new AWS.ECS({ region: process.env.AWS_REGION });
  return ecs.runTask(params)...

It’s a nice plus that there are no EC2 configurations to worry about, and it also simplifies scaling. In the past we’ve had to use an ECS cluster and service for CI when the integration testing was too long-running for AWS Lambda. Here, my Fargate service just scales up and down nicely without my having to worry about configuration, bottlenecks, or cost.

Here’s my UI integration testing setup, triggered by an endpoint that specifies the environment to test.

With Fargate there is still technically an ECS cluster that needs configuring at setup time, along with a load balancer and target group when you use them. You are still creating a task definition, containers, and a service. Stackery’s UI makes this easy to understand and configure, but if I were doing it on my own I’d still find it a pain. Furthermore, I could see Fargate not being ideal in some use cases, since you can’t select the EC2 instance type.

Stackery UI setting up Fargate:

2. Cucumber is pretty cool too. BDD creates clear tests and transparent reporting.

I really like the abstraction Cucumber provides between the test definitions and underlying assertions/implementations. For this POC I created a simple “login.feature” file as follows:

Feature: Login
  In order to use Stackery
  As a single user
  I want to login to my Stackery account

  Background:
    Given I've navigated to the Stackery app in the browser
    And it has loaded successfully

  Scenario: Logging in as a user with a provider set up
    Given a test user account exists with a provider
    When I login with my username and password
    Then I'm taken to the "Stacks" page and see the text "Select a stack"
    And I see the "Stackery Stacks" section populated in the page
    And I see the "CloudFormation Stacks" section populated in the page

Each step maps to a function that uses Selenium WebDriver on headless Chrome under the hood to run the tests. I also pass in configuration that lets the test know what the test account’s username and password are, which Stackery environment is being tested, and other definitions like the timeout settings. In my pipeline, I also added an S3 bucket to hold the latest Cucumber reporting results for visibility after a test finishes.
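
To give a sense of how the pieces fit together, here’s a minimal sketch of what one of these step definitions might look like, assuming cucumber-js with the selenium-webdriver package; the APP_URL environment variable and the XPath selector are illustrative placeholders, not the actual implementation:

  const { Given, Then } = require('cucumber');
  const { Builder, By, until } = require('selenium-webdriver');
  const chrome = require('selenium-webdriver/chrome');

  let driver;

  Given('I\'ve navigated to the Stackery app in the browser', async function () {
    // Launch headless Chrome and load the environment under test
    driver = await new Builder()
      .forBrowser('chrome')
      .setChromeOptions(new chrome.Options().headless())
      .build();
    await driver.get(process.env.APP_URL); // placeholder for the environment URL
  });

  Then('I\'m taken to the {string} page and see the text {string}', async function (page, text) {
    // Wait until the expected text is rendered before passing the step
    await driver.wait(until.elementLocated(By.xpath(`//*[contains(text(), "${text}")]`)), 30000);
  });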

Report generated:

Overall, I think this can be a great way to keep adding new features while maintaining existing ones and making sure everything is regression-tested on each merge. Furthermore, it’s clear, organized, and user-flow oriented, which works well for a dashboard-style app like ours with multiple repeatable, extensible steps (Create Environment, Deploy a Stack To Environment, etc.).


To Do Serverless Right, You Need A New IAM Approach


Author | Nate Taggart

Identity and Access Management (IAM) is an important tool for cloud infrastructure and user management. It governs access control for both cloud services and users, and can incorporate features around auditing, authentication policies, and governance.

Use of IAM involves a multiple-step process of creating roles and permissions and then assigning those roles and permissions to users, groups, and resources. In static (or relatively stable) environments, like those on legacy infrastructure, this is a task that can be configured once and only periodically updated. For a critical, once-and-done task like this, responsibility has historically fallen to a highly privileged operations team that could develop IAM permissioning as a core competency. In serverless environments, however, manual provisioning and assignment of IAM roles and permissions can have a dramatically negative impact on team velocity – one of the key advantages of serverless infrastructure.

Serverless Velocity and IAM

Serverless infrastructure is highly dynamic and prone to frequent change. As developers write functions for deployment into a FaaS-style architecture, they’re fundamentally creating new infrastructure resources which must be governed. Since these changes can occur several times per day, waiting for an operations team to create and assign IAM policies is an unnecessary and costly bottleneck in the application delivery cycle.

As a further challenge, FaaS architectures are difficult (if not impossible) to recreate in local environments. This means that the development cycle is likely to involve iterating and frequently deploying into a development account or environment. Having an operations team manually create IAM policies in the course of this development cycle is prohibitively challenging.

These bottlenecks notwithstanding, IAM policies continue to play a critical role in security, governance, and access control. Organizations must find a way to create and assign IAM policies without blocking the product development team from their high-velocity serverless application lifecycle.

The New Serverless IAM Strategy

There are generally two approaches to IAM policy-making for serverless. The first is to extend the responsibility from your specialized operations team to your entire development group. This approach has a number of drawbacks, including the need for extensive training, increased human-error risk, a reduction in development velocity, and a broad extension of access which dramatically reduces control.

The second, and preferred, solution is to automatically provision IAM policies based on a rule-set of best-practices and governance standards. In this scenario, a company would either develop their own release tooling or purchase a pre-built solution like Stackery’s Serverless Operations Console. This software would then be responsible for encapsulating principles of “Least Privilege,” environment management, policy creation, and policy application for all serverless stacks.
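
As a rough illustration of what such tooling does under the hood, here is a minimal sketch of programmatically attaching a least-privilege policy to a function’s execution role, assuming the AWS SDK for JavaScript; the role name, table, and account details are all hypothetical:

  const AWS = require('aws-sdk');
  const iam = new AWS.IAM();

  // Hypothetical least-privilege policy: this function may read and write
  // exactly one DynamoDB table in the dev environment, and nothing else
  const policyDocument = {
    Version: '2012-10-17',
    Statement: [{
      Effect: 'Allow',
      Action: ['dynamodb:GetItem', 'dynamodb:PutItem'],
      Resource: 'arn:aws:dynamodb:us-east-1:123456789012:table/orders-dev'
    }]
  };

  // Attach the inline policy to the function's (hypothetical) execution role
  iam.putRolePolicy({
    RoleName: 'orders-function-dev-role',
    PolicyName: 'orders-table-access',
    PolicyDocument: JSON.stringify(policyDocument)
  }).promise();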

In this way, your product engineering team can focus on developing code and can have permissions to provision their services into development environments which are automatically sandboxed and isolated. Once development is complete, this software can promote the new service into a new sandboxed environment for integration testing and QA. Your CI/CD pipeline can continue to promote the service all the way to production, using appropriate roles and permissions at each step, thereby ensuring both IAM policy compliance and high velocity through automation.

This automatic creation and assignment of IAM policies reduces the risk for human error, ensures that resources are appropriately locked down in all stages of release, and encapsulates DevOps best practices for both high velocity and consistent control.

If you’re still manually creating and assigning IAM policies in your serverless development cycle, I encourage you to consider the advantages of modernizing this workflow with specialized serverless operations software.


Diversity Tips for Startups


Author | Sam Goldstein

Originally published on hirediversity.us/blog

It’s no secret that tech has a diversity problem, and over the last several years an increasing number of tech companies have been working to improve this. There are great resources online about how to approach diversity and inclusion, for example projectinclude.org, which provides recommendations for building an effective Diversity & Inclusion (D&I) program. However, a lot of the information is geared towards large companies, which makes it challenging for early-stage companies to find actionable advice on inclusion and diversity. Over my career I’ve worked at a variety of software companies ranging from 3 to 1200 people, and I’ve seen a lot of successful and unsuccessful attempts at D&I in that time. In my current role, leading an early-stage product engineering team at Stackery, we’re building inclusiveness into our company from the earliest stages. So how should a small startup approach recruiting and retaining a diverse team? How do we create a company environment where people of all genders and backgrounds will feel empowered and excel? Here are some of the practices that are most relevant to leaders at early-stage companies:

1. Start Early

When you’re starting a company from scratch you have to prioritize constantly. You’re building a product, finding customers, courting investors, hiring a team, providing customer support, looking for offices, doing taxes, writing docs. It’s easy to convince yourself diversity can wait. A lot of the best practices (e.g. develop an effective employee handbook) don’t really make sense at an early stage when you’re focused on recruiting your first few employees. However, two of the most important responsibilities of every startup leadership team are to hire a strong team and build a strong company culture. You should be focused on these priorities from the earliest stages of your company, and these two areas, hiring and company culture, are where it makes the most sense for startup leaders to focus their diversity and inclusion efforts.

2. Rewrite Your Job Postings

One effective way to attract more diverse applicants is to look closely at the language in your job posting. The way you present your company and team will have a big impact on who applies. Avoid using language that tends to skew the applicant pool male, like overemphasis on how hard your technical problems are or how aggressively you pursue your goals. An effective technique is to describe the team environment, the company culture, and the technical stack. Talk about how you work together and what you value. Every applicant is interested in what the day to day environment will be like, and this takes on additional importance for individuals who don’t fit the typical white male programmer mold.

Avoid describing your ideal candidate or listing requirements. This encourages many potential applicants to disqualify themselves. Candidates used to having people consistently assume they’re “not that technical” (which is very common for underrepresented candidates) are even more likely to skip past your posting and move on. I’ve found it’s useful to explicitly encourage candidates to apply, even if they’re not sure they’re qualified.

3. Plan Your Interview Process

A big part of encouraging inclusiveness and diversity is discussing it with your team. Planning an interview is one perfect opportunity to do this. Communicate why D&I is important to you and the company, and how that factors into your hiring practices. Your team should be discussing what’s being assessed in each part of the interview process, since without a shared understanding of the criteria for the hiring decision you’ll be relying primarily on unconscious bias. Make sure you’re coaching your team to avoid vague statements when giving feedback. Statements like “wouldn’t fit in” or “doesn’t seem that technical” often mask unconscious biases. Make sure your team grounds their feedback in concrete observations (e.g. “was able to implement the program, but struggled to implement optimization X,” “interrupted and talked over me repeatedly”). Encourage your team to ask themselves “what does this person bring that we don’t already have?” and “how would this person add to our company culture?” A group with diverse skills, strengths, and weaknesses will be more resilient than one where everyone shares similar strengths and blind spots.

Structure your interview process to avoid putting candidates on the spot. Interviews are stressful for the candidates and different people show stress in different ways. Your goal is to assess whether the candidate will succeed in the role, not whether they speak eloquently under pressure while discussing CS 101 concepts with a whiteboard. Many people will get flustered and freeze up in these situations. Does this mean they’re bad programmers? No, it doesn’t. In addition, when you consider societal factors like women being perceived as “pushy” instead of “confident” when they strongly state an opinion, it is even more important to think through the way you structure interactions in the interview process. Ideally you should be telling the candidate what to expect and how they should prepare so they can put their best foot forward throughout your process.

4. Plan Inclusive Activities

There’s a lot of data which shows that underrepresented people leave the software industry at higher rates than white males. Why? A lot of it boils down to tiny things that happen every day that indicate to an employee that they don’t belong or don’t fit in. This is why creating an inclusive culture and work environment is a critical part of promoting diversity in your company. One thing that many young companies get wrong is planning team-building exercises and social activities which unintentionally make some employees feel excluded. Look for activities that can be enjoyed by individuals with a wide range of physical abilities, personalities, ages, ethnicities, religions, and sexual orientations. Avoid highly physical activities which some people can’t participate in. Avoid venues that have a likelihood of making anyone feel unsafe or uncomfortable. Minimize off-hours activities which may be challenging for employees with children or other caregiver responsibilities. Make sure that if alcohol is available it isn’t the primary focus and consider the impact on employees who have experienced addiction. Even if everyone on your current team is really into paintball and brewskis, the effort you put into ensuring work-related activities are inclusive will help you attract and retain diverse individuals.

5. Talk About Diversity & Inclusion

One of the first rules of management is that if something is important, talk about it a lot. People look to their leaders for cues on what they should care about and to understand what’s valued by the company. One-on-one meetings are an excellent opportunity to emphasize the importance you place on building an inclusive environment. Ask for feedback and suggestions. Explain why diversity and inclusion matter to you. Encourage employees to share with you (or other leaders) if and when they encounter uncomfortable situations. Keep in mind that many employees may feel uncomfortable sharing situations where they felt excluded or unwelcome for fear of being ostracized or further excluded, so it’s important to build a strong foundation of trust, and emphasize that any concerns they do share will be handled thoughtfully. Meetings related to hiring and team activities also provide great opportunities to provide updates on steps you’re taking to promote D&I, to solicit input from your team members, and to reiterate the importance of building a strong and welcoming company culture.

Starting Inclusively

At Stackery, our leadership team made the decision to emphasize inclusiveness from day one. We believe this is not only the right thing to do, but that it makes us a stronger team. It is a core component of our strategy for building a successful growth business. We’re striving to be a company where people of every flavor can see people like themselves playing important roles and succeeding.

Leaders at startups today have the opportunity to sidestep the diversity problems that plague the majority of tech companies. There’s more awareness and useful info available than ever before on how to solve tech’s diversity problems. It won’t happen overnight. It will require the hard work of many people over many years. But, if you’re in a leadership role at an early stage company you have the potential to avoid the all-too-common situation, where you wake up one morning to realize you’re a company of 50 or 100 or 250 white men with a diversity problem. Instead you can build intentionally towards a better future where people of all shades, shapes, and backgrounds can feel welcome, contribute in meaningful ways, and achieve incredible results. I hope you’ll find these tips helpful for encouraging inclusion and diversity at your startup.


How Do You Know What Customers Want?


Author | Susan Little

Now that serverless technology enables building and releasing applications without spending precious time coordinating infrastructure changes, enterprises in particular are finding it easier to deliver a stream of innovative features, updates, and products to market faster. And while Stackery is developing a product to uniquely solve the operational challenges serverless brings to enterprises, we've often wondered: how do you capture what's on your customers' minds amidst all of this rapid development?

Do you know what your customers are really passionate about? What truly keeps them up at night? We probably won't know for sure until we hear the passion in their voices as they talk about their problems.

That is why I'm taking a closer look at what a Customer Advisory Group – a group of customers who advise you on topics ranging from industry trends to business priorities and product direction – can do to provide even better insight into what customers want.

I've had the opportunity to start and lead Customer Advisory Groups, and I've found that engaging key customers in a more structured approach can go a long way in building strong, loyal customers and learning more about what is actually important to them.

Since serverless technology enables you to rapidly prototype new product innovations, including multiple directions at once, one of the greatest benefits of a Customer Advisory Group is the ability to share these prototypes for validation. It’s a great way to get early and quick feedback on your product from your customers.

Additional benefits of a Customer Advisory Group include:

  • Direct, unfiltered, and candid feedback on all aspects of how your company engages within your marketplace – your products, people, and services
  • Early warnings of shifts in customer/market requirements and emerging opportunities
  • New Product Development feedback that can drive innovation
  • Critical insights into both the obvious and the below-the-surface problems customers may experience
  • Intelligence on competitors' tactics and strategies - what's working and what's not

I've found the key to success in setting up a program is to define your mission, identify the benefits to your customers, and determine the meeting cadence and follow-up. Equally important is identifying who might be the best fit in an advisory role. Look for leaders within your customers' organizations who are willing to express their point of view and also represent the views of other customers.

Here are three important steps in starting a program:

1. Define your mission

You'll need to gather input from your internal stakeholders and craft a mission for the group. Describing your mission helps explain the group's purpose to your customers and it can also help to set expectations internally.

Creating a mission is helpful to customers and your internal teams:

2. Explain what's in it for your customer

It is important to explain to your customers how they will benefit from being your advisors, since they are carving out valuable time from their busy schedules. Oftentimes you might be hesitant to ask too much of your customers because you don't want to bother them; however, it's important to remember that your customers' success is directly tied to your success – so they will more than likely be interested in joining forces with you.

Highlighting the key benefits will inspire customers to join the group:

3. Get Buy-In

It's important for your customer to understand what is expected of them. Ideally, you'll have face-to-face sessions in addition to virtual sessions. Hosting these events is a great way to give back to your Customer Advisory Group - customers love it because it's a great way to meet their peers and discuss business challenges in person.

The cadence of the meetings can be flexible – I've found Customer Advisors willing to meet more often to provide input into the product development process. This is helpful with the accelerated development enabled by serverless technology.

Sharing expectations builds buy-in, faster:

Lastly, creating and distributing a Customer Advisory Group meeting summary is important so contributions are captured and expectations are set for any follow-up items.

Conclusion

It is exciting to see the benefits of serverless technology and how Stackery is helping enterprises find an easier way to predictably spin up new environments, automate the build and deploy process, and gain operational control and a line of sight into the health of their serverless applications. Adding a Customer Advisory Group to the mix of this rapid product delivery cadence means we can truly delight our customers.

If you are interested in joining our Customer Advisory Group, please feel free to reach out to me at: susan@stackery.io.


Why All The Monolithic Serverless API Hate?


Author | Chase Douglas @txase

A schism exists in serverless land. There are about equal numbers of people on two sides of an important architectural question: Should your APIs be backed by a monolithic function or by independent functions for each endpoint? To the serverless ecosystem’s credit, this schism hasn’t devolved into warring factions.

What fights are like in the serverless ecosystem

That said, some have rationalized splitting API functionality up into independent functions with arguments that boil down to a combination of the following:

  • We can now split functionality into nano-services like never before, so why not?
  • Justifications on how independent functions are actually easier to track, even though we all probably agree that most tools to track the explosion of serverless resources are still lacking in maturity*
  • Reasons why monolithic serverless functions are bad based on code architecture preferences or long cold start times due to inefficient implementations

(*Shameless plug for how Stackery can help with this)

Because the first two arguments are fairly weak, the main justification for APIs backed by independent functions is predicated on perceived problems with monolithic APIs more than on why APIs backed by independent functions are better. I’ve yet to see a good argument for why all APIs should be broken up into independent functions. Just because it’s possible for monolithic architectures to have problems doesn’t prove that swinging to the opposite extreme is ideal.

There are certainly limits to the monolithic approach, but these limits are more due to Conway’s law than technical architecture. Conway’s law suggests that a reasonable approach would be to split an API up into independent functions when there are separate teams managing different parts of the API.

Some worry that cold start times of a monolithic function may be worse due to needing to initialize all the components of the API at once. However, in every language there are reasonable strategies for reducing cold start time. For example, both Node.js and Python make lazy-loading of functionality easy, while Go has low cold start times simply because it’s a fully compiled executable.
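
As an illustration, here’s a minimal sketch of lazy-loading in a monolithic Node.js handler; the route names and the listItems/createItem modules are hypothetical:

  // Route table populated on demand so a cold start only loads the code
  // for the endpoint actually being hit
  const handlers = {};

  exports.handler = async (event) => {
    const route = `${event.httpMethod} ${event.resource}`;

    if (!handlers[route]) {
      switch (route) {
        case 'GET /items':
          handlers[route] = require('./listItems'); // hypothetical module
          break;
        case 'POST /items':
          handlers[route] = require('./createItem'); // hypothetical module
          break;
        default:
          return { statusCode: 404, body: 'Not found' };
      }
    }

    return handlers[route].handler(event);
  };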

This naturally leads to a broader discussion of the effect of function architecture on cold starts. Thankfully, most API use cases are forgiving of cold start latency, but it’s not something that can be ignored entirely. For example, we are all aware of the famous studies showing how latency can have an extraordinary impact on revenue for e-commerce. In general, almost everyone building a public serverless API should be monitoring for cold starts in some form.

Yan Cui (aka The Burning Monk and all around brilliant developer) recently wrote that monolithic serverless functions won’t help with cold starts. He makes a valid point that at scale, cold starts become noise. It’s true as well that we will never get rid of cold starts without help from the service providers. But the main thrust of his argument is that you will have the same number of cold starts whether you use a monolithic function or independent functions.

However, there is one incorrect assumption underlying the argument. Yan puts forward an API where every endpoint is hit with the same amount of traffic in order to analyze the effects of cold starts. In reality, almost no APIs have uniform traffic across all endpoints. Most API endpoint traffic follows something like a natural power-law distribution: a few endpoints will have a high amount of traffic, but most will have much less, and a few will have very little.

When your API is backed by one monolithic function, the cold starts are spread out among all API requests in proportion to their throughput. In other words, the percentage of requests that trigger a cold start will be the same for all endpoints.

Now let’s examine the implications for APIs backed by independent functions. Imagine you have an endpoint that is hit with 1000 requests per hour, and one that is hit with 5 requests per hour. Depending on the cold start rate for your function, you may find that while you have very few cold starts for the high throughput endpoint, almost every request to the low-throughput function causes a cold start.

Maybe it is ok for your API to have different rates of cold starts per endpoint. But for many APIs this is problematic. Imagine your API has endpoints for both listing items and creating items, where listings are requested much more frequently than item creation requests are. You may have a service-level agreement on latency to be met by each endpoint. It would be better to spread cold starts across all endpoints in this scenario.

While it’s possible to use triggers to keep functions warm, if you have one monolithic function it is much easier to keep it warm than it is to keep many independent functions warm. And, contrary to popular belief, there are meaningful ways to warm functions even at high throughputs, though I’ll leave that discussion for another post.
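
To make the warming idea concrete, here’s a minimal sketch of one common pattern, assuming a scheduled CloudWatch Events rule pings the function periodically; handleApiRequest stands in for the real routing logic:

  // Minimal warming sketch: a scheduled CloudWatch Events rule invokes the
  // function every few minutes to keep a container warm
  exports.handler = async (event) => {
    if (event.source === 'aws.events') {
      // Warm-up ping from the scheduled rule: return before doing any real work
      return { warmed: true };
    }

    return handleApiRequest(event); // hypothetical router for real API traffic
  };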


All architectural choices come with trade-offs. But the choice between monolithic and independent API functions is a false dichotomy. There’s actually a broad spectrum between all functionality held in a single monolithic function and every three-line helper deployed as a separate microservice. Neither extreme is desirable in all cases, which is why the arguments against one or the other are often weak or nonsensical. What folks should be doing is considering how they determine the appropriate boundaries for their API components, and how they manage those boundaries over time as their total lines of code, architectural complexity, and number of people involved grow.


Quickly iterating on developing and debugging AWS Lambda functions


Author | Apurva Jantrania

Recently, I found myself having to develop a complex Lambda function that required a lot of iteration and interactive debugging. Iterating on Lambda functions can be painful due to the amount of time it takes to redeploy an update, and trying to attach a debugger to Lambda just isn’t an option. If you find yourself redeploying more than a handful of times, the delay introduced by the redeployment process can feel like watching paint dry. I thought I’d take this opportunity to share some of my strategies for alleviating the issues I’ve encountered developing and debugging both simple and complex Lambda functions.

I find that it is always useful to log the event or input (depending on your language of choice) for any deployed Lambda function. While you can mock this out (and should for unit tests!), I’ve found that having the full event is critical for some debug cases. Even with AWS X-Ray enabled on your function, there usually isn’t enough information to recreate the full event structure. Depending on your codebase you may want to also log the context object, but in my experience this isn’t usually necessary.

Method 1: A quick and dirty method

With the event logged, it is straightforward to build a quick harness to run the failure instance locally in a way that is usually good enough.

Let’s look at an example in Python. If, for example, our handler is handler() in my_lambda.py:

def handler(message, context=None):  # Lambda invokes handlers with (event, context)
    print('My Handler')
    print(message)
    # Do stuff

    # Error happens here
    raise Exception('Beep boop bop')

    return None

First, open your CloudWatch logs for the Lambda function (if you are using Stackery to manage your stack, you can find a direct link to your logs in the deployment panel) and capture the message that the function printed.

Then, we can create a simple wrapper file tester.py and import the handler inside. For expediency, I also just dump the event into a variable in this file.

import my_lambda

message = {
  'headers': {
    'accept': '...',
    'accept-language': '...',
    # ...
  }
}


my_lambda.handler(message)

With this, you can quickly iterate on the code in your handler with the message that caused your failure. Just run python tester.py.

There are a handful of caveats to keep in mind with this implementation:

  • ENV vars: If your function requires any environment variables to be set, you’ll want to add those to the testing harness.
  • AWS SDK: If your Lambda function invokes any AWS SDKs, they will run with the credentials defined for the default profile in ~/.aws/credentials, which may cause permission issues.
  • Dependencies: You’ll need to install any dependencies your function requires.

But with those caveats in mind, I find this is usually good enough and is the fastest way to replicate an error and iterate on Lambda development.

Method 2: Using Docker

For the times you need to run in a sandboxed environment that is identical (or as close as possible) to Lambda, I turn to Docker with the images provided by LambCI.

When debugging and iterating, I find that my cycle time is sped up by using the build versions of the LambCI images and running bash interactively. E.g., if my function is running on Python 2.7, I’ll use the lambci/lambda:build-python2.7 image. I prefer launching into bash rather than having Docker run my Lambda function directly because otherwise any dependencies will need to be downloaded and installed on each run, which can add significant latency.

So in the above example, my command would be docker run -v /path/to/code:/test -it lambci/lambda:build-python2.7 bash. Then, once bash is loaded in the Docker container, I do the following:

  1. cd to the test directory: cd /test
  2. Install your dependencies
  3. Run the tester: python /test/tester.py

Since docker run is invoked with the -v flag to mount the handler directory inside the container as a volume, any changes you make to your code will immediately affect your next run, enabling the same iteration speed as Method 1 above. You can also attach a debugger of your choice if needed.

While this method requires some setup of Docker and thus is a little more cumbersome to start up than Method 1, it will enable you to run locally in an environment identical to Lambda.

