
How to solve the testing challenges that come with serverless apps

Glenn Buckholz, Technical Manager, Coveros
 

Serverless technologies can go a long way toward helping organizations with certain application needs avoid the complexity of infrastructure management. You can just throw code at the cloud and have an application. But implementing serverless apps has consequences.

Because serverless integrates tightly with your cloud provider, and because you no longer have direct access to the machine that executes your code, you have less control over the environment in which you work.

There are several implications here that directly affect testing. To properly onboard a serverless application in your testing organization, you need to understand how testing serverless apps deviates from traditional application testing and how to adapt your current techniques to this new platform.

Here's how to best approach testing challenges with serverless apps.

Serverless testing: The good and the bad

The good—and bad—part about going serverless is the lack of infrastructure that you’ll need to maintain. If you have no native infrastructure, then there is no machine from which admins can get logs. So when you find an application deficiency, you can no longer ask the sysadmin to give you the logs. There are none.

For this reason, both testers and developers now need limited access to the cloud console to do their jobs. The console is no longer the sole province of the cloud architect or cloud administrator. You need a practical approach to testing serverless applications, and the best way to understand that is by looking at a theoretical application and diving into the details of how to test it.

A serverless test case scenario

Consider the case of a two-tier web application that runs in Amazon Web Services (AWS). The web layer is driven by API Gateway, S3, and Lambda (Node.js for this example), with DynamoDB as the database. API Gateway is a managed, high-capacity web front end that routes incoming traffic to almost anything in your AWS account. S3 houses the static content (images, static HTML, and so on), Lambda represents all of your active server-side content, and DynamoDB works with the Lambdas as the persistence layer. There are other serverless features you could add to this example, but let's keep it simple for now.
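To make the example concrete, here is a minimal sketch of what the "active server-side content" might look like: a Node.js Lambda handler behind an API Gateway route that reads one item from DynamoDB. The route, table name, and environment variable are illustrative, not part of any particular production design.

// Minimal Lambda handler: API Gateway proxy event in, one DynamoDB item out.
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient, GetCommand } = require('@aws-sdk/lib-dynamodb');

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

exports.handler = async (event) => {
  // With the API Gateway proxy integration, path parameters arrive on the event.
  const id = event.pathParameters && event.pathParameters.id;

  const { Item } = await ddb.send(new GetCommand({
    TableName: process.env.TABLE_NAME, // supplied by the infrastructure template
    Key: { id },
  }));

  return {
    statusCode: Item ? 200 : 404,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(Item || { message: 'not found' }),
  };
};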

Now that the application is defined, here’s how you pull it apart and test it. 

Serverless infrastructure

The first difference you'll find is that the infrastructure is now part of the application. It's defined in JSON or YAML, and that template tells your cloud provider what resources to stand up and what code to put in them, automatically.
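For example, a stripped-down SAM template (YAML flavor) for the handler sketched earlier might look something like the following; the resource names, paths, and runtime version are illustrative.

# Minimal SAM template: one Lambda behind an API Gateway route, plus a DynamoDB table.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  GetItemFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/                      # where the Node.js handler lives
      Handler: index.handler
      Runtime: nodejs18.x
      Environment:
        Variables:
          TABLE_NAME: !Ref ItemsTable
      Policies:
        - DynamoDBReadPolicy:
            TableName: !Ref ItemsTable
      Events:
        GetItem:
          Type: Api                      # SAM stands up the API Gateway route automatically
          Properties:
            Path: /items/{id}
            Method: get

  ItemsTable:
    Type: AWS::Serverless::SimpleTable   # a DynamoDB table with an "id" string key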

You can check the entire provisioned application architecture into a code repository for auditing and traceability. You can also create a simple end-to-end test so that any change to the architecture checked into your repository gets tested quickly. Now you have a quick, reliable way to vet changes to the structure of your application.

Logging

In addition to dealing with application infrastructure, you face the issue of logging. Since you don’t have access to the underlying hardware, virtual or otherwise, you must rely on CloudWatch, Amazon’s monitoring service for AWS cloud resources, to provide information about what is going on in your application.

There is one big disadvantage here: You will be able to see only what your cloud provider, Amazon in this case, lets you see. This limits the type of troubleshooting you can do to application-specific errors.

If there is a problem with the underlying platform, there’s unlikely to be any information in the logs about that. In addition, through your application configuration, you need to tell CloudWatch to aggregate your logs to keep them organized. Although CloudWatch will group everything from your application into a few buckets, you need a more granular breakdown to quickly find individual component messages. 
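One low-effort way to get that granular breakdown is to emit structured JSON log lines from each Lambda, since anything the function writes to stdout or stderr ends up in its CloudWatch log group and can then be filtered by field. A sketch of such a helper follows; the component name and fields are illustrative.

// Hypothetical logging helper: structured JSON makes it easy to filter a shared
// CloudWatch log group down to a single component or request.
const COMPONENT = 'get-item';

function log(level, message, extra = {}) {
  // The Lambda runtime ships console output to CloudWatch Logs.
  console.log(JSON.stringify({
    timestamp: new Date().toISOString(),
    component: COMPONENT,
    level,
    message,
    ...extra,
  }));
}

// Usage inside a handler:
// log('error', 'item lookup failed', { requestId: event.requestContext.requestId });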

Now that you've sorted the logs, you can look at some familiar elements, such as the source code. To create a valid test, you can't just use the node interpreter to execute your Lambda. The interpreter by itself is only part of what executes your code; you need to replicate the execution environment, with all the default libraries and nuances of how things will behave at runtime in AWS.

For this you'll need the AWS Serverless Application Model (SAM), whose CLI replicates the Lambda execution environment locally using Docker. All you need to do is configure SAM locally and in your CI environment, and you'll have a platform you can use to run unit and integration tests against your code. The major difference in running tests locally is that you must execute your code in the cloud provider's runtime rather than the native compiler or interpreter if you want accurate results.
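As a rough sketch, the local SAM workflow looks something like this; the function name and event file are placeholders for whatever your template defines.

# Package the functions described in template.yaml (runs npm install for Node code).
sam build

# Run a single Lambda once, inside the Docker-based Lambda runtime, with a canned event.
sam local invoke GetItemFunction --event events/get-item.json

# Serve the API Gateway routes locally so integration tests can hit them over HTTP.
sam local start-api --port 3000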

Coverage is an important metric that helps testers determine the completeness of the testing effort. In order for coverage to work and be accurate, you need to execute the coverage instrumentation framework inside SAM. In the early days of serverless, you needed a serious amount of duct tape to get everything to work together, or you had to make compromises surrounding your runtime environment.

Fortunately, Mocha and Istanbul, the Node.js test and coverage tools, now have a mode that deals specifically with serverless applications. With a little bit of extra configuration on the SAM side to include Mocha, you can run your test suite against SAM and get accurate results. The key for coverage is to ensure that your tool set has a mode for working with serverless execution engines.
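A Mocha integration test pointed at the locally running SAM API might look like the sketch below. The route and assertions are illustrative, Node 18 or later is assumed for the built-in fetch, and exactly how you wire coverage through the SAM container will depend on your toolchain.

// Hypothetical integration test, run after: sam local start-api --port 3000
const assert = require('node:assert');

const BASE_URL = process.env.SAM_LOCAL_URL || 'http://127.0.0.1:3000';

describe('GET /items/{id} via local SAM', function () {
  this.timeout(30000); // the first invocation pulls the Lambda runtime image, so allow time

  it('returns the stored item as JSON', async function () {
    const res = await fetch(`${BASE_URL}/items/1`); // global fetch requires Node 18+
    assert.strictEqual(res.status, 200);
    const body = await res.json();
    assert.strictEqual(body.id, '1');
  });
});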

Those are the differences to look for in serverless testing. Next I'll show you how, by making a few minor changes, you can reuse several of your traditional testing techniques. These include end-to-end testing, database testing, dead-link testing, and load and performance (L&P) testing. 

Database testing

Database testing is almost a free pass. Since most testers and tests have already abstracted the database into a connection URL, almost everything looks the same. Specifically for Amazon's DynamoDB, you have three options. First, QA can stand up a testing instance in the cloud console for more in-depth testing. This looks and acts like a traditional RDBMS or NoSQL test instance, and there is no change at all to how you execute your tests and view results.

AWS also provides a Docker container for a local instance of DynamoDB. The Dockerized instance allows for risky or destructive testing locally, in a throwaway environment. Lastly, for people interested in continuous integration (CI), you can use a Maven plugin to test your DynamoDB changes in your CI pipeline, just as you would when testing with H2, the industry-standard in-memory database. Due to the nature of databases, there is almost no difference between testing a serverless instance of your persistence layer and testing traditional database back ends.
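Pointing a test at the Dockerized instance is mostly a matter of overriding the SDK endpoint, as in the sketch below; the table name is illustrative and is assumed to be created in your test setup.

// Hypothetical test setup against local DynamoDB.
// Start the container first:  docker run -p 8000:8000 amazon/dynamodb-local
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient, PutCommand, GetCommand } = require('@aws-sdk/lib-dynamodb');

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({
  region: 'us-east-1',                  // any region value satisfies the SDK locally
  endpoint: 'http://localhost:8000',    // the throwaway local instance, not the real service
  credentials: { accessKeyId: 'local', secretAccessKey: 'local' }, // dummy credentials
}));

// Destructive testing is safe here; nothing touches the real account.
async function roundTrip() {
  await ddb.send(new PutCommand({ TableName: 'TestItems', Item: { id: '1', name: 'widget' } }));
  const { Item } = await ddb.send(new GetCommand({ TableName: 'TestItems', Key: { id: '1' } }));
  return Item;
}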

End-to-end and dead-link testing

End-to-end testing and dead-link testing are almost exactly the same in terms of test development and execution when compared with traditionally developed web-based applications. In this case you point your testing scripts, people, or tools toward the application's front-end URL and collect the results.

But there is one minor difference: In AWS, at least, API Gateway endpoints are open to the Internet by default, so your site must have authentication, and you must manage test users and treat them as real users. You must change passwords regularly, someone should monitor where people are hitting the test site from, and so on.

If you ignore gating the test environment, then real users—or your competitors—may accidentally gain access to material that’s not meant to be public. Other than needing to have a strategy for how to gate access to test instances of your application, there’s virtually no difference in how you structure, execute, or think about your end-to-end or broken link tests. 
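A simple smoke test can confirm the gate is actually in place before deeper testing starts. The URL and the accepted status codes below are illustrative, and Node 18+ is assumed for the built-in fetch.

// Hypothetical Mocha smoke test: the test environment must reject anonymous traffic.
const assert = require('node:assert');

describe('test-environment gating', function () {
  it('rejects unauthenticated requests', async function () {
    const res = await fetch('https://test.example.com/items/1'); // no credentials supplied
    assert.ok([401, 403].includes(res.status), `expected 401 or 403, got ${res.status}`);
  });
});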

Load and performance testing

This type of testing falls under the same broad strokes as end-to-end testing: There is virtually no difference in how you develop and execute tests. But there is one minor gotcha: how to troubleshoot the bottlenecks or errors that your performance test suite may uncover.

On the error side, be wary of the execution time cap for Lambda functions. Your code may be executing as expected, but if it takes just a smidge too long to finish, then depending on your Lambda timeout setting, the AWS execution engine may kill the process with very little communication.
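One cheap mitigation is to have the function report how close it came to the cap, so a performance run leaves a breadcrumb instead of a silent kill. A sketch follows; doWork() is a placeholder for your real server-side logic.

// Hypothetical timeout guard using the Lambda context object.
async function doWork(event) {
  return { statusCode: 200, body: 'ok' }; // stand-in for the real work
}

exports.handler = async (event, context) => {
  const result = await doWork(event);

  // context.getRemainingTimeInMillis() reports the time left before Lambda kills the process.
  const remainingMs = context.getRemainingTimeInMillis();
  if (remainingMs < 1000) {
    console.warn(JSON.stringify({
      level: 'warn',
      message: 'finished with less than 1s to spare; raise the timeout or optimize',
      remainingMs,
    }));
  }
  return result;
};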

If you don't know that the timeout is what you're looking for, a silent kill like this can cost hours of hunting for a nonexistent programming error. Also, you can no longer instrument the underlying system to find bottlenecks. There will be no system monitor on the database to tell you there is disk contention or memory exhaustion.

When developing the application, you need something that lets you observe the overall performance of each layer, whether it's instrumentation or a crafty test. Creating performance tests will be almost the same as it is now, but gathering results and troubleshooting issues will differ significantly in some circumstances. Specifically, gathering metrics and finding bottlenecks can be more troublesome.
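One option, since there is no host to instrument, is to publish your own timing data as custom CloudWatch metrics from inside the Lambdas. The sketch below shows the idea; the namespace, metric name, and dimension are illustrative choices, not fixed conventions.

// Hypothetical helper: report per-layer latency as a custom CloudWatch metric.
const { CloudWatchClient, PutMetricDataCommand } = require('@aws-sdk/client-cloudwatch');
const cloudwatch = new CloudWatchClient({});

async function recordLatency(layer, millis) {
  await cloudwatch.send(new PutMetricDataCommand({
    Namespace: 'ServerlessApp/Performance',
    MetricData: [{
      MetricName: 'LatencyMs',
      Dimensions: [{ Name: 'Layer', Value: layer }],
      Unit: 'Milliseconds',
      Value: millis,
    }],
  }));
}

// Usage: time the DynamoDB call inside a handler and report it.
// const start = Date.now();
// await ddb.send(new GetCommand(params));
// await recordLatency('dynamo', Date.now() - start);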

Serverless applications are not low-code, but they can be low-infrastructure. For certain organizations this can add up to significant savings in the IT budget. When shifting to this paradigm, know ahead of time how it differs from traditional software development; otherwise, some of those differences will leave your QA organization unable to give an informed assessment of application quality.

One step at a time

These are just some of the major gotchas you'll face when moving to a serverless paradigm in terms of CI, local testing, end-to-end testing, database testing, and performance testing.

With a little forethought, and by using some of the mitigations mentioned above, you can ensure that you have the proper tooling, in both skills and technology, to minimize the heartache of moving to a serverless platform.

Don't miss my talk, "Getting Started with Microservices and Serverless," at Agile + DevOps East in Orlando, Florida. The conference runs November 3-8, 2019.

Image: MaxPixel/CC0 1.0
