10 hints to make Java Lambda faster

Technical
20.05.2022

Broscorp has built quite a lot of Lambda functions for different clients. They all serve different purposes but share something in common: since Broscorp has vast experience with Java, we write and fine-tune our Lambda functions in Java. We usually take an iterative approach to building them. First, we solve the business problem according to the terms of reference. Once the business problem is solved correctly, we tune the function to achieve the best AWS Lambda performance.

Why do we do so? There are two reasons:

  1. Latency
  2. Cost

Let’s look deeper at both reasons. Serverless applications usually aren’t the best fit when you need the smallest possible latency. But even so, a function shouldn’t run for ages, especially when the feature is user-facing. Making your client wait one second is a far cry from making them wait ten.

AWS Lambda costs depend on your execution time. Literally every millisecond counts and can impact the invoice at the end of the month.

Keeping all of that in mind, I decided to create a cheat sheet collecting the tricks that improve AWS Lambda performance.

1. Use SDKv2

I’ve seen many cases where AWS SDK v1 and v2 are completely incompatible. Migrating from v1 to v2 can be easy, but sometimes the API changes are massive, and you simply can’t find the relevant method in v2. But if you can migrate, you’d better do so: v2 brings a lot of performance improvements, so the rule of thumb is simple — if it’s possible, use v2.

2. Use a specific credential provider

The AWS SDK resolves credentials in quite an interesting way. It walks through multiple sources in order, until it either finds valid credentials or fails:

  1. Java system properties
  2. Environment variables
  3. Web identity token from AWS STS
  4. Shared credentials and config files
  5. Amazon ECS container credentials
  6. Amazon EC2 instance profile credentials

All these steps take time, and you can save some milliseconds by specifying the credential provider explicitly, like this:

S3Client client = S3Client.builder()
    .credentialsProvider(EnvironmentVariableCredentialsProvider.create())
    .build();

This way, the SDK doesn’t traverse all possible credential sources and picks up the correct ones immediately.

3. Initialize everything prior to execution

Here’s a simple piece of advice to follow. It may be tempting to keep things simple and put all the initialization into the handler method. You’d better not do this — move as much initialization as possible into the constructor. The constructor runs once per execution environment, so this reduces latency for repeated Lambda invocations.
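A minimal sketch of the idea, with hypothetical names. In a real function this class would implement `com.amazonaws.services.lambda.runtime.RequestHandler`; the interface is omitted here to keep the example self-contained:

```java
import java.util.Map;

// Expensive setup lives in the constructor, which Lambda runs once per
// execution environment (during the init phase), not on every invocation.
class OrderHandler {

    // Built once during cold start; reused across warm invocations.
    private final Map<String, String> config;

    OrderHandler() {
        // Anything that does not depend on the incoming event belongs here:
        // loading configuration, building SDK clients, parsing templates, etc.
        this.config = Map.of("table", "orders");
    }

    String handleRequest(String orderId) {
        // Only per-request work stays in the handler method.
        return "Stored " + orderId + " in " + config.get("table");
    }
}
```

Warm invocations then reuse the already-initialized state and skip the setup cost entirely.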

4. Reduce your jar size

One of the less obvious ways to reduce the Lambda cold start is to reduce the jar size. Java developers usually don’t mind pulling in a few more libraries to avoid reinventing the wheel, but with Lambda you’d better take a closer look at your pom.xml and remove everything unnecessary, because a bigger jar means a longer cold start.
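One option worth trying is the maven-shade-plugin’s minimizeJar flag, which strips classes your code never references. This is a sketch, not a drop-in config — minimizeJar can remove classes that are only loaded reflectively, so test the resulting jar carefully:

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <configuration>
        <minimizeJar>true</minimizeJar>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
        </execution>
    </executions>
</plugin>
```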

5. Avoid using any DI

It’s quite hard to imagine a modern Java application without dependency injection. It’s super easy, simplifies coding and testing, and brings a lot of other benefits. Nonetheless, avoid using it with Lambda. To reduce the execution time, you want to keep your Lambda as simple and as small as possible. Imagine you have a handler class and a few other classes performing the business logic. Now tell me honestly: do you really need a DI container to wire four or five classes together? If, on the other hand, your Lambda function consists of 20+ classes, I have bad news for you. If for any reason you still need DI, I recommend taking a look at lightweight containers such as Guice.
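To illustrate, here is a hand-wired object graph with purely illustrative class names. For a handful of classes, plain constructor calls replace a DI container at zero startup cost:

```java
// A tiny business-logic graph, wired by hand instead of by a container.
class Repository {
    String find(String id) { return "record-" + id; }
}

class Service {
    private final Repository repo;
    Service(Repository repo) { this.repo = repo; }
    String process(String id) { return repo.find(id).toUpperCase(); }
}

class Handler {
    // The whole "injection" is one line: no container, no classpath
    // scanning, no reflection at startup.
    private final Service service = new Service(new Repository());

    String handleRequest(String id) {
        return service.process(id);
    }
}
```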

6. Use tiered compilation

Just-in-time compilation offers a useful feature: tiered compilation, enabled by default since Java 8. The JIT is designed to run your code and eventually approach native performance. It can’t do that immediately: by executing the code and collecting profiling information in the background, it gradually compiles the hot spots until they run almost as fast as native code. That makes sense for a monolith running in a servlet container for ages, but a short-lived Lambda can’t benefit from the higher tiers, so it’s better to stop compilation at the first tier.
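On Lambda, the usual way to do this is the JAVA_TOOL_OPTIONS environment variable, which the JVM picks up at startup. Setting it to stop at the first tier keeps the fast C1 compiler and skips the profile-guided C2 tiers:

```
JAVA_TOOL_OPTIONS: -XX:+TieredCompilation -XX:TieredStopAtLevel=1
```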

For a better understanding, I would refer you to the Oracle docs.

7. Specify a region and HttpClient explicitly

By default, the AWS SDK can use three different HTTP clients: Apache, Netty, and the JDK’s built-in URL-connection client. Apache and Netty offer a lot of features the built-in one doesn’t, but to reduce the cold start we stick to the built-in client and exclude the other two, keeping fewer dependencies in the resulting jar.

<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>s3</artifactId>
    <exclusions>
        <exclusion>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>netty-nio-client</artifactId>
        </exclusion>
        <exclusion>
            <groupId>software.amazon.awssdk</groupId>
            <artifactId>apache-client</artifactId>
        </exclusion>
    </exclusions>
</dependency>
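Note that the JDK-based client ships in its own artifact, so after excluding the other two it has to be added explicitly (the version is typically managed by the SDK BOM):

```xml
<dependency>
    <groupId>software.amazon.awssdk</groupId>
    <artifactId>url-connection-client</artifactId>
</dependency>
```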

You want to do almost the same with the region. The SDK spends some time resolving the region at startup, and you can save it by specifying the region explicitly. Overall, the resulting configuration should look like this:

S3Client client = S3Client.builder()
    .region(Region.US_WEST_2)
    .httpClient(UrlConnectionHttpClient.builder().build())
    .build();

8. Use RDS Proxy to get connection pooling

When it comes to Lambda optimisation, it’s also worth mentioning the case when Lambda connects to RDS. In “normal” Java applications, it’s common to use a connection pool to reuse existing connections and save the time of establishing new ones. With Lambda, the RDS Proxy service comes to the rescue.

9. Increase the memory allocated

Here’s a bit of simple yet powerful advice. Your Lambda may run out of memory with the standard 128 MB allocated, and increasing the memory seems the obvious fix. What’s hidden and not obvious is that allocating more memory also gives your Lambda more CPU. The combination of more memory and more virtual CPU power, of course, decreases the execution time.

Giving your Lambda more memory and CPU means a higher per-millisecond price, but less execution time. So what’s the optimal choice? You can find it manually by writing your own set of tests, or take a look at the beautiful tuning tool made exactly for this.
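To see why more memory can even come out cheaper, here is a back-of-the-envelope sketch. The rate below assumes the published x86 price of $0.0000166667 per GB-second (us-east-1 at the time of writing — check current pricing), and the durations are made up for illustration:

```java
// Rough Lambda cost model: GB-seconds multiplied by the per-GB-second rate.
class CostSketch {
    // Assumed us-east-1 x86 rate; verify against current AWS pricing.
    static final double RATE_PER_GB_SECOND = 0.0000166667;

    static double cost(int memoryMb, double durationMs, long invocations) {
        double gbSeconds = (memoryMb / 1024.0) * (durationMs / 1000.0) * invocations;
        return gbSeconds * RATE_PER_GB_SECOND;
    }

    public static void main(String[] args) {
        // Doubling memory pays off here because duration more than halves.
        System.out.printf("512 MB x 800 ms:  $%.2f%n", cost(512, 800, 1_000_000));
        System.out.printf("1024 MB x 350 ms: $%.2f%n", cost(1024, 350, 1_000_000));
    }
}
```

In this made-up scenario, the 1024 MB configuration is both faster and cheaper per million invocations — which is exactly the kind of sweet spot a tuning run can find for you.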

10. Use provisioned concurrency

The last way to improve your Lambda processing time is to specify provisioned concurrency. With it, AWS keeps execution contexts initialized and ready to be used, thereby reducing your Lambda cold start. Of course, it comes at an additional cost, but it can be crucial for web applications, where your clients may get bored waiting for a Lambda to spin up. You specify the number of provisioned instances and enjoy your Lambda being warmed up for you.
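With the AWS CLI, provisioned concurrency is configured per function version or alias. The function name and alias below are placeholders:

```
aws lambda put-provisioned-concurrency-config \
    --function-name my-function \
    --qualifier prod \
    --provisioned-concurrent-executions 5
```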

Applying all of these recommendations will improve your AWS Lambda invocation time and cut costs. That said, serverless architecture isn’t applicable to every single application. Broscorp has gained enough experience to decide correctly and find the best approach to solving your business problems. No matter which approach we choose, your application will be fast, reliable, and cost-effective.
