
Top AWS Lambda metrics to monitor



Building scalable micro-service applications these days is no longer a tedious chore, thanks to the advent of serverless computing, at the heart of which lies the popular AWS Lambda service.


While Lambda is called serverless, it still relies on compute power provided by physical servers in the backend. Configuring and managing those servers, however, is handled automatically by AWS, hence the term serverless. You simply write your code, upload it and you're done. Easy peasy!


But not quite. Troubleshooting & maintaining these applications has become that much more difficult due to the highly distributed nature of micro-services built on Lambda.


There are tons of moving parts in such an application, and tracking each and every path individually can be time-consuming & often futile. Instead, focusing on a few key metrics and their potential impact on the application makes all the difference in keeping your applications healthy.


This article outlines the key AWS Lambda metrics that should be monitored and how they impact your serverless application.



1. Errors


Why: The first and most obvious metric. Without it, you'd have no clue whether your app is healthy or not. It is the first indication that something is wrong and needs to be debugged further in the logs.


Impact: The more errors there are, the more downtime your customers face. In essence, your app isn't working as you had intended it to. Quick alerting, troubleshooting and fixes will save the day for you and your customers.
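
If you want to wire up alerting for this yourself, here is a minimal sketch using boto3 to put a CloudWatch alarm on the built-in Errors metric. The function name and SNS topic ARN are placeholders for your own resources:

```python
import boto3

# A rough sketch: alarm if the function logs any error in a 5-minute window.
# "my-function" and the SNS topic ARN below are placeholders for your own resources.
cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="my-function-errors",
    Namespace="AWS/Lambda",
    MetricName="Errors",
    Dimensions=[{"Name": "FunctionName", "Value": "my-function"}],
    Statistic="Sum",
    Period=300,                       # 5-minute evaluation buckets
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",  # no invocations is not an error
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:lambda-alerts"],
)
```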




2. Max. Duration


Why: Every Lambda instance has a hard time limit: it cannot run longer than its configured timeout value. The default timeout is 3 seconds, and it can be extended to a maximum of 15 minutes.


Impact: AWS will forcibly shut your Lambda down if it exceeds this timeout value, resulting in errors in your application and for your customers. Close monitoring of this metric tells you whether you have the right value set, or need to increase it to avoid being cut off by AWS.
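
As a rough sketch of how you might check this, the snippet below (using boto3, with "my-function" as a placeholder) compares the worst-case Duration metric over the last day against the configured timeout:

```python
from datetime import datetime, timedelta, timezone

import boto3

# A rough sketch: compare the worst duration over the last 24 hours against the
# configured timeout. "my-function" is a placeholder.
FUNCTION = "my-function"

lambda_client = boto3.client("lambda")
cloudwatch = boto3.client("cloudwatch")

# The timeout is configured in seconds; the Duration metric is reported in milliseconds.
timeout_ms = lambda_client.get_function_configuration(FunctionName=FUNCTION)["Timeout"] * 1000

end = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Duration",
    Dimensions=[{"Name": "FunctionName", "Value": FUNCTION}],
    StartTime=end - timedelta(hours=24),
    EndTime=end,
    Period=3600,
    Statistics=["Maximum"],
)

worst = max((p["Maximum"] for p in stats["Datapoints"]), default=0)
print(f"Worst duration: {worst:.0f} ms of a {timeout_ms} ms timeout "
      f"({worst / timeout_ms:.0%} used)")
```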




3. Invocations


Why: This is the number of times your Lambda has been called upon to serve customer requests or process backend jobs. Most of the time, this metric reflects the scale and usage of your application. At times, though, a spike in this metric can also indicate retries triggered by errors in your application.


Impact: This does not directly hurt your application (other than a value of 0 meaning there is no activity on your app), but it signals the cost you could incur. AWS charges $0.20 for every million invocations of a Lambda. That may not seem like a lot, but if ignored, it can lead to unwanted cost spikes.
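
Here is a small sketch that totals the Invocations metric for the last 30 days and translates it into a request charge. The function name is a placeholder, and the per-million price should be verified against the AWS pricing page for your region:

```python
from datetime import datetime, timedelta, timezone

import boto3

# A rough sketch: total invocations over the last 30 days and the resulting
# request charge. "my-function" is a placeholder and $0.20 per million requests
# should be checked against the AWS pricing page for your region.
FUNCTION = "my-function"
PRICE_PER_MILLION_REQUESTS = 0.20

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Invocations",
    Dimensions=[{"Name": "FunctionName", "Value": FUNCTION}],
    StartTime=end - timedelta(days=30),
    EndTime=end,
    Period=86400,                   # one datapoint per day
    Statistics=["Sum"],
)

total = sum(p["Sum"] for p in stats["Datapoints"])
request_cost = total / 1_000_000 * PRICE_PER_MILLION_REQUESTS
print(f"{total:,.0f} invocations -> ${request_cost:.2f} in request charges")
```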




4. Max. Memory


Why: Even though Lambda is serverless, each function is assigned CPU and memory from a physical server in the backend so that it has enough computing power to complete its job. You only choose how much memory to assign to a Lambda; AWS then allocates CPU power in proportion to that memory. At roughly 1,769 MB, for example, a function gets the equivalent of one full vCPU.


Impact: Monitoring the maximum memory used lets you check whether your code's memory requirements are in line with the configured memory. If they aren't, the Lambda will run out of memory and your application will fail. A good rule of thumb is to provision at least 30% more than your normal memory requirements.
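
Lambda writes the memory it actually used into the REPORT line it logs at the end of every invocation. A rough sketch of pulling that out with boto3 (the function name is a placeholder):

```python
import re
from datetime import datetime, timedelta, timezone

import boto3

# A rough sketch: pull "Max Memory Used" from the last 24 hours of REPORT log lines
# and compare it to the configured memory. "my-function" is a placeholder; the log
# group follows Lambda's standard /aws/lambda/<function-name> naming. Only the first
# page of log events is read, for brevity.
FUNCTION = "my-function"

logs = boto3.client("logs")
lambda_client = boto3.client("lambda")

configured_mb = lambda_client.get_function_configuration(FunctionName=FUNCTION)["MemorySize"]

start = int((datetime.now(timezone.utc) - timedelta(hours=24)).timestamp() * 1000)
events = logs.filter_log_events(
    logGroupName=f"/aws/lambda/{FUNCTION}",
    filterPattern="REPORT",
    startTime=start,
)["events"]

used_mb = [int(m.group(1))
           for e in events
           if (m := re.search(r"Max Memory Used: (\d+) MB", e["message"]))]

if used_mb:
    peak = max(used_mb)
    headroom = (configured_mb - peak) / configured_mb
    print(f"Peak memory: {peak} MB of {configured_mb} MB configured "
          f"({headroom:.0%} headroom; rule of thumb is to keep ~30%)")
```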




5. Throttling


Why: Each Lambda instance can only serve a single request at a time, so AWS spawns and scales to multiple instances depending on the load it receives. Think of it as similar to auto-scaling groups in EC2. Every AWS account has a limit on the total number of concurrent Lambda instances that can run at a time. The default limit is 1,000 concurrent instances, but it can be raised on request through AWS support.


Impact: If you start to hit throttling limits for a Lambda, customer requests are being dropped or delayed until the total number of Lambda instances drops below the account threshold. This defeats the purpose of auto-scalability in serverless applications, but it's easy to remedy by requesting a higher limit based on your usage patterns.
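
A quick sketch of checking recent throttles and your account's concurrency limit with boto3 (the function name is a placeholder):

```python
from datetime import datetime, timedelta, timezone

import boto3

# A rough sketch: count recent throttles for one function and show the account's
# concurrency limit. "my-function" is a placeholder.
FUNCTION = "my-function"

cloudwatch = boto3.client("cloudwatch")
lambda_client = boto3.client("lambda")

end = datetime.now(timezone.utc)
throttles = cloudwatch.get_metric_statistics(
    Namespace="AWS/Lambda",
    MetricName="Throttles",
    Dimensions=[{"Name": "FunctionName", "Value": FUNCTION}],
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    Period=300,
    Statistics=["Sum"],
)
total_throttles = sum(p["Sum"] for p in throttles["Datapoints"])

limits = lambda_client.get_account_settings()["AccountLimit"]
print(f"Throttles in the last hour: {total_throttles:.0f}")
print(f"Account concurrency limit: {limits['ConcurrentExecutions']}")
```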




6. Cold Starts


Why: Cold start refers to the time AWS takes to load your code onto a server before it can start executing it. This is usually in the hundreds of milliseconds, but it can stretch into seconds depending on code size and runtime environment.


Impact: A longer cold start means higher latency before your application responds. The total response time is the cold start time plus the code execution time. This metric becomes even more crucial to monitor if your Lambda applications are customer-facing.
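
CloudWatch does not expose a dedicated cold-start metric, but the REPORT log line of any invocation that cold-started includes an Init Duration field. A rough sketch of counting those (the function name is a placeholder):

```python
import re
from datetime import datetime, timedelta, timezone

import boto3

# A rough sketch: count cold starts by scanning REPORT log lines for "Init Duration",
# which Lambda only writes when the invocation required a cold start.
# "my-function" is a placeholder; only the first page of log events is read.
FUNCTION = "my-function"

logs = boto3.client("logs")
start = int((datetime.now(timezone.utc) - timedelta(hours=24)).timestamp() * 1000)

events = logs.filter_log_events(
    logGroupName=f"/aws/lambda/{FUNCTION}",
    filterPattern="REPORT",
    startTime=start,
)["events"]

init_ms = [float(m.group(1))
           for e in events
           if (m := re.search(r"Init Duration: ([\d.]+) ms", e["message"]))]

print(f"Cold starts in the last 24 h: {len(init_ms)}")
if init_ms:
    print(f"Average init: {sum(init_ms) / len(init_ms):.0f} ms, worst: {max(init_ms):.0f} ms")
```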




7. Cost


Why: It's easy to pick Lambda for micro-service-based architectures because of its seemingly low costs. But this does not always result in a cost-effective architecture.


AWS charges for 2 things on a Lambda:

      A. Per invocation - $0.20 per million invocations

      B. Duration of the Lambda, based on the memory selected - billed per GB-second


Impact: The obvious impact of all this is on your budget and pockets. Lambdas are tricky services that appear cheap at the outset, but the costs can shoot up unnoticed once they scale.


Hence, it's important to keep an eye on the three metrics that directly drive cost - max memory, average duration & invocations. If your application only needs 128 MB of memory on average but has 5 GB configured, both money and compute power are wasted, as the sketch below shows.
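
To see how these three drivers combine, here is a back-of-the-envelope sketch. The prices are assumptions based on the common x86 list prices and should be checked against the AWS pricing page for your region:

```python
# A rough sketch of how the three cost drivers combine. Prices are the common
# x86 list prices at the time of writing ($0.20 per million requests and roughly
# $0.0000166667 per GB-second); check the AWS pricing page for your region, and
# note that the free tier is ignored here.
PRICE_PER_MILLION_REQUESTS = 0.20
PRICE_PER_GB_SECOND = 0.0000166667

def estimate_monthly_cost(invocations: int, avg_duration_ms: float, memory_mb: int) -> float:
    """Estimate one function's monthly bill from invocations, duration and memory."""
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return request_cost + gb_seconds * PRICE_PER_GB_SECOND

# The over-provisioned case from the text: 5 GB configured versus the ~128 MB
# actually needed, at 10 million invocations of 200 ms each per month.
print(f"5 GB configured:   ${estimate_monthly_cost(10_000_000, 200, 5120):.2f}")
print(f"128 MB configured: ${estimate_monthly_cost(10_000_000, 200, 128):.2f}")
```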




Summary


The next question you're probably asking is: "How & where do I find all this data?"

It is all in your AWS account, but you have to find and piece it together.


Monitoring and quick troubleshooting of serverless applications doesn't have to be difficult or costly if the right monitoring tools are used. It will not only be beneficial for your application, but also for your developers & customers.


With a single click of a button, Montrix puts all this information and more right at your fingertips.





