Serverless computing with AWS Lambda

As cloud technology has grown, one of the services that providers can offer is serverless computing. And one of the most popular, and fastest growing, serverless products out there is AWS Lambda (link to AWS).

There are many benefits to serverless computing, the biggest of which is not having to provision or manage hardware. In this post (link to serverless computing), we talked in greater depth about the advantages of a serverless service such as Lambda, and about when you might want to use one versus running something on a managed server.

What is AWS Lambda?

First released in 2014, Lambda is a serverless computing platform from AWS.  It’s event-driven, and can run code in response to a wide variety of triggers. All of the computing resources needed to run this code, and the automated scaling of it, are managed by AWS.

The biggest advantage to serverless computing in general, and Lambda specifically, is that there are no servers to manage. The code runs automatically without having to provision instances. Once your code is ready, you simply deploy it as a Lambda function and go.

One of the immediate benefits of using Lambda is that it's part of the AWS free tier. Lambda has no cost for the first 1,000,000 requests and up to 400,000 GB-seconds of compute time each month. After the first one million requests, it's just $0.20 per million requests.

Coding Flexibility

With Lambda, you aren’t limited to coding in one particular language. Right out of the gate, you can program your Lambda function in Java, Python, Node.js, Go, C# (.NET Core), PowerShell, or Ruby. Depending on what your function will do, you can code it in the language that is best suited for it, or the one you are most comfortable with.

You can also bring other languages to Lambda through its Runtime API by using a custom runtime, though these are community supported. An example of this is Rust.
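
For example, a minimal Python handler, the function signature Lambda calls for every invocation, looks roughly like this (the return value shown is just an illustration of an API Gateway-style response):

    # Minimal Python Lambda handler. Lambda calls this function for every invocation,
    # passing the triggering event and a context object with runtime metadata.
    import json

    def lambda_handler(event, context):
        # 'event' is the JSON payload from whatever triggered the function.
        print("Received event:", json.dumps(event))

        # The return value becomes the invocation result for synchronous callers.
        return {
            "statusCode": 200,
            "body": json.dumps({"message": "Hello from Lambda"})
        }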

Creating a Lambda function

Creating a Lambda function is easy, and there are three ways to do it.  The first is through the AWS Lambda Console.

From the AWS main console, select Lambda.  From the landing page, select Create Function.

The next screen allows you to author from scratch, or choose from an existing Lambda blueprint.

For more experienced AWS users, you can also use the AWS command line interface (CLI) to create Lambda functions.
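
As a rough sketch, assuming the code is already zipped into function.zip and an execution role already exists (the function name and role ARN below are placeholders), the CLI call looks something like this:

    aws lambda create-function \
      --function-name my-function \
      --runtime python3.9 \
      --handler lambda_function.lambda_handler \
      --role arn:aws:iam::123456789012:role/my-lambda-execution-role \
      --zip-file fileb://function.zip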

CloudFormation

The third way of creating Lambda functions is through CloudFormation, and this can be done with two approaches. The first is inline, where you define the Lambda function code directly in the CloudFormation template. This approach is geared toward very simple Lambda functions.
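
A minimal sketch of the inline approach in a CloudFormation template might look like this (the resource name and role ARN are placeholders; with inline code, the handler file is named index.py):

    MyInlineFunction:
      Type: AWS::Lambda::Function
      Properties:
        Runtime: python3.9
        Handler: index.lambda_handler
        Role: arn:aws:iam::123456789012:role/my-lambda-execution-role
        Code:
          ZipFile: |
            def lambda_handler(event, context):
                return {"statusCode": 200, "body": "Hello from an inline Lambda"}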

The other approach is through S3. You store the Lambda deployment package (zip) in S3, and then refer to its S3 location in the CloudFormation template.
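
The same resource pointing at a zip stored in S3 could look roughly like this (the bucket and key are placeholders):

    MyPackagedFunction:
      Type: AWS::Lambda::Function
      Properties:
        Runtime: python3.9
        Handler: lambda_function.lambda_handler
        Role: arn:aws:iam::123456789012:role/my-lambda-execution-role
        Code:
          S3Bucket: my-deployment-bucket
          S3Key: builds/my-function.zip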

Lambda Triggers

Lambda integrates easily with many other AWS services. Some of the most common services that trigger Lambda functions are API Gateway, Cognito, DynamoDB, CloudFront, CloudWatch Events, CloudWatch Logs, Kinesis, S3, SNS, and SQS.

As an example of how easy it is to set up a service to trigger Lambda, let’s look at an S3 use case. When a new object, such as an image, is put into S3, we want Lambda to be triggered to take that image, create a thumbnail, and store it back in S3. Once the Lambda function has been created, all you have to do is go into the S3 bucket’s properties and, in the Events section, add a notification that targets the Lambda function for new object events. You select the Lambda function to trigger, and that’s pretty much it. You’ve integrated S3 as an event trigger for Lambda.

Once this setup is complete, it will also be reflected in the Lambda dashboard as an event trigger. This process also adds the necessary resource-based policy to the Lambda function to allow the notifications from S3. But it’s that simple.
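
To make the example concrete, here is a sketch of what that thumbnail function might look like in Python. It assumes the Pillow imaging library is packaged with the function (for example via a Lambda layer), that the function’s role can read the source bucket and write the destination bucket, and that all names are illustrative:

    # Sketch of a thumbnail-generating handler triggered by S3 "object created" events.
    # Assumes Pillow (PIL) is available in the deployment package or a Lambda layer.
    import io
    import boto3
    from PIL import Image

    s3 = boto3.client("s3")
    THUMBNAIL_BUCKET = "my-thumbnail-bucket"  # illustrative destination bucket

    def lambda_handler(event, context):
        for record in event["Records"]:
            # The S3 event says which bucket and object key triggered the invocation.
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]

            # Download the original image into memory.
            original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

            # Resize it to a small thumbnail.
            image = Image.open(io.BytesIO(original))
            image.thumbnail((128, 128))
            buffer = io.BytesIO()
            image.save(buffer, format="PNG")
            buffer.seek(0)

            # Store the thumbnail back in S3 under a new key.
            s3.put_object(
                Bucket=THUMBNAIL_BUCKET,
                Key=f"thumbnails/{key}.png",
                Body=buffer,
                ContentType="image/png",
            )

Writing the thumbnails to a separate bucket (or to a prefix excluded from the notification) keeps the function from re-triggering itself on its own output.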

Lambda functions can be invoked either synchronously or asynchronously. If a function is invoked synchronously, error handling, such as retries and exponential backoff, must be designed on the client side.
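
For instance, a client invoking a function synchronously with boto3 might wrap the call in its own retry loop with exponential backoff, along these lines (the function name and payload are illustrative):

    # Synchronous invocation with simple client-side retries and exponential backoff.
    import json
    import time
    import boto3

    lambda_client = boto3.client("lambda")

    def invoke_with_retries(payload, max_attempts=4):
        for attempt in range(max_attempts):
            try:
                response = lambda_client.invoke(
                    FunctionName="my-function",        # illustrative name
                    InvocationType="RequestResponse",  # synchronous call
                    Payload=json.dumps(payload),
                )
                return json.loads(response["Payload"].read())
            except Exception:
                if attempt == max_attempts - 1:
                    raise
                # Back off exponentially before retrying: 1s, 2s, 4s, ...
                time.sleep(2 ** attempt)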

Lambda Event Source Mapping

Event source mapping is what allows Lambda to be triggered from Kinesis Data Streams, SQS standard and FIFO queues, and DynamoDB Streams. In each of these scenarios, Lambda polls the service (a pull model) and invokes the function synchronously.

For Kinesis and DynamoDB streams, Lambda processes items in order, starting with new items, from the beginning of the stream, or from a timestamp. You can also use batch processing to accumulate records before processing, and you can process multiple batches in parallel. If Lambda encounters an error, the entire batch is reprocessed until the function succeeds or the items in the batch expire.
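
As an illustration, wiring a Kinesis stream to a function with boto3 could look like this (the stream ARN, function name, batch size, and starting position are placeholders):

    # Sketch: creating an event source mapping so Lambda polls a Kinesis stream.
    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:kinesis:us-east-1:123456789012:stream/my-stream",
        FunctionName="my-stream-processor",
        StartingPosition="LATEST",     # or TRIM_HORIZON / AT_TIMESTAMP
        BatchSize=100,                 # accumulate up to 100 records per invocation
        MaximumBatchingWindowInSeconds=5,
    )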

For SQS standard and FIFO queues, Lambda polls the queue and processes messages in batches of 1 to 10. You can also set up a dead-letter queue (DLQ) in SQS to handle messages that repeatedly fail.
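
A handler receiving an SQS batch simply iterates over the records in the event; a sketch, assuming the message bodies are JSON, looks like this:

    # Sketch of a handler processing a batch of SQS messages (1-10 records per event).
    import json

    def lambda_handler(event, context):
        for record in event["Records"]:
            message = json.loads(record["body"])   # the SQS message body
            print("Processing message:", message)
        # If the function raises an error, the whole batch becomes visible again and is
        # retried; repeatedly failing messages can be routed to the SQS dead-letter queue.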

Handling failed transactions with Lambda Destinations

For asynchronous invocations, you can use Lambda Destinations to define targets for both successful and failed events, such as an SQS queue, an SNS topic, another Lambda function, or an EventBridge event bus. AWS recommends using destinations instead of a DLQ for handling failures, but either can be used.
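
For example, destinations for a function’s asynchronous invocations can be configured with boto3 roughly like this (the function name and ARNs are placeholders):

    # Sketch: routing successful async invocations to SQS and failures to SNS.
    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.put_function_event_invoke_config(
        FunctionName="my-function",
        DestinationConfig={
            "OnSuccess": {"Destination": "arn:aws:sqs:us-east-1:123456789012:success-queue"},
            "OnFailure": {"Destination": "arn:aws:sns:us-east-1:123456789012:failure-topic"},
        },
    )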

Lambda@Edge

Another cool application of Lambda is Lambda@Edge. With it, you can run Lambda functions in CloudFront, at the edge. Some examples of what you might do with this are dynamic web content, search engine optimization, bot mitigation, real-time image transformation, user tracking, and more.
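
As an illustration, a viewer-request function at the edge receives the CloudFront request object and can modify it before CloudFront continues processing; a minimal sketch (the header name and value are made up) might look like this:

    # Sketch of a Lambda@Edge viewer-request handler that adds a header to the request.
    def lambda_handler(event, context):
        request = event["Records"][0]["cf"]["request"]

        # Example: tag the request so the origin can vary its response (e.g. A/B tests).
        request["headers"]["x-experiment-group"] = [
            {"key": "X-Experiment-Group", "value": "B"}
        ]

        # Returning the request lets CloudFront continue processing it.
        return request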

Lambda Performance

With AWS Lambda, you control the resource configuration through the amount of RAM allocated. Lambda allows you to assign from 128 MB to 3,008 MB of RAM, in 64 MB increments. As you add RAM, you also get proportionally more vCPU capacity.

At 1,792MB, the function will have the equivalent of one full vCPU.  Above this amount, you should use multi-threading in your code to benefit from having more than one CPU.

Most importantly, if your Lambda function is computation heavy, you can increase its processing capability simply by increasing the amount of RAM allocated to it.
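
For example, memory, and with it CPU, can be adjusted on an existing function with a single configuration call; the values here are illustrative:

    # Sketch: raising a function's memory allocation, which also raises its CPU share.
    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.update_function_configuration(
        FunctionName="my-function",
        MemorySize=1792,   # roughly one full vCPU at this setting
    )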

Lambda Limitations

Lambda is a great product to use for microservices or other short-running applications. It is not designed for long-running processes. It comes with a default timeout of 3 seconds, which can be increased up to a maximum of 15 minutes. Anything that may run longer than the timeout limit should be run somewhere else, such as on an EC2 instance.

As a serverless service, Lambda can rapidly scale up to handle demand. By default, up to 1,000 concurrent executions of Lambda can run at once.

The 1,000-execution limit is at the account level. To keep one function with large peaks in demand from throttling your other Lambda functions, you should set up reserved concurrency. This set-aside ensures that during one of those peaks you still have Lambda capacity to run your other functions when they are called.
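
Reserved concurrency is set per function; a sketch with boto3 (the function name and value are illustrative):

    # Sketch: reserving 100 concurrent executions for a spiky function so it cannot
    # consume the whole account-level concurrency pool (and cannot exceed 100 itself).
    import boto3

    lambda_client = boto3.client("lambda")

    lambda_client.put_function_concurrency(
        FunctionName="my-spiky-function",
        ReservedConcurrentExecutions=100,
    )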

If the throttling limit is hit, asynchronous invocations are retried automatically until capacity is available, while synchronous callers receive a throttling error and must retry themselves.

Lambda Access Control

As with other AWS services, access for Lambda is managed through IAM. Each Lambda function is assigned an IAM execution role, and this role must be granted permissions to access other services, such as S3, CloudWatch, Kinesis, and more. A best practice is to create a separate role for each Lambda function.
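
A sketch of creating a per-function execution role with boto3, assuming you then attach whatever service permissions that particular function needs (the role name is illustrative):

    # Sketch: creating an execution role that Lambda can assume, then attaching the
    # AWS-managed policy that allows the function to write logs to CloudWatch.
    import json
    import boto3

    iam = boto3.client("iam")

    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    iam.create_role(
        RoleName="my-function-role",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

    iam.attach_role_policy(
        RoleName="my-function-role",
        PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
    )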

Logging

Lambda logs are automatically sent to and stored in CloudWatch Logs, as long as the function has an execution role with a policy granting access to it. From there you can access your logs as you would for any other AWS service.
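
Anything the function writes to standard output or through the logging module ends up in the function’s CloudWatch log group, for example:

    # Sketch: log lines from a handler are captured automatically in CloudWatch Logs.
    import logging

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    def lambda_handler(event, context):
        logger.info("Processing event with request id %s", context.aws_request_id)
        return {"status": "ok"}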