Deploying a Lambda Function with Terraform
Building on my previous post about deploying a Lambda function using AWS SAM, this guide demonstrates how to deploy a containerized Lambda function using Terraform. The example application queries an external API (https://api.github.com) and returns the response.
You can check out the source code from this GitHub repository.
# Prerequisites
- Terraform installed
- AWS CLI with credentials configured
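A quick way to verify both before proceeding:

# Confirm Terraform is installed
terraform version

# Confirm the AWS CLI can authenticate with your configured credentials
aws sts get-caller-identity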
# Application
The application logic is straightforward. Defined in lambda/app.py, it uses the requests library to query https://api.github.com and returns the API response.
import requests


def lambda_handler(event: dict, context: dict) -> dict:
    response = requests.get("https://api.github.com")
    return {
        "statusCode": 200,
        "body": response.json(),
    }
# Containerization and the uv Package Manager
Before examining the Dockerfile, it’s worth noting why I prefer containerization for Lambda functions. While zipping code (with dependencies) is the simplest deployment method, AWS Lambda imposes a strict 250MB limit on unzipped deployment packages. Applications with large dependencies can easily exceed this. Therefore, I recommend containerizing Lambda functions, particularly those relying on third-party libraries, to avoid these constraints.
For dependency management, this project uses uv, a modern package manager similar to poetry but significantly faster. It uses pyproject.toml for dependency definition and uv.lock for locking, both of which are included in the repository.
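For reference, reproducing the dependency setup locally with uv looks roughly like this (assuming uv is already installed on your machine):

# Add requests to pyproject.toml and update uv.lock
uv add requests

# Install the locked dependencies without dev extras, mirroring the Docker build step
uv sync --no-dev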
The Dockerfile employs a multi-stage build:
- Stage 1: Copies pyproject.toml and uv.lock, then installs the dependencies and source code.
- Stage 2: Copies the installed dependencies and code to the Lambda task root (/var/task) and defines the execution command.
FROM public.ecr.aws/lambda/python:3.12 AS builder
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
# Change the working directory
WORKDIR /app
# Copy the project files
COPY pyproject.toml uv.lock ./
# Install dependencies
RUN uv sync --no-dev
# Copy the source code
COPY lambda/ ./lambda/
FROM public.ecr.aws/lambda/python:3.12
# Copy the virtual environment contents to Lambda task root (/var/task)
COPY --from=builder /app/.venv/lib/python3.12/site-packages/ /var/task/
# Copy the source code
COPY --from=builder /app/lambda/ /var/task/
CMD ["app.lambda_handler"]
# Terraform Configuration
The Terraform configuration is divided into three key sections:
- Providers: Configures necessary providers, such as AWS.
- IAM Roles and Policies: Establishes the execution role and permissions for the Lambda function.
- Lambda Resources: Defines the function itself and associated resources like the ECR repository.
## Providers
Terraform providers serve as plugins that enable Terraform to interact with infrastructure platforms. For this project, the AWS provider is used to provision the Lambda function and its dependencies, with the configuration primarily defined in providers.tf.
# A
terraform {
  required_version = "~> 1.13"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 6.0"
    }
  }

  # B
  backend "s3" {
    bucket = "terraform-state-bucket-3517e72f-9d72-c38f-b9c6-6a025a99c78b"
    key    = "lambda/terraform.tfstate"
    region = "eu-central-1"
  }
}

# C
provider "aws" {
  region = "eu-central-1"

  default_tags {
    tags = {
      ManagedBy = "Terraform"
      Project   = "lambda-requests-example"
    }
  }
}
Here is a breakdown of the providers.tf configuration:
### A. Terraform Version and Providers
Specifies the required Terraform version and providers. We invoke the AWS provider to provision the Lambda function and its underlying infrastructure.
### B. Backend
Using a remote backend is best practice for team collaboration and state accessibility. We utilize an S3 bucket for state storage. You can set up this bucket using my bootstrap project (remember to update the bucket name).
### C. Provider Configuration
Configures the active provider and applies default tags. Every resource created by this project will automatically receive ManagedBy = "Terraform" and Project = "lambda-requests-example" tags.
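Once the stack is deployed, you can spot-check that these tags actually landed; the ARN below is a placeholder for your own function ARN:

# List the tags on the function (hypothetical ARN; substitute your own)
aws lambda list-tags --resource arn:aws:lambda:eu-central-1:123456789012:function:lambda-requests-example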
## IAM Roles and Policies
This section configures the Execution Role assumed by the Lambda function.[^1] You can think of it as a role-playing game:

- Trust Policy (aws_iam_policy_document): Authorizes the “actor”, in our case the Lambda service (lambda.amazonaws.com), to play (or “assume”) the role.
- IAM Role (aws_iam_role): Creates the IAM role itself, representing the “character” the actor will play.
- Policy Attachment (aws_iam_role_policy_attachment): Grants the character its abilities. Here, it attaches the permissions needed to write logs to CloudWatch.
data "aws_iam_policy_document" "assume_role" {
statement {
effect = "Allow"
principals {
type = "Service"
identifiers = ["lambda.amazonaws.com"]
}
actions = ["sts:AssumeRole"]
}
}
resource "aws_iam_role" "lambda_execution_role" {
name = "lambda_execution_role_lambda_requests_example"
assume_role_policy = data.aws_iam_policy_document.assume_role.json
}
resource "aws_iam_role_policy_attachment" "lambda_logs" {
role = aws_iam_role.lambda_execution_role.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
## Lambda Resources
This section establishes the Lambda function and its dependencies. We begin by defining the ECR repository, which will store the container image.
# A
data "aws_ecr_authorization_token" "token" {}

# B
locals {
  docker_image_tag = md5(join("", [
    filemd5("../Dockerfile"),
    filemd5("../pyproject.toml"),
    join("", [for f in fileset("../lambda", "**") : filemd5("../lambda/${f}")])
  ]))
}

# C
resource "aws_ecr_repository" "lambda_repository" {
  name         = "lambda-requests-example"
  force_delete = true
}

# D
resource "null_resource" "build_and_push_image" {
  triggers = {
    image_tag = local.docker_image_tag
  }

  provisioner "local-exec" {
    command = <<-EOT
      # Login to ECR
      echo ${data.aws_ecr_authorization_token.token.password} | docker login --username AWS --password-stdin ${data.aws_ecr_authorization_token.token.proxy_endpoint}

      # Build the image
      docker build --platform linux/arm64 -t ${aws_ecr_repository.lambda_repository.repository_url}:${local.docker_image_tag} ..

      # Push the image
      docker push ${aws_ecr_repository.lambda_repository.repository_url}:${local.docker_image_tag}
    EOT
  }

  depends_on = [aws_ecr_repository.lambda_repository]
}
### A. Getting the ECR Authorization Token
Retrieves the token required to authenticate with the ECR registry.
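This is the Terraform counterpart of the manual CLI login, roughly equivalent to the following (the account ID is a placeholder):

# Manual equivalent of the aws_ecr_authorization_token data source
aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-central-1.amazonaws.com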
### B. Calculating the Image Tag
Generates a unique image tag by hashing the source code, Dockerfile, and pyproject.toml. This ensures that any change to the application logic or dependencies triggers a rebuild.
### C. Creating the ECR Repository
Provisions the ECR repository to host the container images. The force_delete = true argument ensures the repository can be destroyed even if it contains images.
### D. Building and Pushing the Docker Image
Uses a null_resource with a local-exec provisioner to build and push the container image to ECR. This resource is triggered whenever the calculated image tag changes.
The lambda.tf file defines the actual Lambda function resource. It specifies package_type = "Image", points to the image URI in ECR, and attaches the execution role created earlier. Additional settings like timeout and memory allocation are also configured here.[^2][^3]
resource "aws_lambda_function" "lambda" {
function_name = "lambda-requests-example"
role = aws_iam_role.lambda_execution_role.arn
image_uri = "${aws_ecr_repository.lambda_repository.repository_url}:${local.docker_image_tag}"
package_type = "Image"
architectures = ["arm64"]
timeout = 10
memory_size = 128
depends_on = [null_resource.build_and_push_image]
}
Finally, the outputs.tf file exposes the Lambda function name as a Terraform output, making it easily retrievable after deployment.
output "lambda_function_name" {
description = "The name of the Lambda function"
value = aws_lambda_function.lambda.function_name
}
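After deployment, the value can be read back at any time:

terraform -chdir=terraform output lambda_function_name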
# Deploying the Lambda Function
To deploy the function, first initialize the Terraform working directory using terraform init. This sets up the backend and installs the necessary providers. The -chdir=terraform flag directs Terraform to execute commands within the terraform subdirectory.
terraform -chdir=terraform init
Once initialized, run terraform plan to preview the infrastructure changes, followed by terraform apply to execute the deployment.
terraform -chdir=terraform plan
terraform -chdir=terraform apply
Upon successful deployment, verify the ECR repository to confirm the image has been pushed.
Next, visit the AWS Lambda console to locate the newly created function. Note that the Image URI matches the one in your ECR repository.
Navigate to the Test tab to invoke the function and view the API response.
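If you prefer the terminal, the same invocation works via the AWS CLI:

# Invoke the function and write the response payload to a file
aws lambda invoke --function-name lambda-requests-example response.json
cat response.json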
To demonstrate how the image is rebuilt upon source code modification, update lambda/app.py with the following:
import datetime
import requests
def lambda_handler(event: dict, context: dict) -> dict:
response = requests.get("https://api.github.com")
print(f"Request timestamp: {datetime.datetime.now()}")
return {
"statusCode": 200,
"body": response.json(),
}
Rerun terraform -chdir=terraform apply to rebuild the image and update the Lambda function. Once deployment completes, you’ll observe a new image in the ECR repository and the updated function in the AWS console.
Navigate to the Test tab and invoke the function again. Checking the CloudWatch Logs will show the newly added request timestamp.
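The same logs can be tailed from the terminal with the AWS CLI (v2); the log group name follows the standard /aws/lambda/<function-name> convention:

# Tail the function's log group for the last 10 minutes
aws logs tail /aws/lambda/lambda-requests-example --since 10m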
# Cleaning Up
To remove all resources created by this deployment, execute terraform -chdir=terraform destroy.
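terraform -chdir=terraform destroy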
# Learn More

[^1]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_function#basic-function-with-nodejs
[^2]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_function#container-image-function
[^3]: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_function#argument-reference