Introduction

Here is the scenario: I want to deploy an AWS Lambda function that depends on one or more Lambda layers. For example, the python/ folder contains the code for the lambda layer:

root/
|_ python/
|  |_ example.py
|_ main.tf

with the content of example.py being

def main():
    pass

To create a lambda layer, you upload a zip file containing the source to AWS. For example, following link and link, you zip the python/ folder and upload it to AWS to create a new layer. With Terraform, you can do this by running terraform apply with

# ... provider config, credentials ...
# package tf
resource "null_resource" "build_layer" {
  provisioner "local-exec" {
    working_dir = "${path.module}"
    command     = "zip -r layer.zip python/"
  }
}

# deploy layer
resource "aws_lambda_layer_version" "lambda_layer" {
  depends_on = [ null_resource.build_layer ]
  layer_name = "lambda_layer_name"
  filename   = "${path.module}/layer.zip"
}
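For context, a function would consume this layer through the layers argument of aws_lambda_function. A minimal sketch, in which the function package, handler, runtime, and IAM role are all hypothetical placeholders not part of the setup above:

# Sketch only: attach the published layer to a Lambda function.
resource "aws_lambda_function" "example" {
  function_name = "example_function"                            # hypothetical
  role          = "arn:aws:iam::111111111111:role/lambda-exec"  # hypothetical IAM role
  runtime       = "python3.11"                                  # hypothetical runtime
  handler       = "app.handler"                                 # hypothetical handler inside function.zip
  filename      = "${path.module}/function.zip"                 # hypothetical function package

  # layers takes layer *version* ARNs, so the function is updated
  # whenever a new version of the layer is published and referenced here.
  layers = [aws_lambda_layer_version.lambda_layer.arn]
}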

However, after making changes to the .py files and zipping them again, re-running terraform apply will not create a new version of the lambda layer, because the "aws_lambda_layer_version" definition does not depend on any variables and therefore never registers an update.

One way to force re-creation of the layer is to run terraform destroy followed by terraform apply. But this creates unnecessary versions even when the .py files are unchanged.
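Concretely, that brute-force approach looks like the commands below; terraform apply -replace (available in recent Terraform versions) is a more targeted variant that replaces only the layer resource, but it too publishes a new version even when python/ has not changed:

# brute force: rebuild everything, always publishing a new layer version
terraform destroy
terraform apply

# more targeted: force replacement of just the layer resource
terraform apply -replace="aws_lambda_layer_version.lambda_layer"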

Conditional update

To re-create the layer only when example.py changes, we need a trigger for the resource. For this scenario, I think the best approach is to compute the hash of the generated zip file and use it as the trigger. For instance,

# ... provider config, credentials ...
# package files
resource "null_resource" "prebuild" {
  provisioner "local-exec" {
    working_dir = "${path.module}"
    command     = "zip -r layer.zip python/"
  }
}

# obtain hash
resource "terraform_data" "replacement_trigger" {
  depends_on = [ null_resource.prebuild ]
  input      = filesha256("${path.module}/layer.zip")
}

# The target resource will be destroyed and re-created
# whenever the value of terraform_data.replacement_trigger changes.
resource "aws_lambda_layer_version" "lambda_layer" {
  lifecycle {
    replace_triggered_by = [ terraform_data.replacement_trigger ]
  }

  layer_name = "lambda_layer_name"
  filename   = "${path.module}/layer.zip"
}

Here, I introduce "terraform_data" as a placeholder for the hash value because, based on link, replace_triggered_by expressions can only reference managed resources.
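As a side note, terraform_data is the built-in replacement for null_resource introduced in Terraform 1.4. On older versions, a rough equivalent (not part of the configuration above, just a sketch) uses a null_resource and its triggers map:

# Rough pre-1.4 equivalent: any change to the triggers map forces this
# resource to be replaced, which in turn fires replace_triggered_by
# on the layer resource.
resource "null_resource" "replacement_trigger" {
  depends_on = [ null_resource.prebuild ]
  triggers = {
    zip_hash = filesha256("${path.module}/layer.zip")
  }
}

The terraform_data version does the same job without requiring the null provider.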

To simulate a run, this is the output of terraform plan the first time:

❯ terraform plan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_lambda_layer_version.lambda_layer will be created
  + resource "aws_lambda_layer_version" "lambda_layer" {
      + arn                         = (known after apply)
      + created_date                = (known after apply)
      + filename                    = "./layer.zip"
      + id                          = (known after apply)
      + layer_arn                   = (known after apply)
      + layer_name                  = "lambda_layer_name"
      + signing_job_arn             = (known after apply)
      + signing_profile_version_arn = (known after apply)
      + skip_destroy                = false
      + source_code_hash            = (known after apply)
      + source_code_size            = (known after apply)
      + version                     = (known after apply)
    }

  # null_resource.prebuild will be created
  + resource "null_resource" "prebuild" {
      + id = (known after apply)
    }

  # terraform_data.replacement_trigger will be created
  + resource "terraform_data" "replacement_trigger" {
      + id     = (known after apply)
      + input  = "139821af806d684acb1bdb7d7f8b74d03aac992eb328ff6b19416f12a1834be7"
      + output = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Change example.py to

def main():
    print("Hello World")
    pass

After re-zipping and re-running terraform plan (followed by terraform apply), the output is

❯ terraform plan
terraform_data.replacement_trigger: Refreshing state... [id=xxxxxx-xxxx-xxxx-xxxx-xxxxxx]
null_resource.prebuild: Refreshing state... [id=xxxxxxxxxxxx]
aws_lambda_layer_version.lambda_layer: Refreshing state... [id=arn:aws:lambda:us-west-1:xxxxxxxxx:layer:lambda_layer_name:1]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # aws_lambda_layer_version.lambda_layer will be replaced due to changes in replace_triggered_by
-/+ resource "aws_lambda_layer_version" "lambda_layer" {
      ~ arn                         = "arn:aws:lambda:us-west-1:xxxxxxxxxxx:layer:lambda_layer_name:1" -> (known after apply)
      - compatible_architectures    = [] -> null
      - compatible_runtimes         = [] -> null
      ~ created_date                = "2023-11-01T19:50:16.596+0000" -> (known after apply)
      ~ id                          = "arn:aws:lambda:us-west-1:xxxxxxxxxxx:layer:lambda_layer_name:1" -> (known after apply)
      ~ layer_arn                   = "arn:aws:lambda:us-west-1:xxxxxxxxxxx:layer:lambda_layer_name" -> (known after apply)
      + signing_job_arn             = (known after apply)
      + signing_profile_version_arn = (known after apply)
      ~ source_code_hash            = "E5ghr4BtaErLG9t9f4t00DqsmS6zKP9rGUFvEqGDS+c=" -> (known after apply)
      ~ source_code_size            = 346 -> (known after apply)
      ~ version                     = "1" -> (known after apply)
        # (3 unchanged attributes hidden)
    }

  # terraform_data.replacement_trigger will be updated in-place
  ~ resource "terraform_data" "replacement_trigger" {
        id     = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
      ~ input  = "139821af806d684acb1bdb7d7f8b74d03aac992eb328ff6b19416f12a1834be7" -> "df63d70f577e1cdb868998d77e79869a16d5c064783a213b53b3b0818459f888"
      ~ output = "139821af806d684acb1bdb7d7f8b74d03aac992eb328ff6b19416f12a1834be7" -> (known after apply)
    }

Plan: 1 to add, 1 to change, 1 to destroy.

aws_lambda_layer_version.lambda_layer: Destroying... [id=arn:aws:lambda:us-west-1:xxxxxxxxx:layer:lambda_layer_name:1]
aws_lambda_layer_version.lambda_layer: Destruction complete after 0s
terraform_data.replacement_trigger: Modifying... [id=xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx]
terraform_data.replacement_trigger: Modifications complete after 0s [id=xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx]
aws_lambda_layer_version.lambda_layer: Creating...
aws_lambda_layer_version.lambda_layer: Creation complete after 5s [id=arn:aws:lambda:us-west-1:xxxxxxxxx:layer:lambda_layer_name:2]

As you can see, the layer's version number changes from 1 to 2, and a further terraform apply reports

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

A problem

Apparently, filesha256() cannot take the path of a dynamically generated file: Terraform evaluates the function at plan time, before the local-exec provisioner has produced layer.zip. So the workaround for now is to run

zip -r layer.zip python/
terraform apply
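A cleaner option, sketched below but not used in this post, is the archive_file data source from the hashicorp/archive provider: it builds the zip during plan, so its hash is available to Terraform directly, and the source_code_hash argument of aws_lambda_layer_version then forces a new layer version whenever the contents change. The layer_src staging folder is a hypothetical directory that contains python/ (a Python layer zip needs python/ at its top level).

# Sketch only: let Terraform build the zip at plan time
# instead of zipping it with a local-exec provisioner.
data "archive_file" "layer_zip" {
  type        = "zip"
  source_dir  = "${path.module}/layer_src"   # hypothetical staging folder holding python/
  output_path = "${path.module}/layer.zip"
}

resource "aws_lambda_layer_version" "lambda_layer" {
  layer_name       = "lambda_layer_name"
  filename         = data.archive_file.layer_zip.output_path
  # A changed hash publishes a new layer version.
  source_code_hash = data.archive_file.layer_zip.output_base64sha256
}

With this layout, the manual zip step, the null_resource, and the replace_triggered_by trigger are no longer needed.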