Here is the scenario: I want to deploy an AWS Lambda function that depends on one or more Lambda layers. For example, the python folder contains the code for the Lambda layer:
```
root/
|_ python/
|  |_ example.py
|_ main.tf
```
with the content of example.py being

```python
def main():
    pass
```
To create a Lambda layer, you upload a zip file containing the source to AWS. For example, following link and link, you zip the python/ folder and upload it to AWS to create a new layer. With Terraform, you run terraform apply with:
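The original resource block is not preserved here; reconstructed from the plan output shown later in this post, a minimal sketch of the layer definition in main.tf looks roughly like this (the zip path and layer name are taken from that output):

```hcl
resource "aws_lambda_layer_version" "lambda_layer" {
  filename   = "./layer.zip"        # zip built from the python/ folder
  layer_name = "lambda_layer_name"
}
```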
However, after making new changes to the .py files and zipping them again, re-running terraform apply will not create a new version of the Lambda layer, because the “aws_lambda_layer_version” definition does not depend on any variables and therefore never updates.
One way to force re-creation of the layer is to run terraform destroy followed by terraform apply. But this creates unnecessary new versions even when the .py files are unchanged.
Conditional update
To re-create the layer only when example.py changes, we need a trigger on the resource. For this scenario, I think the best way is to calculate the hash of the generated zip file and use it as the trigger. For instance,
```hcl
# The target resource will be destroyed and recreated
# whenever the value of terraform_data.replacement_trigger changes
resource "aws_lambda_layer_version" "lambda_layer" {
  lifecycle {
    replace_triggered_by = [
      terraform_data.replacement_trigger
    ]
  }
}
```
Here, I introduce “terraform_data” as a placeholder for the hash value because, based on link, you can only reference managed resources in replace_triggered_by expressions.
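The trigger resource itself is not shown above; given the hash values that appear in the plan output below, a sketch of it (the zip path is an assumption) would be:

```hcl
resource "terraform_data" "replacement_trigger" {
  # Re-computed on every plan; changes only when the zip's contents change
  input = filesha256("./layer.zip")
}
```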
To simulate a run, this is the output of terraform plan on the first run:
```
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_lambda_layer_version.lambda_layer will be created
  + resource "aws_lambda_layer_version" "lambda_layer" {
      + arn                         = (known after apply)
      + created_date                = (known after apply)
      + filename                    = "./layer.zip"
      + id                          = (known after apply)
      + layer_arn                   = (known after apply)
      + layer_name                  = "lambda_layer_name"
      + signing_job_arn             = (known after apply)
      + signing_profile_version_arn = (known after apply)
      + skip_destroy                = false
      + source_code_hash            = (known after apply)
      + source_code_size            = (known after apply)
      + version                     = (known after apply)
    }

  # null_resource.prebuild will be created
  + resource "null_resource" "prebuild" {
      + id = (known after apply)
    }

  # terraform_data.replacement_trigger will be created
  + resource "terraform_data" "replacement_trigger" {
      + id     = (known after apply)
      + input  = "139821af806d684acb1bdb7d7f8b74d03aac992eb328ff6b19416f12a1834be7"
      + output = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.
```
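The input value on terraform_data.replacement_trigger above is just the hex-encoded SHA-256 of the zip's contents. As a sanity check outside Terraform, you can reproduce the same digest with a short Python sketch (the layer.zip path is an assumption):

```python
import hashlib

def filesha256(path: str) -> str:
    """Hex-encoded SHA-256 of a file's contents, mirroring Terraform's filesha256()."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large zip files don't need to fit in memory
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Example (path is an assumption; point it at your actual zip):
# print(filesha256("./layer.zip"))
```

If the printed digest matches the input in the plan, the trigger is wired to the right file.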
Change example.py to

```python
def main():
    print("Hello World")
    pass
```
After re-zipping and re-running terraform plan, the output is:
```
Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  ~ update in-place
-/+ destroy and then create replacement

Terraform will perform the following actions:

  # aws_lambda_layer_version.lambda_layer will be replaced due to changes in replace_triggered_by
-/+ resource "aws_lambda_layer_version" "lambda_layer" {
      ~ arn                         = "arn:aws:lambda:us-west-1:xxxxxxxxxxx:layer:lambda_layer_name:1" -> (known after apply)
      - compatible_architectures    = [] -> null
      - compatible_runtimes         = [] -> null
      ~ created_date                = "2023-11-01T19:50:16.596+0000" -> (known after apply)
      ~ id                          = "arn:aws:lambda:us-west-1:xxxxxxxxxxx:layer:lambda_layer_name:1" -> (known after apply)
      ~ layer_arn                   = "arn:aws:lambda:us-west-1:xxxxxxxxxxx:layer:lambda_layer_name" -> (known after apply)
      + signing_job_arn             = (known after apply)
      + signing_profile_version_arn = (known after apply)
      ~ source_code_hash            = "E5ghr4BtaErLG9t9f4t00DqsmS6zKP9rGUFvEqGDS+c=" -> (known after apply)
      ~ source_code_size            = 346 -> (known after apply)
      ~ version                     = "1" -> (known after apply)
        # (3 unchanged attributes hidden)
    }

  # terraform_data.replacement_trigger will be updated in-place
  ~ resource "terraform_data" "replacement_trigger" {
        id     = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
      ~ input  = "139821af806d684acb1bdb7d7f8b74d03aac992eb328ff6b19416f12a1834be7" -> "df63d70f577e1cdb868998d77e79869a16d5c064783a213b53b3b0818459f888"
      ~ output = "139821af806d684acb1bdb7d7f8b74d03aac992eb328ff6b19416f12a1834be7" -> (known after apply)
    }

Plan: 1 to add, 1 to change, 1 to destroy.
```
```
aws_lambda_layer_version.lambda_layer: Destroying... [id=arn:aws:lambda:us-west-1:xxxxxxxxx:layer:lambda_layer_name:1]
aws_lambda_layer_version.lambda_layer: Destruction complete after 0s
terraform_data.replacement_trigger: Modifying... [id=xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx]
terraform_data.replacement_trigger: Modifications complete after 0s [id=xxxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx]
aws_lambda_layer_version.lambda_layer: Creating...
aws_lambda_layer_version.lambda_layer: Creation complete after 5s [id=arn:aws:lambda:us-west-1:xxxxxxxxx:layer:lambda_layer_name:2]
```
As you can see, the layer’s version number changes from 1 to 2, and further terraform apply runs will prompt:
```
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
```
A problem
Apparently, the filesha256() function cannot take the path of a dynamically generated file. So the workaround right now is to use