Table of Contents
- Terraform File and Directory Structure Best Practices
- Terraform Type Constraints Explained (Through an Azure VM Example)
- Terraform Resource Meta-Arguments: count and for_each
- Terraform Lifecycle Rules: create_before_destroy
- Terraform Lifecycle ignore_changes
- Terraform Lifecycle prevent_destroy: What It Is and How to Demo It
- Terraform Lifecycle replace_triggered_by: What It Is and How to Demo It
- Terraform Custom Conditions: What They Are and How to Demo Them
- Terraform Dynamic Expressions: Why We Need Dynamic Blocks and How They Work with Azure NSG
- Terraform Conditional Expressions: Dynamically Naming an NSG Based on Environment
- Terraform Splat Expression: Collecting Values from Multiple Resources
- Terraform Built-in Functions: Useful String, List & Map Helpers
Terraform File and Directory Structure Best Practices
As your Terraform projects grow, keeping everything in a single file becomes messy and hard to maintain.
In this section, we’ll learn how to structure Terraform files properly and how Terraform decides the order in which resources are created using dependencies.
This will help you write clean, scalable, and error-free Terraform code.
Splitting Terraform Code into Multiple Files
Terraform allows you to split your configuration into multiple .tf files.
✔ You can move each block (provider, resources, variables, outputs, etc.) into different files
✔ Terraform automatically loads all .tf files in a directory
✔ File names can be anything meaningful
Example of a Clean File Structure
You might organize your project like this:
- main.tf → main resources
- providers.tf → provider configuration
- variables.tf → input variables
- outputs.tf → output variables
- locals.tf → local variables
- backend.tf → backend configuration
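On disk, such a project might look like this (the file names are the conventional ones; Terraform itself does not require them):

```text
project/
├── main.tf        # main resources
├── providers.tf   # provider configuration
├── variables.tf   # input variables
├── outputs.tf     # output variables
├── locals.tf      # local variables
└── backend.tf     # backend configuration
```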
⚠️ Important: File names don’t control execution order — dependencies do.
Some Blocks Must Be Inside Parent Blocks
Certain Terraform configurations must be nested inside parent blocks, such as the backend.
Terraform Backend Block Example
terraform {
backend "azurerm" {
resource_group_name = ""
storage_account_name = ""
container_name = ""
key = ""
}
}
Line-by-line Explanation
- terraform { ... }: the main Terraform configuration block
- backend "azurerm": specifies that the backend is Azure Resource Manager (Azure storage)
- resource_group_name: name of the resource group where the backend storage exists
- storage_account_name: Azure Storage Account used to store Terraform state
- container_name: blob container where the state file is kept
- key: the name of the Terraform state file
👉 This ensures Terraform stores its state remotely instead of locally, which is crucial for team projects.
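A filled-in version might look like this (all values below are placeholders for illustration, not names from this article):

```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"         # placeholder resource group
    storage_account_name = "tfstatestore12345"  # placeholder; must be globally unique
    container_name       = "tfstate"            # placeholder blob container
    key                  = "prod.terraform.tfstate"
  }
}
```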
Understanding Terraform Load Sequence
Terraform does not execute resources based on file order.
Instead, it determines the order using dependencies.
Some resources must exist before others, for example:
- A resource group must exist before a storage account
- A virtual network must exist before subnets
To handle this, Terraform supports:
- Implicit dependencies
- Explicit dependencies
Implicit Dependency (Automatic)
Terraform automatically understands dependencies when a resource uses values from another resource.
Example: Implicit Dependency
resource "azurerm_storage_account" "example" {
name = "mytmhstorageaccount10021"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "GRS"
tags = {
environment = local.common_tags.environment
}
}
Line-by-line Explanation
- resource "azurerm_storage_account" "example": declares a storage account resource in Azure
- name = "mytmhstorageaccount10021": the name of the storage account
- resource_group_name = azurerm_resource_group.example.name: refers to another resource's name
  👉 This creates an implicit dependency
- location = azurerm_resource_group.example.location: uses the location of the resource group
  👉 Reinforces the dependency
- account_tier = "Standard": sets the performance tier
- account_replication_type = "GRS": enables geo-redundant storage
- tags = { ... }: applies tags using local variables
✅ Terraform automatically knows that the resource group must be created first.
Explicit Dependency (Manual)
Sometimes Terraform cannot automatically detect a dependency, especially when:
- No attribute is directly referenced
- Order still matters logically
In those cases, we use depends_on.
Example: Explicit Dependency
resource "azurerm_storage_account" "example" {
name = "mytmhstorageaccount10021"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "GRS"
tags = {
environment = local.common_tags.environment
}
depends_on = [ azurerm_resource_group.example ]
}
Line-by-line Explanation
Everything above is the same as before, plus:
depends_on = [ azurerm_resource_group.example ]
Forces Terraform to create the resource group first
Even if Terraform wouldn’t detect the dependency automatically
⚠️ Use explicit dependency only when necessary — implicit is preferred.
Best Practices Summary
To keep your Terraform projects clean and reliable:
✔ Split code into meaningful files
✔ Don’t rely on file name order for execution
✔ Always use resource references to create implicit dependencies
✔ Use depends_on only when required
✔ Keep backend configuration inside the terraform block
✔ Organize directories logically as projects grow
Terraform Type Constraints Explained (Through an Azure VM Example)
In this section, we’ll understand Terraform Type Constraints by actually creating an Azure Virtual Machine step by step.
Instead of theory alone, we’ll see how each data type is used in real Terraform code.
We’ll cover:
- Primitive types: string, number, bool
- Collection types: list, map, set
- Structural types: tuple, object
Starting Point: Azure VM Terraform Documentation
To understand which fields expect which types, we first look at the official Azure VM resource documentation:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine
From here, we copy the sample VM code and then replace hardcoded values with typed variables.
Primitive Types
Primitive types hold only one value.
String Variable Example
variable "prefix" {
default = "tfvmex"
}
Line-by-line Explanation
- variable "prefix": declares a variable named prefix
- default = "tfvmex": assigns a default string value
This variable is commonly used to build resource names.
Number Variable Example
From the Azure VM documentation, inside storage_os_disk, we see:
disk_size_gb expects a number.
We define a number variable:
variable "storage_disk_size" {
type = number
description = "size of storage disk"
default = 80
}
Line-by-line Explanation
- type = number: enforces that only numeric values are allowed
- default = 80: sets the disk size to 80 GB by default
Now we use it in the VM resource:
storage_os_disk {
name = "myosdisk1"
caching = "ReadWrite"
create_option = "FromImage"
managed_disk_type = "Standard_LRS"
disk_size_gb = var.storage_disk_size
}
Explanation
disk_size_gb = var.storage_disk_size
Assigns the numeric variable to the disk size field
Boolean Variable Example
Azure VM has this property:
delete_os_disk_on_termination = true
This controls whether the OS disk is deleted when the VM is deleted.
We replace this with a boolean variable.
variable "is_disk_delete" {
type = bool
description = "delete the OS disk automatically when deleting the VM"
default = true
}
Line-by-line Explanation
- type = bool: only true or false is allowed
- default = true: the disk will be deleted by default
Now use it:
delete_os_disk_on_termination = var.is_disk_delete
Important Note
If you want to preserve data, set this to:
default = false
Verifying with Terraform Plan
Run:
terraform init
terraform plan
To see only the resources that will be created (Select-String is a PowerShell cmdlet; on Linux or macOS, pipe to grep instead):
terraform plan | Select-String "will be created"
Example output:
# azurerm_network_interface.main will be created
# azurerm_resource_group.example will be created
# azurerm_subnet.internal will be created
# azurerm_virtual_machine.main will be created
# azurerm_virtual_network.main will be created
This confirms Terraform is reading your types correctly.
List Type (Collection Type)
A list holds multiple values of the same type, in a fixed order.
Original Hardcoded Resource Group
resource "azurerm_resource_group" "example" {
name = "${var.prefix}-resources"
location = "West Europe"
}
We replace the hardcoded location with a list variable.
Defining a List Variable
variable "allowed_locations" {
type = list(string)
description = "allowed locations for the creation of resources"
default = ["West Europe", "North Europe", "East US"]
}
Line-by-line Explanation
- type = list(string): a list where every element must be a string
- default = [ ... ]: defines three allowed locations in order
Now use it:
resource "azurerm_resource_group" "example" {
name = "${var.prefix}-resources"
location = var.allowed_locations[0]
}
Explanation
- var.allowed_locations[0]: accesses the first element of the list
- Indexing starts at 0, so "West Europe" is selected
Map Type
A map is a set of key-value pairs.
We’ll use a map to define resource tags.
Defining a Map Variable
variable "allowed_tags" {
type = map(string)
description = "allowed tags for resources"
default = {
"environment" = "staging"
"department" = "devops"
}
}
Line-by-line Explanation
- type = map(string): keys are strings, values are strings
- default: defines two tags, environment and department
Now use the map:
tags = {
environment = var.allowed_tags["environment"]
department = var.allowed_tags["department"]
}
Explanation
- var.allowed_tags["environment"]: fetches the value for the key "environment"
- var.allowed_tags["department"]: fetches the department tag
Tuple Type
A tuple can hold multiple values of different types in a fixed order.
We define network configuration as a tuple.
Defining a Tuple Variable
variable "my_network_config" {
type = tuple([string, string, number, bool])
description = "VNet address, subnet address, subnet mask, a test flag"
default = ["10.0.0.0/16", "10.0.2.0/24", 24, true]
}
Line-by-line Explanation
- type = tuple([string, string, number, bool]): defines the exact type of each position in order
- default = [ ... ]: four values in the exact order of the tuple definition
Original Virtual Network Code
address_space = ["10.0.0.0/16"]
We replace it with tuple value:
address_space = [element(var.my_network_config, 0)]
Explanation
- element(var.my_network_config, 0): gets the first element of the tuple ("10.0.0.0/16")
- [ ... ]: wraps it into a list, because address_space expects a list of strings
⚠️ Important:
Even though the tuple gives a string, address_space requires a list, so we must use [].
Set Type
A set is like a list, but:
- No duplicate values allowed
- No guaranteed order
- Cannot use direct indexing
We define allowed VM sizes as a set.
Defining a Set Variable
variable "allowed_vm_sizes" {
type = set(string)
description = "allowed VM sizes"
default = ["Standard_DS1_v2", "Standard_DS2_v2"]
}
Line-by-line Explanation
- type = set(string): a unique collection of strings
- Duplicates are automatically removed
Accessing a Set Value
We cannot do:
var.allowed_vm_sizes[1] # ❌ Invalid
We must convert it to a list first:
vm_size = tolist(var.allowed_vm_sizes)[1]
Explanation
- tolist(var.allowed_vm_sizes): converts the set into a list
- [1]: selects the second element from the converted list
⚠️ Note: Order is not guaranteed when converting a set.
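Because the order is not guaranteed, a safer pattern (a sketch, not part of the original example) is to sort the converted list first, so a given index always refers to the same element:

```hcl
# Sorting makes the element at each index deterministic across runs
vm_size = sort(tolist(var.allowed_vm_sizes))[1]
```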
Object Type
An object groups multiple named fields of any type, like a configuration object.
We define a VM configuration object.
Defining an Object Variable
variable "vm_config" {
type = object({
size = string
publisher = string
offer = string
sku = string
version = string
})
description = "VM Configuration"
default = {
size = "Standard_DS1_v2"
publisher = "Canonical"
offer = "0001-com-ubuntu-server-jammy"
sku = "22_04-lts"
version = "latest"
}
}
Line-by-line Explanation
- type = object({ ... }): defines the exact structure and types of each field
- Each field has a name and a type
- default provides values for all fields
Using the Object in VM Resource
storage_image_reference {
publisher = var.vm_config.publisher
offer = var.vm_config.offer
sku = var.vm_config.sku
version = var.vm_config.version
}
Explanation
- var.vm_config.publisher: accesses the publisher field from the object
- The same pattern applies to offer, sku, and version
This keeps VM image configuration clean and centralized.
Summary
In this section, you learned how Terraform type constraints work by using:
- string → resource names and prefixes
- number → disk size
- bool → delete OS disk flag
- list(string) → multiple locations
- map(string) → tags
- tuple(...) → mixed network configuration
- set(string) → unique VM sizes
- object({...}) → structured VM configuration
Understanding these types is essential to avoid type mismatch errors and to write robust, reusable Terraform code.
Terraform Resource Meta-Arguments: count and for_each
In this section, we’ll learn about Terraform Resource Meta-Arguments, specifically:
- count
- for_each
These meta-arguments allow you to create multiple resources in a loop using collections like lists, sets, and maps.
We’ll use a practical example: creating multiple Azure Storage Accounts, and we’ll also see how to output the names of created resources, which is a very common real-world requirement.
Why Meta-Arguments Are Needed
Without count or for_each, you would have to:
- Write one resource block per storage account
- Duplicate the same code again and again
With meta-arguments, you can:
- Write the resource once
- Dynamically create many instances
- Control creation using variables
This makes your Terraform code:
- Cleaner
- More scalable
- Easier to maintain
Using count to Create Multiple Resources
count is best suited when:
- You are working with a list
- The order of items matters
- You want to access elements using an index
Defining a List of Storage Account Names
variable "storage_account_names" {
type = list(string)
description = "storage account names for creation"
default = ["myteststorageacc222j22", "myteststorageacc444l44"]
}
Line-by-line Explanation
- type = list(string): declares a list where every element must be a string
- default = [ ... ]: defines two storage account names in a fixed order
Creating Resources Using count
resource "azurerm_storage_account" "example" {
count = length(var.storage_account_names)
name = var.storage_account_names[count.index]
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "GRS"
tags = {
environment = "staging"
}
}
Line-by-line Explanation
- count = length(var.storage_account_names): sets how many resources to create, based on the list length
- count.index: provides the current loop index (0, 1, 2, …)
- var.storage_account_names[count.index]: selects the correct name from the list using the index
This ensures:
- First storage account → first name
- Second storage account → second name
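Because count accepts any numeric expression, it also enables the common on/off pattern below (the variable name is illustrative, not from the example above):

```hcl
variable "create_storage" {
  type    = bool
  default = true
}

resource "azurerm_storage_account" "optional" {
  # 1 instance when enabled, 0 instances when disabled
  count                    = var.create_storage ? 1 : 0
  name                     = "myoptionalstorageacc01"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```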
Output with count
Because count creates a list of resources, we can use the splat expression ([*]) to collect attributes from all instances.
output "created_storage_account_names" {
value = azurerm_storage_account.example[*].name
}
Line-by-line Explanation
- azurerm_storage_account.example: refers to all storage account instances created using count
- [*]: the splat operator, meaning "apply this to every resource in the list"
- .name: extracts the name attribute from each storage account
If two storage accounts are created, the output will be:
[
"myteststorageacc222j22",
"myteststorageacc444l44"
]
⚠️ This syntax works only because count creates a list.
Using for_each to Create Multiple Resources
for_each is best suited when:
- You are working with a set or a map
- You want stable resource identity
- Order does not matter
- You want to avoid index-based behavior
Why for_each Does Not Work with Lists
Lists:
- Can contain duplicate values
- Are ordered
- Are not ideal for stable addressing
for_each requires:
- A set (unique values), or
- A map (key-value pairs)
Defining a Set of Storage Account Names
variable "storage_account_names" {
type = set(string)
description = "storage account names for creation"
default = ["myteststorageacc222j22", "myteststorageacc444l44"]
}
Line-by-line Explanation
- type = set(string): declares a collection of unique strings
- Duplicates are automatically removed
- Order is not guaranteed
Creating Resources Using for_each
resource "azurerm_storage_account" "example" {
for_each = var.storage_account_names
name = each.key
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "GRS"
tags = {
environment = "staging"
}
}
Line-by-line Explanation
- for_each = var.storage_account_names: iterates over each element in the set
- each.key: for a set, the key is the value itself; this becomes the storage account name
- each.value: for a set, each.key and each.value are the same
If this were a map:
- each.key → map key
- each.value → map value
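For comparison, here is a hedged sketch of the same resource driven by a map, where each.key and each.value differ (the map contents are illustrative):

```hcl
variable "storage_accounts" {
  type = map(string)
  default = {
    "myteststorageacc222j22" = "staging"
    "myteststorageacc444l44" = "production"
  }
}

resource "azurerm_storage_account" "example" {
  for_each                 = var.storage_accounts
  name                     = each.key   # map key -> account name
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "GRS"
  tags = {
    environment = each.value  # map value -> environment tag
  }
}
```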
Output with for_each (Important Difference)
With for_each, this will not work:
azurerm_storage_account.example[*].name # ❌ Invalid
Why?
- count creates a list of resources
- for_each creates a map of resources
So we must use a for expression.
Correct Output with for_each
output "created_storage_account_names" {
value = [for sa in azurerm_storage_account.example : sa.name]
}
Line-by-line Explanation
- azurerm_storage_account.example: this is a map of resources
- for sa in ...: iterates over each resource in the map
- sa.name: extracts the name attribute from each storage account
- [ ... ]: collects all the names into a list of strings
This produces:
[
"myteststorageacc222j22",
"myteststorageacc444l44"
]
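A for expression with braces can also produce a map output, for example pairing each for_each key with its blob endpoint (a sketch; primary_blob_endpoint is an exported attribute of azurerm_storage_account):

```hcl
output "storage_account_endpoints" {
  # keys are the for_each keys, values are the blob endpoints
  value = { for k, sa in azurerm_storage_account.example : k => sa.primary_blob_endpoint }
}
```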
Key Differences: count vs for_each
| Feature | count | for_each |
|---|---|---|
| Input type | Number / List | Set / Map |
| Resource collection | List of resources | Map of resources |
| Access pattern | count.index | each.key, each.value |
| Output with [*] | ✅ Works | ❌ Does not work |
| Stable identity | ❌ Index-based | ✅ Key-based |
| Handles duplicates | ✅ Yes | ❌ No (unique values only) |
Summary
In this section, you learned:
- Why Terraform meta-arguments are needed
- How to use count with a list and count.index
- How to output resource names using splat syntax with count
- Why for_each works with sets and maps, not lists
- How each.key and each.value work
- Why outputs with for_each require a for expression
This section gives you a strong foundation for writing dynamic, scalable Terraform configurations.
Terraform Lifecycle Rules: create_before_destroy
In this section, we’ll focus only on the Terraform lifecycle rule create_before_destroy:
- What it does
- Why it exists
- When you should use it
- How to clearly demo it in practice using Azure
This lifecycle rule is essential for building safe, zero-downtime infrastructure changes.
What Is create_before_destroy?
By default, when a Terraform change requires a resource replacement, Terraform follows this order:
- Destroy the old resource
- Create the new resource
This is called destroy-before-create.
For many critical resources, this can cause:
- Downtime
- Broken dependencies
- Temporary service outages
The lifecycle rule:
lifecycle {
create_before_destroy = true
}
Changes the behavior to:
- Create the new resource first
- Then destroy the old resource
This is called create-before-destroy.
Why create_before_destroy Is Important
You should use create_before_destroy when:
- A change forces resource replacement
- The resource is critical (network, storage, compute)
- You want to avoid downtime
- Other resources depend on this resource
Common scenarios:
- Renaming a resource
- Changing immutable properties
- Blue-green style deployments
- High-availability systems
When Does Terraform Replace a Resource?
Terraform replaces a resource when:
- An attribute is marked as ForceNew by the provider
- The change cannot be applied in-place
Examples:
- Changing a storage account name
- Changing a VM OS disk image
- Changing certain network properties
In such cases, Terraform shows:
-/+ resource_name (replace)
This means:
- The resource will be destroyed and recreated
- But the order is not shown in the plan
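An abbreviated, illustrative plan excerpt for such a change looks like this (names are placeholders):

```text
# azurerm_storage_account.example must be replaced
-/+ resource "azurerm_storage_account" "example" {
      ~ name = "oldaccountname" -> "newaccountname" # forces replacement
      ...
    }
```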
Demo create_before_destroy
A very important learning point:
You cannot see the difference in
terraform plan.
The difference appears only duringterraform apply, in the execution order.
We demo this by:
- Creating a resource
- Changing an immutable field
- Watching the order of operations during apply
Step 1: Create a Simple Azure Storage Account
resource "azurerm_resource_group" "example" {
name = "rg-lifecycle-demo"
location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "lifecycledemoacc01abc"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
Apply once:
terraform apply
This creates the initial infrastructure.
Step 2: Force a Replacement (Without Lifecycle Rule)
Now change the storage account name:
name = "lifecycledemoacc02abc"
Run:
terraform apply
You will see logs like:
Destroying azurerm_storage_account.example
Destruction complete
Creating azurerm_storage_account.example
Creation complete
What This Shows
Order is:
- Destroy old resource
- Create new resource
This is the default Terraform behavior.
Step 3: Add create_before_destroy
Now add the lifecycle rule:
resource "azurerm_storage_account" "example" {
name = "lifecycledemoacc02abc"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
lifecycle {
create_before_destroy = true
}
}
Change the name again:
name = "lifecycledemoacc03abc"
Run:
terraform apply
Now you will see:
Creating azurerm_storage_account.example
Creation complete
Destroying azurerm_storage_account.example
Destruction complete
What This Shows
Order is now:
- Create new resource
- Destroy old resource
This proves that create_before_destroy changes the execution order.
Making the Demo Clearer with Sequential Execution
Terraform may run operations in parallel, which can hide the order.
To make the demo very clear, run:
terraform apply -parallelism=1
This forces Terraform to:
- Execute one operation at a time
- Clearly show:
- Destroy → Create (default)
- Create → Destroy (create_before_destroy)
This is ideal for:
- Screen recordings
- Blog screenshots
- Teaching demos
Important Azure Limitation
Azure storage account names must be:
- Globally unique
So for this demo:
- You must use a new unique name each time
Example sequence:
- lifecycledemoacc01abc
- lifecycledemoacc02abc
- lifecycledemoacc03abc
If you try to reuse the same name, Azure will block creation and the demo will fail.
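One way to avoid inventing a new unique name by hand each time is to append a random suffix (a sketch using the hashicorp/random provider, not part of the original demo):

```hcl
resource "random_string" "suffix" {
  length  = 6
  upper   = false
  special = false
}

resource "azurerm_storage_account" "example" {
  # keep the total length within Azure's 3-24 lowercase alphanumeric limit
  name                     = "lifecycledemo${random_string.suffix.result}"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"

  lifecycle {
    create_before_destroy = true
  }
}
```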
Key Points to Remember
- create_before_destroy applies only when a resource is being replaced
- It does not affect in-place updates
- It may temporarily create two resources at the same time
- The platform must allow both to exist simultaneously
- The difference is visible only during
terraform apply, not interraform plan
Summary
In this section, you learned:
- Default Terraform behavior: destroy → create
- What create_before_destroy changes: create → destroy
- Why this rule is important for zero-downtime changes
- How to demo it by:
- Changing an immutable field
- Running terraform apply
- Observing the execution order in logs
This lifecycle rule is a core building block for writing safe, production-ready Terraform configurations.
Terraform Lifecycle ignore_changes
In this section, we’ll learn about another very important Terraform lifecycle rule: ignore_changes.
We’ll cover:
- What ignore_changes does
- Why it is needed
- When you should use it
- How to demo it clearly using an Azure Resource Group and Storage Account
This rule is essential when you want Terraform to stop managing certain attributes of a resource.
What Is ignore_changes?
By default, Terraform continuously tries to make the real infrastructure match exactly what is written in your configuration.
If someone changes a resource manually in the Azure Portal, Terraform will:
- Detect the difference during terraform plan
- Try to revert it back during terraform apply
The lifecycle rule:
lifecycle {
ignore_changes = [ ... ]
}
Tells Terraform:
“If this specific attribute changes outside Terraform,
do not treat it as drift and do not try to fix it.”
In simple words:
- Terraform will ignore changes to selected fields
- Those fields become partially unmanaged by Terraform
Why ignore_changes Is Useful
You should use ignore_changes when:
- Some attributes are modified by:
- Other teams
- Other tools
- The cloud platform itself
- You do not want Terraform to:
- Overwrite manual changes
- Continuously show drift in every plan
Common real-world examples:
- Tags managed by a governance tool
- Auto-generated fields (timestamps, IDs)
- Scaling values changed by autoscaling
- Temporary hotfix changes
How to Demo ignore_changes
We will demo this using:
- One Azure Storage Account
- One attribute: tags.environment
We will:
- Create the resource
- Change the tag manually in Azure
- Run terraform plan
- Observe the difference:
  - Without ignore_changes
  - With ignore_changes
Step 1: Create a Storage Account with a Tag
resource "azurerm_resource_group" "example" {
name = "rg-ignore-demo"
location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "ignoredemostore01abc"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
tags = {
environment = "staging"
}
}
Apply it:
terraform apply
This creates a storage account with:
environment = "staging"
Step 2: Change the Tag Manually in Azure
Go to:
- Azure Portal
- Open the storage account
- Go to Tags
Change:
environment = "staging"
To:
environment = "production"
Save the change.
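If you prefer the CLI over the portal, the same out-of-band change can be made with the Azure CLI (resource names as in the demo above):

```shell
# Overwrites the tags on the storage account outside of Terraform
az storage account update \
  --name ignoredemostore01abc \
  --resource-group rg-ignore-demo \
  --tags environment=production
```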
Now Terraform state and real infrastructure are out of sync.
Step 3: Run terraform plan (Without ignore_changes)
Run:
terraform plan
You will see something like:
~ azurerm_storage_account.example
tags.environment: "production" => "staging"
What This Shows
Terraform is saying:
- The real value is "production"
- The config says "staging"
- Terraform wants to change it back to "staging"
This is normal default behavior.
Step 4: Add ignore_changes
Now update the resource with a lifecycle block:
resource "azurerm_storage_account" "example" {
name = "ignoredemostore01abc"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
tags = {
environment = "staging"
}
lifecycle {
ignore_changes = [
tags.environment
]
}
}
Line-by-line Explanation
- lifecycle { ... }: declares lifecycle rules for this resource
- ignore_changes = [ tags.environment ]: tells Terraform to ignore drift in the environment tag only
Terraform will still manage:
- The resource
- All other attributes
But it will stop managing this one field.
Step 5: Run terraform plan Again
Run:
terraform plan
Now you will see:
- No changes detected
- Terraform does not try to revert the tag
Even though:
- Config says: "staging"
- Azure says: "production"
Terraform stays silent.
Ignoring Multiple Attributes
You can ignore multiple fields:
lifecycle {
ignore_changes = [
tags,
access_tier,
account_replication_type
]
}
This tells Terraform to ignore changes to:
- All tags
- Access tier
- Replication type
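Terraform also supports the keyword all here, which tells it to ignore every attribute after creation; use this sparingly, since the resource becomes almost entirely unmanaged:

```hcl
lifecycle {
  # Terraform will create the resource but never update it afterwards
  ignore_changes = all
}
```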
Important Rules About ignore_changes
- It applies only to future drift, not past
- It does not delete the attribute from state
- Terraform still manages the resource itself
- Only the specified fields are ignored
- Overuse can hide real configuration problems
When Not to Use ignore_changes
Avoid using it when:
- The field is critical for correctness
- You want full control from Terraform
- You are trying to hide frequent mistakes
ignore_changes should be:
- Used carefully
- Documented clearly
- Limited to specific attributes
Key Takeaway
You can summarize this clearly in your blog:
- Default behavior:
  - Terraform detects drift
  - Terraform tries to fix drift
- With ignore_changes:
  - Terraform detects drift
  - Terraform intentionally ignores it
This is how you allow controlled manual changes without fighting Terraform.
Summary
In this section, you learned:
- What ignore_changes does
- Why it is useful in real projects
- How Terraform behaves without it
- How to demo it by:
  - Changing a field manually in Azure
  - Running terraform plan
  - Observing drift detection
  - Adding ignore_changes and re-running plan
- How to safely ignore selected attributes
This lifecycle rule is essential for handling partial ownership and real-world drift scenarios in Terraform.
Terraform Lifecycle prevent_destroy: What It Is and How to Demo It
In this section, we’ll learn about the Terraform lifecycle rule prevent_destroy:
- What it does
- Why it exists
- When you should use it
- How to demo it clearly using Azure
This rule is designed to protect important resources from accidental deletion.
What Is prevent_destroy?
By default, Terraform allows you to:
- Delete resources with terraform destroy
- Delete resources when you remove them from configuration
- Delete resources when a replacement is required
The lifecycle rule:
lifecycle {
prevent_destroy = true
}
Tells Terraform:
“This resource must never be destroyed by Terraform.”
If any plan or apply would destroy this resource, Terraform will:
- Stop the operation
- Return an error
- Refuse to continue
This acts as a safety lock on critical infrastructure.
Why prevent_destroy Is Important
You should use prevent_destroy when:
- The resource is critical
- Deleting it would cause:
- Data loss
- Service outage
- Compliance violations
Common real-world examples:
- Production databases
- Key Vaults and secrets
- Storage accounts with important data
- Shared networking components
In short:
It protects you from human mistakes.
How to Demo prevent_destroy
We will demo this using:
- One Azure Resource Group
- One Azure Storage Account
We will:
- Create the resource
- Enable prevent_destroy
- Try to destroy it
- Observe how Terraform blocks the operation
Step 1: Create a Basic Storage Account
resource "azurerm_resource_group" "example" {
name = "rg-prevent-destroy-demo"
location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "preventdestroydemo01abc"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
Apply it:
terraform apply
This creates the resource normally.
Step 2: Add prevent_destroy
Now protect the storage account with a lifecycle block:
resource "azurerm_storage_account" "example" {
name = "preventdestroydemo01abc"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
lifecycle {
prevent_destroy = true
}
}
Apply again:
terraform apply
No changes occur, but the resource is now protected.
Step 3: Try to Destroy the Resource
Now attempt to destroy the infrastructure:
terraform destroy
Terraform will fail with an error similar to:
Error: Instance cannot be destroyed
Resource azurerm_storage_account.example has lifecycle.prevent_destroy set,
but the plan calls for this resource to be destroyed.
What This Shows
Terraform is telling you:
- This resource is marked as non-destructible
- The operation is blocked
- Nothing will be deleted
This proves that prevent_destroy is working.
Step 4: How to Intentionally Destroy a Protected Resource
To destroy a resource with prevent_destroy, you must explicitly remove the protection first.
- Remove the lifecycle block:
lifecycle {
prevent_destroy = true
}
- Run:
terraform apply
- Then run:
terraform destroy
Only now will Terraform allow the resource to be deleted.
This ensures:
- Deletion is always a conscious, intentional action
Important Rules About prevent_destroy
- It blocks:
  - terraform destroy
  - Replacements that require destroy
  - Deletions caused by config changes
- It does not block:
- In-place updates
- Reading the resource
- Drift detection
- It applies only to Terraform actions
- It does not prevent manual deletion in the Azure Portal
When Not to Use prevent_destroy
Avoid using it when:
- The resource is temporary
- You use frequent tear-down environments (dev, test)
- You rely on automated cleanup pipelines
Overusing prevent_destroy can:
- Block automation
- Cause stuck pipelines
- Require manual intervention
Use it only for truly critical resources.
Summary
In this section, you learned:
- What prevent_destroy does
- Why it is essential for protecting critical infrastructure
- How Terraform behaves without it
- How to demo it by:
  - Adding prevent_destroy
  - Running terraform destroy
  - Observing the blocked operation
- How to safely remove the protection when deletion is required
This lifecycle rule is Terraform’s strongest safety mechanism for preventing catastrophic accidental deletions in production environments.
Terraform Lifecycle replace_triggered_by: What It Is and How to Demo It
In this section, we’ll learn about the Terraform lifecycle rule replace_triggered_by:
- What it does
- Why it exists
- When you should use it
- How to demo it clearly using Azure
This rule is used when you want Terraform to force replacement of a resource when some other resource or attribute changes.
What Is replace_triggered_by?
By default, Terraform replaces a resource only when:
- One of its own attributes changes
- And that change requires replacement
The lifecycle rule:
lifecycle {
replace_triggered_by = [ ... ]
}
Tells Terraform:
“If this other resource or attribute changes,
then recreate this resource as well,
even if this resource itself did not change.”
In simple words:
- You define a trigger
- When the trigger changes
- Terraform forces replacement of this resource
Why replace_triggered_by Is Important
You should use replace_triggered_by when:
- One resource is tightly coupled to another
- An in-place update is not safe
- You want to guarantee a fresh recreation
Common real-world examples:
- Recreate a VM when its image version changes
- Recreate an app when a config file changes
- Recreate a resource when a subnet changes
- Recreate a resource when a secret or key changes
In short:
It gives you explicit control over replacement behavior.
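For instance, the secret-rotation case above might be sketched like this; the resource names and the Key Vault setup are illustrative, not part of the demo:

```hcl
resource "azurerm_linux_virtual_machine" "app" {
  # ... VM arguments omitted for brevity ...

  lifecycle {
    # Rebuild the VM whenever the secret gets a new version.
    replace_triggered_by = [
      azurerm_key_vault_secret.app_config.version
    ]
  }
}
```

Here the VM is recreated on every secret rotation, which guarantees it boots with the fresh value instead of relying on an in-place refresh.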
How to Demo replace_triggered_by
We will demo this using:
- One Azure Resource Group
- One Azure Storage Account
- One simple trigger resource
We will:
- Create the resources
- Link them using replace_triggered_by
- Change only the trigger
- Observe that Terraform replaces the storage account
Step 1: Create a Basic Resource Group
resource "azurerm_resource_group" "example" {
name = "rg-replace-trigger-demo"
location = "West Europe"
}
Apply once:
terraform apply
This creates the resource group.
Step 2: Create a Trigger Resource
We use a null_resource as a simple trigger.
resource "null_resource" "trigger" {
triggers = {
version = "v1"
}
}
Explanation
- null_resource: a Terraform-only resource used for triggering behavior
- triggers = { version = "v1" }: any change to this value will cause this resource to be replaced
This will act as our replacement trigger.
Apply:
terraform apply
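As a side note, on Terraform 1.4 and later the built-in terraform_data resource can play the same trigger role without installing the null provider; a minimal sketch:

```hcl
resource "terraform_data" "trigger" {
  # Changing this value replaces the terraform_data instance,
  # which can then drive replace_triggered_by on other resources.
  input = "v1"
}
```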
Step 3: Create a Storage Account Without Any Direct Dependency
resource "azurerm_storage_account" "example" {
name = "replacetriggerdemo01abc"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
Apply again:
terraform apply
At this point:
- Resource group exists
- Trigger resource exists
- Storage account exists
Step 4: Add replace_triggered_by
Now link the storage account lifecycle to the trigger.
resource "azurerm_storage_account" "example" {
name = "replacetriggerdemo01abc"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
lifecycle {
replace_triggered_by = [
null_resource.trigger
]
}
}
Apply:
terraform apply
No changes occur, but the dependency is now registered.
Step 5: Change Only the Trigger
Now change only the trigger value:
resource "null_resource" "trigger" {
triggers = {
version = "v2"
}
}
Note:
- We did not change anything in the storage account
- Only the trigger changed
Run:
terraform plan
You will see:
-/+ azurerm_storage_account.example (replace)
What This Shows
This proves that:
- The storage account is being replaced
- Not because its own attributes changed
- But because another resource changed
This is exactly what replace_triggered_by is designed for.
Using Real Resources as Triggers
Instead of null_resource, in real projects you often use:
- A subnet ID
- A VM image ID
- A Key Vault secret version
- A configuration resource
Example:
lifecycle {
replace_triggered_by = [
azurerm_subnet.example.id
]
}
This means:
If the subnet changes, recreate this resource.
Important Rules About replace_triggered_by
- It forces replacement, not in-place update
- It works only when the trigger resource is changed or replaced
- It does not override provider rules
- It can cause unexpected recreations if overused
Use it carefully and only when replacement is truly required.
Summary
In this section, you learned:
- What replace_triggered_by does
- Why it is useful for tightly coupled resources
- How Terraform behaves without it
- How to demo it by:
  - Creating a trigger resource
  - Linking it using replace_triggered_by
  - Changing only the trigger
  - Observing forced replacement in terraform plan
- How this rule gives you explicit control over resource recreation
This lifecycle rule is a powerful tool for handling intentional, dependency-driven replacements in production Terraform configurations.
Terraform Custom Conditions: What They Are and How to Demo Them
In this section, we’ll learn about Terraform Custom Conditions, also called:
- precondition
- postcondition
These allow you to validate assumptions about your infrastructure and fail early if something is wrong.
We’ll cover:
- What custom conditions are
- Why they are useful
- When to use precondition and postcondition
- How to demo them clearly using an Azure Storage Account
This feature is extremely useful for building safe, self-validating Terraform code.
What Are Custom Conditions?
Terraform custom conditions let you attach logical checks to:
- A resource
- A data source
- An output
There are two types:
precondition # Checked before creating or updating a resource
postcondition # Checked after the resource is created or read
If the condition is false, Terraform will:
- Stop the plan or apply
- Show a clear error message
- Refuse to continue
In simple words:
Custom conditions let you say:
“This must be true, otherwise Terraform should fail.”
Why Custom Conditions Are Important
You should use custom conditions when:
- You want to enforce rules in code
- You want to catch mistakes before deployment
- You want to protect against invalid configurations
Common real-world examples:
- Enforce allowed locations
- Enforce naming conventions
- Enforce minimum disk size
- Prevent use of unsupported VM sizes
- Validate relationships between resources
In short:
They turn Terraform into a self-validating system.
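As a taste of the "enforce allowed locations" case from the list above, a precondition might look like this; the allow-list values are illustrative:

```hcl
resource "azurerm_resource_group" "example" {
  name     = "rg-allowed-locations"
  location = var.location

  lifecycle {
    precondition {
      # Fail the plan unless the chosen region is on the allow-list.
      condition     = contains(["westeurope", "northeurope"], var.location)
      error_message = "Location must be westeurope or northeurope."
    }
  }
}
```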
Difference Between precondition and postcondition
- precondition
  - Checked before creating or updating a resource
  - Prevents invalid plans from running
- postcondition
  - Checked after a resource is created or read
  - Validates what was actually provisioned
Most beginner demos start with precondition, because it is easier to understand.
How to Demo Custom Conditions
We will demo this using:
- One Azure Storage Account
- One simple rule:
  - Storage account name must start with "demo"
We will:
- Create a resource with a valid name
- Add a precondition
- Change the name to an invalid value
- Observe Terraform failing with a custom error
Step 1: Create a Basic Storage Account
resource "azurerm_resource_group" "example" {
name = "rg-condition-demo"
location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "democonditionacc01"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
Apply once:
terraform apply
This works normally.
Step 2: Add a precondition
Now add a custom condition to the storage account.
resource "azurerm_storage_account" "example" {
name = "democonditionacc01"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
lifecycle {
precondition {
condition = startswith(self.name, "demo")
error_message = "Storage account name must start with 'demo'."
}
}
}
Line-by-line Explanation
- lifecycle { ... }: declares lifecycle rules for this resource
- precondition { ... }: defines a validation rule that runs before creation or update
- condition = startswith(self.name, "demo"): checks that the storage account name begins with "demo"
- error_message = "...": message shown if the condition fails
Apply again:
terraform apply
No change occurs, because the condition is satisfied.
Step 3: Break the Condition Intentionally
Now change the name to an invalid value:
name = "invalidacc01"
Run:
terraform plan
You will see an error like:
Error: Resource precondition failed
Storage account name must start with 'demo'.
What This Shows
This proves that:
- Terraform evaluated the condition
- The condition returned false
- Terraform stopped before creating or modifying anything
This is the core power of custom conditions.
Demo Using postcondition
Now let’s see a simple postcondition.
We will check that the storage account location is really "West Europe".
resource "azurerm_storage_account" "example" {
name = "democonditionacc01"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
lifecycle {
postcondition {
condition = self.location == "West Europe"
error_message = "Storage account was not created in West Europe."
}
}
}
What This Does
- Terraform creates or reads the resource
- Then checks the condition
- If the actual location is not "West Europe", Terraform fails
This validates the real result, not just the input.
Where Else Can You Use Custom Conditions?
You can use custom conditions in:
- resource blocks
- data blocks
- output blocks
Example on output:
output "storage_account_name" {
value = azurerm_storage_account.example.name
precondition {
condition = length(azurerm_storage_account.example.name) > 3
error_message = "Storage account name is too short."
}
}
This validates outputs before showing them.
Important Rules About Custom Conditions
- They fail the plan or apply immediately
- They do not fix problems, only detect them
- They improve safety, not automation
- Overuse can make configs too strict
- They should contain clear error messages
When Not to Use Custom Conditions
Avoid using them when:
- The rule is already enforced by the provider
- The rule is too flexible to express in code
- You want to allow experimentation in dev
Use them mainly for:
- Production guardrails
- Organizational policies
- Hard technical requirements
Summary
In this section, you learned:
- What custom conditions are
- The difference between precondition and postcondition
- Why they are important for safe Terraform code
- How to demo them by:
  - Adding a precondition
  - Breaking the rule intentionally
  - Observing Terraform fail with a custom error
- How to validate real infrastructure using postcondition
Custom conditions turn Terraform from a simple provisioning tool into a rule-enforcing, self-validating infrastructure platform.
Terraform Dynamic Expressions: Why We Need Dynamic Blocks and How They Work with Azure NSG
In this section, we’ll understand why Terraform dynamic blocks are needed, how NSG rules look without dynamic blocks, and why in this demo we store rule values in locals and use them inside a dynamic block instead of looping through a simple list.
This explanation is based on your exact Azure Network Security Group demo code.
For full details, see the official azurerm_network_security_group documentation on the Terraform Registry.
The Core Problem: Repeated Nested Blocks
In Azure, an NSG can contain many security_rule blocks.
Without dynamic blocks, Terraform code looks like this:
resource "azurerm_network_security_group" "example" {
# name, location, resource_group_name and other required rule arguments omitted for brevity
security_rule {
name = "Allow-SSH"
priority = 100
destination_port_range = "22"
description = "Allow SSH"
}
security_rule {
name = "Allow-HTTP"
priority = 200
destination_port_range = "80"
description = "Allow HTTP"
}
security_rule {
name = "Allow-HTTPS"
priority = 300
destination_port_range = "443"
description = "Allow HTTPS"
}
}
Problems with This Approach
- Every rule is hardcoded
- The same block structure is repeated many times
- Adding or removing rules requires:
- Editing the resource block itself
- Hard to reuse in modules
- Hard to scale when you have many rules
In simple words:
This is manual configuration, not scalable Infrastructure as Code.
Why We Need Dynamic Blocks
A dynamic block allows Terraform to:
- Generate nested blocks using a loop
- Separate data from logic
- Add or remove rules by changing only the data
- Keep the resource definition generic and reusable
In simple words:
Instead of writing rules as code,
we write rules as data,
and let Terraform generate the code.
This is the main reason dynamic blocks exist.
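In skeleton form, a dynamic block has three parts: the name of the nested block to generate, a collection to iterate, and a content block acting as the template (BLOCK_NAME and the local value are placeholders):

```hcl
dynamic "BLOCK_NAME" {
  for_each = local.some_collection   # map, set, or list to iterate over
  content {
    # One nested BLOCK_NAME block is generated per element.
    # Inside content, BLOCK_NAME.key and BLOCK_NAME.value
    # refer to the current element.
  }
}
```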
Why Store Values in locals Instead of Hardcoding?
In your demo, you defined NSG rules in locals:
locals {
nsg_rules = {
"allow_http" = {
priority = 100
destination_port_range = "80"
description = "Allow HTTP"
},
"allow_https" = {
priority = 110
destination_port_range = "443"
description = "Allow HTTPS"
}
}
}
This design is intentional and very important.
Why Not Hardcode Rules in the Resource?
If rules are hardcoded:
- You must edit the resource every time
- Code becomes long and repetitive
- Difficult to reuse in modules
- Hard to automate rule generation
By moving rules to locals:
- Resource code becomes clean and generic
- Rules become pure data
- Adding a rule means:
  - Add one entry in locals
  - No change to resource logic
Why Not Use a Simple List?
A simple list might look like this:
[
{
name = "allow_http"
priority = 100
port = "80"
},
{
name = "allow_https"
priority = 110
port = "443"
}
]
This works, but it has drawbacks:
- Rules are identified by index, not by name
- Reordering the list can cause unnecessary changes
- Harder to track which rule changed
- Less predictable behavior
Why Use a Map in locals?
Your nsg_rules is a map, not a list:
nsg_rules = {
"allow_http" = { ... }
"allow_https" = { ... }
}
This gives important advantages:
- Each rule has a stable identity (map key)
- Terraform tracks rules by key, not by index
- Reordering rules does not cause drift
- Easy to add, remove, or rename rules
- More predictable plans and applies
In short:
Maps give stable, predictable behavior
Lists give fragile, index-based behavior
This is why maps are preferred for dynamic blocks.
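If your rules already exist as a list, a for expression can re-key them into a map before the dynamic block consumes them; a sketch assuming each object carries a name field:

```hcl
locals {
  rules_list = [
    { name = "allow_http",  priority = 100, destination_port_range = "80",  description = "Allow HTTP" },
    { name = "allow_https", priority = 110, destination_port_range = "443", description = "Allow HTTPS" },
  ]

  # Re-key by rule name so each rule gets a stable identity.
  nsg_rules = { for r in local.rules_list : r.name => r }
}
```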
How the Dynamic Block Uses the Local Map
From your main.tf:
dynamic "security_rule" {
for_each = local.nsg_rules
content {
name = security_rule.key
priority = security_rule.value.priority
destination_port_range = security_rule.value.destination_port_range
description = security_rule.value.description
}
}
How the Loop Works
for_each = local.nsg_rules
Terraform loops over each item in the map
For each iteration:
- security_rule.key → the map key, e.g. "allow_http" or "allow_https"
- security_rule.value → the object containing priority, destination_port_range, and description
Why Use security_rule.key for the Name?
name = security_rule.key
This ensures:
- Rule name comes from the map key
- Rule identity is stable
- Renaming a key clearly means:
- Replace this specific rule
This is much safer than using list indexes.
What Terraform Generates Internally
From your two rules in locals, Terraform generates:
security_rule {
name = "allow_http"
priority = 100
destination_port_range = "80"
description = "Allow HTTP"
}
security_rule {
name = "allow_https"
priority = 110
destination_port_range = "443"
description = "Allow HTTPS"
}
But:
- You did not write these blocks manually
- You only maintained the locals data
- Terraform handled all repetition
Why This Design Is Better Than Without Dynamic Blocks
With locals + dynamic blocks:
- Resource code stays constant
- Rules are data-driven
- Easy to extend and modify
- Ideal for modules and production use
- Clean separation of:
- Configuration data
- Resource logic
Without dynamic blocks:
- Code grows quickly
- Hard to maintain
- High chance of mistakes
- Poor scalability
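For reference, a complete azurerm_network_security_group rule needs several more required arguments than the demo shows; a hedged sketch with illustrative defaults for the omitted fields:

```hcl
resource "azurerm_network_security_group" "example" {
  name                = "nsg-dynamic-demo"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  dynamic "security_rule" {
    for_each = local.nsg_rules
    content {
      name                       = security_rule.key
      priority                   = security_rule.value.priority
      direction                  = "Inbound"   # illustrative default
      access                     = "Allow"     # illustrative default
      protocol                   = "Tcp"       # illustrative default
      source_port_range          = "*"
      source_address_prefix      = "*"
      destination_port_range     = security_rule.value.destination_port_range
      destination_address_prefix = "*"
      description                = security_rule.value.description
    }
  }
}
```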
Summary
In this section, you learned:
- How NSG rules look without dynamic blocks
- Why hardcoding repeated security_rule blocks does not scale
- Why dynamic blocks are needed for repeated nested blocks
- Why storing rules in locals as a map is better than:
  - Hardcoding
  - Using simple lists
- How security_rule.key and security_rule.value work
- How Terraform converts data into real configuration
This pattern — maps in locals + dynamic blocks in resources — is a key step from basic Terraform to clean, scalable, production-grade Infrastructure as Code.
Terraform Conditional Expressions: Dynamically Naming an NSG Based on Environment
In this section, we’ll learn how to use a Terraform conditional expression to dynamically set the name of an Azure Network Security Group (NSG) based on the value of an environment variable.
This is a practical beginner example that shows how:
- One Terraform codebase
- Can create different resource names
- For different environments like dev and staging
- Without changing the code itself
We’ll explain this using the exact code and CLI output from your demo.
The Problem We Are Solving
In real projects, you rarely deploy only one environment.
You usually have:
- Development (dev)
- Staging (staging)
- Testing (test)
- Production (prod)
Each environment must have:
- Different resource names
- To avoid conflicts
- To keep environments isolated
Without conditional logic, you would need:
- Separate Terraform files per environment, or
- Manual edits before every deployment
Terraform conditional expressions solve this cleanly.
The Conditional Expression in Your Code
From your NSG resource:
resource "azurerm_network_security_group" "example" {
name = var.environment == "dev" ? "mytestnsg10001dev" : "mytestnsg10001test"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}
This single line controls the NSG name:
name = var.environment == "dev" ? "mytestnsg10001dev" : "mytestnsg10001test"
Understanding the Syntax
Terraform conditional expressions follow this format:
condition ? value_if_true : value_if_false
In your case:
var.environment == "dev" ? "mytestnsg10001dev" : "mytestnsg10001test"
This reads as:
- If environment is "dev" → use the name mytestnsg10001dev
- Otherwise (for any other value) → use the name mytestnsg10001test
This decision is made during terraform plan, before any resource is created.
The Environment Variable That Drives the Logic
From your code:
variable "environment" {
type = string
default = "staging"
description = "Environment"
}
This means:
- If you do not pass -var, Terraform uses: environment = "staging"
- You can override it from the CLI: -var=environment=dev
This variable is the input that controls the conditional expression.
Case 1: Running Without Passing Any Variable
You ran:
terraform plan
Since no -var was provided, Terraform used the default:
environment = "staging"
Now evaluate the condition:
var.environment == "dev" ? "mytestnsg10001dev" : "mytestnsg10001test"
- Is "staging" == "dev"? → No
So Terraform selected the false branch:
mytestnsg10001test
This is exactly what your plan output showed:
+ name = "mytestnsg10001test"
This proves:
The default value "staging" caused Terraform to use
the test-style NSG name.
Case 2: Running with -var=environment=dev
Next, you ran:
terraform plan -var=environment=dev
Now Terraform used:
environment = "dev"
Evaluate the condition again:
var.environment == "dev" ? "mytestnsg10001dev" : "mytestnsg10001test"
- Is "dev" == "dev"? → Yes
So Terraform selected the true branch:
mytestnsg10001dev
And your plan output showed:
+ name = "mytestnsg10001dev"
This clearly demonstrates that:
Changing only the variable value
Changed only the resource name,
Without changing any Terraform code.
Why This Pattern Is Important
With this one conditional expression, you achieved:
- One Terraform configuration
- Multiple environment behaviors
- No duplicate files
- No manual renaming
- Fully automated naming
This pattern is widely used for:
- Environment-specific resource names
- Avoiding name collisions
- Managing dev/test/prod with one codebase
A More Scalable Naming Pattern
Your current logic handles two cases: dev and “not dev”.
In real projects, a more scalable pattern is:
name = "mytestnsg10001-${var.environment}"
This automatically produces:
- mytestnsg10001-dev
- mytestnsg10001-staging
- mytestnsg10001-test
- mytestnsg10001-prod
This avoids long conditional chains and scales naturally to many environments.
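Another flat alternative to chained conditionals is a locals map plus lookup with a default; the names below are illustrative:

```hcl
locals {
  nsg_name_by_env = {
    dev     = "mytestnsg10001dev"
    staging = "mytestnsg10001stg"
    prod    = "mytestnsg10001prd"
  }
}

resource "azurerm_network_security_group" "example" {
  # Fall back to a test-style name for any unlisted environment.
  name                = lookup(local.nsg_name_by_env, var.environment, "mytestnsg10001test")
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}
```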
Summary
In this section, you learned:
- What a Terraform conditional expression looks like
- The syntax: condition ? true_value : false_value
- How your exact expression works: var.environment == "dev" ? "mytestnsg10001dev" : "mytestnsg10001test"
- Why:
  - Default "staging" produced mytestnsg10001test
  - -var=environment=dev produced mytestnsg10001dev
- How conditional expressions let you:
  - Dynamically name resources
  - Use one codebase for many environments
  - Build environment-aware Terraform configurations
This is a simple but very powerful example of how Terraform conditional expressions make your infrastructure flexible, automated, and production-ready.
Terraform Splat Expression: Collecting Values from Multiple Resources
In this section, we’ll learn about the Terraform splat expression and how it is used to collect values from multiple instances of a resource into a single list.
We’ll cover:
- What a splat expression is
- Why splat expressions are needed
- When you typically use them
- The syntax of splat expressions
- A simple demo with multiple resources
- How this is commonly used with count and for_each
Splat expressions are a key concept when you start working with multiple resource instances in Terraform.
What Is a Splat Expression?
A splat expression is a shortcut syntax used to:
Extract the same attribute
From all instances of a resource
And return them as a list.
Basic syntax:
resource_type.resource_name[*].attribute
Example:
azurerm_storage_account.example[*].name
This means:
- Take all instances of azurerm_storage_account.example
- Get the name attribute from each one
- Return a list of names
Why We Need Splat Expressions
Splat expressions are useful when:
- You create multiple resources using:
  - count
  - for_each
- You want to:
- Output all names
- Pass all IDs to another resource
- Collect all IP addresses
- Build a list from many instances
Without splat:
- You would have to reference each instance manually:
  - example[0].name
  - example[1].name
  - example[2].name
With splat:
One expression
Collects everything automatically.
Splat Expression with count
Consider this resource created using count:
resource "azurerm_storage_account" "example" {
count = 2
name = "mystorage${count.index}"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
This creates:
- example[0]
- example[1]
Now, to collect all storage account names:
output "storage_account_names" {
value = azurerm_storage_account.example[*].name
}
Line-by-line Explanation
azurerm_storage_account.example[*].name
- azurerm_storage_account.example: refers to all instances of this resource
- [*]: means "for every instance"
- .name: extracts the name attribute from each instance
The result is a list like:
[
"mystorage0",
"mystorage1"
]
Splat Expression with for_each
Now consider a resource created using for_each:
variable "storage_names" {
type = set(string)
default = ["stor1", "stor2"]
}
resource "azurerm_storage_account" "example" {
for_each = var.storage_names
name = each.key
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
Here:
- Instances are created as a map:
  - example["stor1"]
  - example["stor2"]
Because for_each creates a map of instances rather than a list, a plain splat does not apply directly; to collect all names, use a for expression:
output "storage_account_names" {
value = [for sa in azurerm_storage_account.example : sa.name]
}
In this case, a for expression is the natural choice because:
- for_each creates a map, not a list
- Order is not guaranteed
- Explicit iteration is clearer
But conceptually, this is still the same idea as splat:
Collect one attribute from all instances.
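If you want to keep the splat syntax even with for_each, you can first convert the map of instances into a list with values():

```hcl
output "storage_account_names_splat" {
  # values() turns the map of instances into a list,
  # so the [*] splat applies again.
  value = values(azurerm_storage_account.example)[*].name
}
```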
When Splat Expressions Are Most Commonly Used
Splat expressions are frequently used for:
- Output variables
- Passing IDs to other resources
- Building lists for:
- Load balancers
- Security group associations
- Subnet attachments
- Backend pools
Example:
backend_address_pool_ids = azurerm_network_interface.example[*].id
This passes all NIC IDs into another resource.
Full vs Legacy Splat Syntax
Modern Terraform uses the full splat syntax:
resource[*].attribute
Older Terraform versions used:
resource.*.attribute
Example:
azurerm_storage_account.example.*.name # Legacy
azurerm_storage_account.example[*].name # Modern (recommended)
You should always use the modern [*] syntax.
Important Rules About Splat Expressions
- They work only when:
- The resource has multiple instances
- The result is always a list
- With count:
  - Order is predictable (by index)
- With for_each:
  - Order is not guaranteed
  - Often better to use a for expression
- You can only extract:
- One attribute at a time
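When you need more than one attribute per instance, switch from splat to a for expression that builds objects; a sketch:

```hcl
output "storage_account_summary" {
  value = [
    for sa in azurerm_storage_account.example : {
      name     = sa.name
      location = sa.location
    }
  ]
}
```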
A Simple Real-World Example
Create two NSGs:
resource "azurerm_network_security_group" "example" {
count = 2
name = "nsg-${count.index}"
...
}
Collect all NSG IDs:
output "nsg_ids" {
value = azurerm_network_security_group.example[*].id
}
Terraform returns:
[
"/subscriptions/.../nsg-0",
"/subscriptions/.../nsg-1"
]
This list can now be passed to another resource.
Summary
In this section, you learned:
- What a Terraform splat expression is
- The syntax: resource[*].attribute
- Why splat expressions are needed to collect values
- How splat works with:
  - count
  - for_each
- How to use splat in output variables
- The difference between:
  - Legacy .*. syntax
  - Modern [*] syntax
Splat expressions are one of the most important tools for working with multiple resource instances and building data flows between Terraform resources.
Terraform Built-in Functions: Useful String, List & Map Helpers
Terraform comes with a set of built-in functions you can use inside expressions to transform values, manipulate strings, work with lists or maps, and more. These functions are extremely helpful when you want to process values dynamically in a module, variable, local, or resource attribute.
Below are some commonly used functions with simple explanations and examples so you can start using them in your code confidently. For full reference, see the official docs: https://developer.hashicorp.com/terraform/language/functions
trim
What it does:
Removes the specified characters from the start and end of a string. Note that trim takes two arguments: the string and the set of characters to remove. (For plain whitespace, trimspace(string) is a convenient shortcut.)
Example:
locals {
messy = "  hello world  "
clean = trim(local.messy, " ")
}
Result:
"hello world"
Use this when your values might have extra spaces you don’t want.
chomp
What it does:
Removes a trailing newline (end-of-line) from a string.
Example:
locals {
text_with_newline = "hello\n"
fixed_text = chomp(local.text_with_newline)
}
Result:
"hello"
This is useful when reading output that may include newline characters.
max
What it does:
Returns the largest number from its arguments. (It works only on numbers, not strings.)
Example:
locals {
numbers = [10, 32, 5, 18]
largest = max(local.numbers...)
}
Result:
32
Note: You need ... to expand the list into separate arguments.
lower
What it does:
Converts a string to all lowercase.
Example:
locals {
mixed = "HELLoTerraform"
lowercased = lower(local.mixed)
}
Result:
"helloterraform"
Great for normalizing strings when case doesn’t matter.
reverse
What it does:
Reverses a list (flips order).
Example:
locals {
numbers = [1, 2, 3, 4]
backwards = reverse(local.numbers)
}
Result:
[4, 3, 2, 1]
Works only on lists, not on maps or strings.
merge
What it does:
Combines two or more maps into one.
Example:
locals {
tags1 = { env = "dev" }
tags2 = { project = "blog" }
merged_tags = merge(local.tags1, local.tags2)
}
Result:
{ env = "dev", project = "blog" }
If maps have the same key, the last one wins.
substr
What it does:
Returns a part of a string given a start index and length.
Syntax:
substr(string, start, length)
Example:
locals {
full = "terraform"
part = substr(local.full, 0, 4)
}
Result:
"terr"
Indices start at 0 (first character).
replace
What it does:
Replaces all occurrences of a substring with another string.
Example:
locals {
original = "prod-environment"
fixed = replace(local.original, "prod", "production")
}
Result:
"production-environment"
Useful for transforming naming conventions.
split
What it does:
Splits a single string into a list based on a separator.
Syntax:
split(separator, string)
Example:
locals {
raw = "80,443,22"
ports = split(",", local.raw)
}
Result:
["80", "443", "22"]
You can then loop over this list in a dynamic block or for expression.
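For example, the list from split can be converted to numbers with a for expression before further use; a sketch:

```hcl
locals {
  raw   = "80,443,22"
  # tonumber() converts each string element to a number.
  ports = [for p in split(",", local.raw) : tonumber(p)]
}
```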
When To Use These in Real Terraform
These functions are most commonly used in:
- locals (to preprocess values)
- variables (to validate/transform inputs)
- dynamic blocks (to generate nested blocks)
- outputs (to format output values)
- resource arguments (to build names, tags, policies)
By combining conditions and functions, you can make your Terraform configurations more flexible, less repetitive, and more maintainable.
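As a small combined sketch, several of these helpers can be chained when normalizing tag values; the tag names and values are illustrative:

```hcl
locals {
  base_tags = { env = "Dev", owner = " Platform Team " }

  normalized_tags = merge(
    local.base_tags,
    {
      env   = lower(local.base_tags.env)       # normalize case
      owner = trimspace(local.base_tags.owner) # strip stray spaces
    }
  )
}
```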
Summary
| Function | What It Does |
|---|---|
| trim | Removes specified leading/trailing characters |
| chomp | Removes trailing newline |
| max | Returns the largest numeric value |
| lower | Converts string to lowercase |
| reverse | Reverses a list |
| merge | Combines maps |
| substr | Extracts part of a string |
| replace | Replaces substrings |
| split | Splits a string into a list |

Leave a Reply