
How to Secure Terraform Remote State in Azure Storage Account

Terraform state files contain plaintext secrets, resource IDs, and access keys. Learn how to lock down your Azure Storage backend with Managed Identity, private endpoints, RBAC least privilege, and blob versioning — with full Terraform code examples.

Microsoft Cloud Solution Architect
Terraform · Azure · IaC · Security · Cloud Security · DevSecOps

The State File Incident I Still Think About

A few years back, a team I was reviewing had a beautiful Terraform setup — modular, well-structured, proper naming conventions. But their remote state was sitting in an Azure Storage Account with public blob access enabled, authenticated via a storage account key that was hardcoded in their CI/CD pipeline as a plain environment variable. The key had never been rotated.

The state file itself contained the connection string for their production SQL database, the primary key for their Event Hub namespace, and the client secret for a service principal with Contributor access to the entire subscription. Every single secret Terraform had ever touched was sitting there in JSON, readable by anyone with network access to the storage endpoint.

Nobody had done this maliciously. The default Terraform Azure backend documentation gets you to a working state fast, and most engineers stop there. Getting it *working* and getting it *secure* are two very different things.

This article walks through what a properly hardened Azure Storage backend looks like, with full Terraform code for every step.

Why the State File Is a High-Value Target

Before talking about hardening, it helps to understand exactly why state files are dangerous. The terraform.tfstate file is a JSON document that records the exact current state of every resource Terraform manages. That means it contains:

  • Plaintext secrets — any sensitive = true value in your config still appears in plaintext in the state file. Connection strings, passwords, API keys, client secrets — all of it.
  • Resource IDs — full ARM resource IDs for every managed resource, which tells an attacker exactly what exists and where.
  • Access keys — if you're managing Storage Accounts, Event Hubs, Service Bus, or Cosmos DB with Terraform, their access keys land in state.
  • Service principal credentials — if you create app registrations or managed identity federated credentials via Terraform, the details end up in state.

The Terraform documentation has a warning about this, but it's easy to miss when you're focused on getting infrastructure deployed. Treat your state file like a secrets vault, because that's effectively what it is.
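To make the first bullet concrete, here's a minimal sketch (the resource and variable names are illustrative, not from any real config): marking a value `sensitive = true` only suppresses it in plan and apply output — the applied value still lands in state verbatim.

```hcl
variable "sql_admin_password" {
  type      = string
  sensitive = true # hides the value in plan/apply output only
}

# Hypothetical example resource using the sensitive value
resource "azurerm_mssql_server" "example" {
  name                         = "sql-example"
  resource_group_name          = "rg-example"
  location                     = "westeurope"
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = var.sql_admin_password
}

# After apply, `terraform state pull` shows the password in plaintext:
#   "administrator_login_password": "<the real password>"
```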

What Most Teams Get Wrong With the Default Setup

The quickest path to an Azure backend looks something like this in the Terraform docs:

terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstateaccount"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}

And the authentication method used in most quickstarts is the storage account key, passed via the ARM_ACCESS_KEY environment variable or the access_key backend parameter. This is the path of least resistance, and it creates several problems:

  1. The storage account key gives full read/write access to everything in the account — not just the Terraform state container.
  2. Keys don't expire. If one leaks, it's valid forever until manually rotated.
  3. Keys are often stored as CI/CD secrets that get copy-pasted across pipelines and rarely audited.
  4. There's no audit trail of *who* accessed the state — just that the key was used.
  5. Public blob access is often left enabled, meaning the state blob can potentially be accessed without authentication if someone knows the URL.

Let's fix all of this.

Step 1: Create the Storage Account With Proper Security Settings

Start by creating the infrastructure that will host your state. We'll define this in a separate "bootstrap" Terraform config, which resolves the chicken-and-egg problem: you need state storage before you can use state storage. The bootstrap is typically applied manually once, with local state, and rarely touched again.

# bootstrap/main.tf

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"
    }
  }
}

provider "azurerm" {
  features {}
  # azurerm 4.x requires a subscription: set subscription_id here,
  # or export ARM_SUBSCRIPTION_ID before applying the bootstrap
}

resource "azurerm_resource_group" "tfstate" {
  name     = "rg-tfstate-prod"
  location = "westeurope"

  tags = {
    purpose     = "terraform-state"
    environment = "production"
    managed-by  = "bootstrap"
  }
}

resource "azurerm_storage_account" "tfstate" {
  name                     = "stprotegotfstateprod"
  resource_group_name      = azurerm_resource_group.tfstate.name
  location                 = azurerm_resource_group.tfstate.location
  account_tier             = "Standard"
  account_replication_type = "GRS"

  # Security hardening
  allow_nested_items_to_be_public  = false   # Disable public blob access
  https_traffic_only_enabled       = true    # Enforce HTTPS
  min_tls_version                  = "TLS1_2"
  shared_access_key_enabled        = false   # Disable storage account keys entirely

  # Versioning for state recovery
  blob_properties {
    versioning_enabled = true

    delete_retention_policy {
      days = 30
    }

    container_delete_retention_policy {
      days = 7
    }
  }

  # Deny all public network access by default
  public_network_access_enabled = false

  network_rules {
    default_action = "Deny"
    bypass         = ["AzureServices"]
  }

  tags = {
    purpose     = "terraform-state"
    environment = "production"
  }
}

resource "azurerm_storage_container" "tfstate" {
  name                  = "tfstate"
  storage_account_name  = azurerm_storage_account.tfstate.name
  container_access_type = "private"
}

A few things worth calling out here:

shared_access_key_enabled = false — This is the most impactful single setting. When set to false, storage account key authentication is rejected by the service itself, so even a leaked key is useless. All access must go through Azure AD, which means every client of this account — including the Terraform backend, via use_azuread_auth — has to authenticate with an Azure AD identity.

allow_nested_items_to_be_public = false — This prevents anyone from accidentally setting a container to anonymous/public access. Belt-and-suspenders, but important.

public_network_access_enabled = false — Blocks all inbound traffic from the public internet. Access is only possible through private endpoints (which we'll add shortly) or Azure services bypass for diagnostics.

Versioning — With versioning_enabled = true, every write to the state blob creates a new version. If a Terraform run corrupts state, you can restore a previous version from the Azure portal or via CLI. This has saved me from catastrophic accidents more than once.

Step 2: Authentication — Why Managed Identity Wins

There are three ways to authenticate the Terraform backend to Azure Storage:

Storage Account Key — As discussed, this is the default and the worst option. Full access, no expiry, no audit identity.

SAS Token — Better than the account key because tokens can be scoped and time-limited. But you still have to manage token generation and distribution, and the tokens need to be stored somewhere (likely your CI/CD secret store). Account SAS tokens also can't be revoked individually — revoking one means rotating the key that signed it, which invalidates every token issued from that key.

Managed Identity — This is the right answer for any automated workload running in Azure. A Managed Identity is an Azure AD identity automatically managed by the platform. There's no credential to store, no secret to rotate, and every access is tied to a real identity that shows up in audit logs.

For Azure DevOps or GitHub Actions running in Azure-hosted runners, you configure the pipeline to use a User-Assigned Managed Identity with the appropriate role on the state container. For GitHub Actions with OIDC federation, no secret is required at all — GitHub proves its identity to Azure AD using a JWT.
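As a sketch of that OIDC setup — the repository, identity, and resource group names here are assumptions, not from any real pipeline — the federated credential on a user-assigned identity can itself be declared in Terraform:

```hcl
resource "azurerm_user_assigned_identity" "cicd" {
  name                = "id-cicd-prod"
  resource_group_name = "rg-tfstate-prod"
  location            = "westeurope"
}

# Lets GitHub Actions runs on the main branch of my-org/my-repo exchange
# their OIDC token for this identity -- no stored secret anywhere.
resource "azurerm_federated_identity_credential" "github" {
  name                = "github-main"
  resource_group_name = "rg-tfstate-prod"
  parent_id           = azurerm_user_assigned_identity.cicd.id
  audience            = ["api://AzureADTokenExchange"]
  issuer              = "https://token.actions.githubusercontent.com"
  subject             = "repo:my-org/my-repo:ref:refs/heads/main"
}
```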

Here's how to configure the Terraform backend to use Azure AD auth (which is what Managed Identity uses):

terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate-prod"
    storage_account_name = "stprotegotfstateprod"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
    use_azuread_auth     = true   # Use Azure AD instead of storage key
  }
}

When use_azuread_auth = true, Terraform uses the Azure CLI credentials, Managed Identity, or environment-based service principal credentials (via ARM_CLIENT_ID, ARM_CLIENT_SECRET, ARM_TENANT_ID). In a CI/CD pipeline with a Managed Identity, this works with zero credential configuration.

Step 3: RBAC — Least Privilege Role Assignment

The minimum permissions a Terraform backend needs on the state container are:

  • Read and write blobs (for state read/write)
  • Manage blob leases (for state locking)

The built-in role Storage Blob Data Contributor covers all of this. Do not assign Contributor or Owner at the storage account or resource group level — that's far more access than needed.

Assign the role at the container level, not the storage account level, to further limit blast radius:

# The principal ID of your CI/CD Managed Identity or service principal
variable "cicd_principal_id" {
  description = "Object ID of the CI/CD managed identity or service principal"
  type        = string
}

# Role assignment scoped to the container, not the storage account
resource "azurerm_role_assignment" "tfstate_contributor" {
  scope                = "${azurerm_storage_account.tfstate.id}/blobServices/default/containers/${azurerm_storage_container.tfstate.name}"
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = var.cicd_principal_id
}

# Read-only access for developers who need to inspect state
variable "dev_team_group_id" {
  description = "Object ID of the developer Azure AD group"
  type        = string
}

resource "azurerm_role_assignment" "tfstate_reader" {
  scope                = "${azurerm_storage_account.tfstate.id}/blobServices/default/containers/${azurerm_storage_container.tfstate.name}"
  role_definition_name = "Storage Blob Data Reader"
  principal_id         = var.dev_team_group_id
}

For teams managing multiple environments (dev/staging/prod), create separate containers per environment and assign the CI/CD identity for each environment only to its own container. Your dev pipeline identity should never have access to the production state.
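One way to sketch that per-environment split with for_each — the variable and environment names are illustrative, and the identity object IDs are assumed to exist:

```hcl
variable "env_principal_ids" {
  description = "Map of environment name to its CI/CD identity object ID"
  type        = map(string)
  # e.g. { dev = "<object-id>", staging = "<object-id>", prod = "<object-id>" }
}

# One state container per environment
resource "azurerm_storage_container" "env" {
  for_each              = var.env_principal_ids
  name                  = "tfstate-${each.key}"
  storage_account_name  = azurerm_storage_account.tfstate.name
  container_access_type = "private"
}

# Each pipeline identity can touch only its own environment's container
resource "azurerm_role_assignment" "env" {
  for_each             = var.env_principal_ids
  scope                = "${azurerm_storage_account.tfstate.id}/blobServices/default/containers/${azurerm_storage_container.env[each.key].name}"
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = each.value
}
```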

Step 4: Private Endpoint — Block Public Internet Access

Even with public_network_access_enabled = false, the storage account is still technically reachable over the public internet via its public DNS name — Azure just blocks the traffic at the network layer. A private endpoint goes further: it places the storage account directly on your VNet with a private IP address, and you resolve the storage FQDN to that private IP via Private DNS.

resource "azurerm_private_endpoint" "tfstate_blob" {
  name                = "pe-tfstate-blob"
  location            = azurerm_resource_group.tfstate.location
  resource_group_name = azurerm_resource_group.tfstate.name
  subnet_id           = var.private_endpoint_subnet_id

  private_service_connection {
    name                           = "psc-tfstate-blob"
    private_connection_resource_id = azurerm_storage_account.tfstate.id
    subresource_names              = ["blob"]
    is_manual_connection           = false
  }

  private_dns_zone_group {
    name                 = "tfstate-dns-group"
    private_dns_zone_ids = [azurerm_private_dns_zone.blob.id]
  }
}

resource "azurerm_private_dns_zone" "blob" {
  name                = "privatelink.blob.core.windows.net"
  resource_group_name = azurerm_resource_group.tfstate.name
}

resource "azurerm_private_dns_zone_virtual_network_link" "blob" {
  name                  = "vnet-link-blob"
  resource_group_name   = azurerm_resource_group.tfstate.name
  private_dns_zone_name = azurerm_private_dns_zone.blob.name
  virtual_network_id    = var.vnet_id
  registration_enabled  = false
}

With this in place, DNS for stprotegotfstateprod.blob.core.windows.net resolves to the private IP inside your VNet. Traffic never leaves the Azure backbone, and requests from the public internet are rejected at the network layer.

Your CI/CD agents need to be running inside the VNet (or a peered network) for this to work. Self-hosted GitHub Actions runners or Azure DevOps agents on a VM or container inside the VNet satisfy this requirement.

Step 5: How State Locking Works With Azure Blob Leases

State locking prevents two Terraform processes from modifying state simultaneously — a critical safety mechanism for team environments. The Azure backend implements locking using blob leases.

When Terraform starts a plan or apply, it acquires an exclusive lease on the state blob. The lease is held for the duration of the operation and released on completion. If another process tries to acquire the lease while it's held, the storage service returns 409 Conflict and Terraform reports that the state is locked.

Leases are released when Terraform finishes, but they can get stuck if a process dies unexpectedly mid-run. Because the backend holds the lease for the whole operation rather than a short fixed window, a stuck lease won't clear on its own. You can release it with terraform force-unlock <LOCK_ID>, or break the lease directly with the Azure CLI:

az storage blob lease break \
  --account-name stprotegotfstateprod \
  --container-name tfstate \
  --blob-name prod.terraform.tfstate \
  --auth-mode login

You should not need to do this often. If you're regularly breaking leases manually, it's a sign that your pipelines are being killed ungracefully and you should look at adding proper signal handling.

Step 6: Soft Delete and Versioning for State Recovery

We already enabled versioning in the storage account config above. Here's what happens in practice when state corruption occurs:

Terraform writes the new state blob, and because blob versioning is enabled, Azure Storage automatically preserves the previous version before the overwrite. You can list all versions of a specific blob:

az storage blob list \
  --account-name stprotegotfstateprod \
  --container-name tfstate \
  --include v \
  --auth-mode login \
  --query "[?name=='prod.terraform.tfstate'].[name, versionId, isCurrentVersion, lastModified]" \
  -o table

To restore a previous version, copy it back as the current version:

az storage blob copy start \
  --account-name stprotegotfstateprod \
  --destination-container tfstate \
  --destination-blob prod.terraform.tfstate \
  --source-account-name stprotegotfstateprod \
  --source-container tfstate \
  --source-blob prod.terraform.tfstate \
  --source-version-id <VERSION_ID_HERE> \
  --auth-mode login

With delete_retention_policy set to 30 days, even if a blob is deleted (accidentally or maliciously), you can recover it within that window using az storage blob undelete.

Step 7: Diagnostic Settings — Audit Who Accessed State

Every storage operation should be logged to Log Analytics. This gives you a full audit trail of who read or wrote the state file, from which IP, and at what time.

resource "azurerm_monitor_diagnostic_setting" "tfstate_storage" {
  name                       = "diag-tfstate-storage"
  target_resource_id         = "${azurerm_storage_account.tfstate.id}/blobServices/default"
  log_analytics_workspace_id = var.log_analytics_workspace_id

  enabled_log {
    category = "StorageRead"
  }

  enabled_log {
    category = "StorageWrite"
  }

  enabled_log {
    category = "StorageDelete"
  }

  metric {
    category = "Transaction"
    enabled  = true
  }
}

Once this is in place, you can query for all accesses to your state container in Log Analytics:

StorageBlobLogs
| where AccountName == "stprotegotfstateprod"
| where ObjectKey startswith "/stprotegotfstateprod/tfstate/"
| project TimeGenerated, OperationName, AuthenticationType, CallerIpAddress, RequesterObjectId, StatusCode, Uri
| order by TimeGenerated desc

Set up an alert on this query for any access where AuthenticationType == "AccountKey" — if someone is accessing state with a storage key when keys should be disabled, that's an incident.
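A hedged sketch of that alert as a scheduled query rule — the action group and workspace variables are assumed to exist elsewhere in your config, and thresholds are illustrative:

```hcl
resource "azurerm_monitor_scheduled_query_rules_alert_v2" "state_key_access" {
  name                 = "alert-tfstate-accountkey-access"
  resource_group_name  = "rg-tfstate-prod"
  location             = "westeurope"
  scopes               = [var.log_analytics_workspace_id] # assumed variable
  severity             = 1
  evaluation_frequency = "PT5M"
  window_duration      = "PT5M"

  criteria {
    # Fires on any storage-key authentication against the state account
    query = <<-KQL
      StorageBlobLogs
      | where AccountName == "stprotegotfstateprod"
      | where AuthenticationType == "AccountKey"
    KQL

    time_aggregation_method = "Count"
    operator                = "GreaterThan"
    threshold               = 0
  }

  action {
    action_groups = [var.security_action_group_id] # assumed variable
  }
}
```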

Hardening Checklist

Use this to audit your current Terraform state backend:

  • [ ] Storage account has shared_access_key_enabled = false
  • [ ] Storage account has allow_nested_items_to_be_public = false
  • [ ] Storage account has https_traffic_only_enabled = true
  • [ ] Storage account has min_tls_version = "TLS1_2"
  • [ ] Storage account has public_network_access_enabled = false
  • [ ] Private endpoint deployed for blob service
  • [ ] Private DNS zone linked to VNet
  • [ ] Blob versioning enabled (versioning_enabled = true)
  • [ ] Soft delete configured (30-day retention)
  • [ ] Container-level RBAC: CI/CD identity has only Storage Blob Data Contributor
  • [ ] Terraform backend config uses use_azuread_auth = true
  • [ ] No storage account keys stored in CI/CD secrets
  • [ ] Diagnostic settings sending StorageRead/Write/Delete logs to Log Analytics
  • [ ] Alert configured for any storage key authentication attempts
  • [ ] State file containers are separate per environment (dev/staging/prod)
  • [ ] Bootstrap storage account is not managed by itself (separate state or local)

If you're working with Terraform at team scale, the [Terraform best practices for teams](/blog/terraform-best-practices-for-teams) article covers module versioning, workspace strategy, and CI/CD pipeline patterns that complement everything in this guide.

The Bigger Picture

State file security is one of those areas where the gap between "works" and "secure" is surprisingly wide. The default setup gets you operational in five minutes, but it takes actual infrastructure work to get it right. The good news is that everything described here is expressible as Terraform code itself — once you've built this bootstrap module, you can reuse it across every project and every team, and the security properties are guaranteed by the infrastructure rather than by hoping someone remembers to check the right boxes.

Cloud Solution Architect with deep expertise in Microsoft Azure and a strong background in systems and IT infrastructure. Passionate about cloud technologies, security best practices, and helping organizations modernize their infrastructure.
