What You’re Actually Seeing When the Backend Won’t Initialize
Terraform remote backend initialization fails with vague, look-alike error output, and several distinct problems can hide behind the same failing command. I spent three hours last Tuesday staring at the same failing terraform init before I realized I was dealing with five separate root causes simultaneously. Three hours. On a Tuesday. Don’t make my mistake.
The error messages Terraform throws are real clues — you just need to know which failure you’re actually looking at.
Here’s what you’ll likely see:
Error: error reading S3 Bucket in us-west-2: NotFound:
status code: 404, request id: ABC123XYZ
Error: error acquiring the state lock: AccessDenied:
User: arn:aws:iam::123456789:user/terraform is not authorized to perform:
dynamodb:GetItem on resource: arn:aws:dynamodb:us-west-2:123456789:table/terraform-lock
Error: Unsupported block type
on module.tf line 5, in terraform:
5: backend "s3" {
The "backend" block type is not expected here.
Each one points somewhere different. So, without further ado, let’s fix them in order of how often they actually show up.
S3 Bucket Doesn’t Exist — or It’s in the Wrong Region
This one wins the frequency contest by a mile. Terraform will not create the S3 bucket during init. The bucket has to already exist. Full stop.
Your backend block probably looks something like this:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "us-west-2"
    encrypt        = true
    dynamodb_table = "terraform-lock"
  }
}
No bucket? You get the 404. Bucket exists but lives in a different region? You typically get a redirect error (a 301 PermanentRedirect) that never mentions regions, which is arguably worse. The fix is one AWS CLI command:
aws s3api create-bucket \
  --bucket my-terraform-state \
  --region us-west-2 \
  --create-bucket-configuration LocationConstraint=us-west-2
Quick note: if your region is us-east-1, drop the --create-bucket-configuration flag entirely. AWS handles the default region differently — learned that one the hard way at around 11pm on a deployment night.
Verify the bucket is where you think it is:
aws s3api get-bucket-location --bucket my-terraform-state
Your region should appear in the output. Make sure it matches the region in your backend block exactly.
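One wrinkle worth scripting around: get-bucket-location reports us-east-1 as None, because the API returns a null LocationConstraint for the default region. Here's a small helper to normalize that before you compare; this is a sketch, and the function name is my own invention:

```shell
# Hypothetical helper: normalize what `aws s3api get-bucket-location`
# prints, since us-east-1 comes back as "None" (null LocationConstraint).
normalize_bucket_region() {
  case "$1" in
    None|null|"") echo "us-east-1" ;;
    *) echo "$1" ;;
  esac
}
```

Feed it the raw command output, then compare the result against the region in your backend block.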
DynamoDB Table Is Missing or Named Wrong
Probably should have opened with this section, honestly — it burned me harder than the S3 issue ever did.
State locking exists to stop concurrent applies from shredding your infrastructure. Terraform needs a DynamoDB table for this. It won’t auto-create the table, and the error hides easily inside a wall of init output.
When the table is missing, you see:
Error: error acquiring the state lock: ResourceNotFoundException:
Requested resource not found
The table name in your backend block has to match exactly what exists in DynamoDB. Create it with this:
aws dynamodb create-table \
  --table-name terraform-lock \
  --attribute-definitions AttributeName=LockID,AttributeType=S \
  --key-schema AttributeName=LockID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST \
  --region us-west-2
The partition key must be named LockID. Not lockid, not lock_id. Terraform looks for that field by name and won’t tell you nicely if it’s wrong. I named mine lock_id once and bought myself 20 minutes of confused debugging, because the lock error never mentions the key name.
Check that the table is ready:
aws dynamodb describe-table --table-name terraform-lock --query 'Table.TableStatus'
You want ACTIVE. If it says CREATING, wait 10 seconds and run it again.
IAM Permissions Blocking S3 or DynamoDB Access
You can have the bucket and the table set up perfectly — and init still fails. That’s the IAM problem. It looks like this:
Error: error reading S3 Bucket in us-west-2: AccessDenied:
status code: 403, request id: XYZ789
Terraform needs a minimum set of IAM actions to function at all:
- s3:GetObject
- s3:PutObject
- s3:DeleteObject
- s3:ListBucket
- dynamodb:GetItem
- dynamodb:PutItem
- dynamodb:DeleteItem
Here’s a minimal IAM policy — replace the account ID and bucket name with your own values:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::my-terraform-state/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-terraform-state"
    },
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:DeleteItem"
      ],
      "Resource": "arn:aws:dynamodb:us-west-2:123456789012:table/terraform-lock"
    }
  ]
}
Attach this to your user, then run terraform init again. Using an assumed role in CI/CD? The role itself needs these permissions attached; a policy on your personal user doesn't carry over. Easy to miss.
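Attaching it from the CLI is one command. A sketch, assuming you saved the JSON above as terraform-backend-policy.json and your IAM user is named terraform (both placeholders):

```shell
# Attach the policy above inline to the IAM user (sketch; user name,
# policy name, and file path are placeholders for your own values).
attach_state_policy() {
  aws iam put-user-policy \
    --user-name terraform \
    --policy-name terraform-backend-access \
    --policy-document file://terraform-backend-policy.json
}
# Usage: attach_state_policy && terraform init
```

For the CI/CD role case, the equivalent is aws iam put-role-policy with --role-name instead of --user-name.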
Backend Block Typo or Version Constraint Fighting You
I once spent 45 minutes debugging a backend block, convinced I was still fighting a 403 I'd already fixed, before noticing I'd written bucket_name instead of bucket. Terraform won't suggest that bucket_name is a misspelling of bucket: at best you get a generic configuration error, at worst the attribute is ignored and your state configuration quietly does nothing.
Wrong:
terraform {
  backend "s3" {
    bucket_name = "my-terraform-state"
    region      = "us-west-2"
  }
}
Right:
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    region = "us-west-2"
  }
}
Valid S3 backend attributes include: bucket, key, region, dynamodb_table, encrypt, acl, skip_credentials_validation, skip_region_validation, and a handful of others. Check your spelling against the official Terraform docs — takes 30 seconds and saves 45 minutes.
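You can script that 30-second check, too: grep your .tf files for attribute names that look right but aren't. A sketch; the misspelling list is my own guess and deliberately short, so extend it with whatever your fingers tend to type:

```shell
# Flag plausible-but-invalid S3 backend attribute names in *.tf files
# (sketch; the misspelling list is a guess, not exhaustive).
find_bad_backend_attrs() {
  grep -rnE 'bucket_name|table_name|dynamo_table' \
    --include='*.tf' "${1:-.}"
}
# Usage: find_bad_backend_attrs path/to/repo
```

No output means no known misspellings; any hit prints the file and line number.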
Separate issue entirely: version constraint conflicts. If your required_version or required_providers block is too tight, init fails before it even attempts to reach the backend. The error is at least readable:
Error: Incompatible Terraform version
on versions.tf line 2, in terraform:
2: required_version = "~> 1.4.0"
Your Terraform version (1.5.0) is incompatible.
Here ~> 1.4.0 pins you to the 1.4.x series, which is exactly what rejects 1.5.0. Either downgrade Terraform to match, or loosen the constraint: ~> 1.4 allows any 1.x release from 1.4 onward. You can also remove it completely if you're in a pinch and trust your team to not go rogue with version selection.
Quick Checklist — Run Through These Before You Post to Slack
- S3 bucket exists in the same region your backend block specifies
- DynamoDB table exists with a partition key named exactly LockID
- Your AWS user or role has S3 and DynamoDB permissions scoped to those specific resources
- Backend block attributes match Terraform's expected names (bucket, not bucket_name)
- Your required_version constraint isn't rejecting the Terraform version you actually have installed
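The AWS-side items on that list fit in one script. This is a sketch under obvious assumptions: the names are placeholders, it needs working AWS credentials, and it can't catch HCL typos or version constraints:

```shell
# Preflight for the AWS-side checklist items (sketch; bucket, table,
# and region arguments are placeholders for your own values).
preflight() {
  bucket="$1"; table="$2"; region="$3"
  aws s3api head-bucket --bucket "$bucket" 2>/dev/null \
    || { echo "FAIL: bucket $bucket missing or inaccessible"; return 1; }
  status=$(aws dynamodb describe-table --table-name "$table" \
    --region "$region" --query 'Table.TableStatus' --output text 2>/dev/null)
  [ "$status" = "ACTIVE" ] \
    || { echo "FAIL: table $table not ACTIVE (got: ${status:-missing})"; return 1; }
  echo "preflight ok"
}
# Usage: preflight my-terraform-state terraform-lock us-west-2 && terraform init
```

A non-zero exit also tells you which of the two resources to go fix first, which is most of the battle with these errors.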