What a Terraform 403 Error Actually Means
There is a lot of misleading debugging advice flying around Terraform 403 errors, and most of it skips the one distinction that saves you thirty minutes of banging your head against IAM consoles.
A 403 is not an authentication failure. Let me say that again. You authenticated fine. Your credentials worked. What failed was authorization — the cloud platform read your request, recognized exactly who you were, and then said: no. Those are two completely different problems with two completely different fixes.
Here’s what it looks like across the three major clouds:
AWS:
Error: error putting S3 Bucket ACL: AccessDenied: User is not authorized to perform: s3:PutBucketAcl on resource
Azure:
Error: checking for presence of existing role assignment: Unauthorized to perform action 'Microsoft.Authorization/roleAssignments/read'
GCP:
Error 403: The caller does not have permission, forbidden
All three tell the same story. You’re logged in. You’re just not allowed. The resource type always appears somewhere in that error message — write it down on something, a sticky note, a text file, whatever. It’s your first real clue and you’ll need it in about sixty seconds.
Find the Exact Resource Terraform Tried to Touch
The error message gives you a starting point, but it rarely gives you the full picture. Terraform fires off dozens of API calls during a single apply run. One 403 on any of them stops everything cold. You need to know which specific call failed — not approximately, exactly.
Enable debug logging before you do anything else:
export TF_LOG=DEBUG
terraform apply 2>&1 | tee terraform-debug.log
Let it fail again. Then grep the log for what actually broke:
grep -i "403\|denied\|unauthorized" terraform-debug.log | tail -20
The output will surface the exact API method that tripped. Look for things like iam:GetRole, ec2:RunInstances, or compute.instances.create. That’s the specific action your identity can’t perform. Write down the exact string. Don’t paraphrase it. Don’t skip this step.
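If you hit this often enough, the grep step above is worth wrapping in a small helper. A rough sketch — extract_denied_actions is a made-up name, and the regexes only cover the two common action formats (AWS-style service:Action and GCP-style service.resource.verb), so expect some noise in the output:

```shell
# extract_denied_actions: pull candidate denied API actions out of a
# Terraform debug log. Hypothetical helper -- crude pattern matching,
# so domain names and similar strings will occasionally slip through.
extract_denied_actions() {
  grep -iE "403|denied|unauthorized" "$1" \
    | grep -oE '[a-z0-9]+:[A-Za-z]+|[a-z]+\.[a-z]+\.[a-z]+' \
    | sort -u
}

# Usage:
#   extract_denied_actions terraform-debug.log
```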
I learned this the hard way — spent roughly two hours adjusting random IAM policies on a Friday afternoon because I never actually looked at which API call failed. When I finally checked the debug logs, the real fix took about ninety seconds. Don’t make my mistake.
Fix It on AWS — IAM Policy and SCP Checks
AWS hides authorization failures behind two completely separate systems, and only one of them shows up obviously in Terraform output. That’s what makes AWS debugging so frustrating for everyone who touches it.
IAM Policy Check
Your IAM role is probably missing an inline or attached policy. Test this directly with simulate-principal-policy — it’s underused and genuinely useful:
aws iam simulate-principal-policy \
--policy-source-arn arn:aws:iam::123456789012:role/terraform-role \
--action-names s3:PutBucketAcl \
--resource-arns arn:aws:s3:::my-bucket
If the response comes back with "EvalDecision": "implicitDeny", the policy is simply missing. Add the required action to an inline or attached permissions policy on the role (not the trust policy — that only controls who can assume the role, not what it can do). It’ll look roughly like this — swap in whatever action you actually identified in the logs:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutBucketAcl",
        "s3:GetBucketAcl"
      ],
      "Resource": "arn:aws:s3:::my-bucket"
    }
  ]
}
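The simulate-and-grep loop is also easy to script. A sketch under the assumption that you would rather not depend on jq — eval_decision is a hypothetical helper that scrapes the first EvalDecision out of the CLI’s JSON output:

```shell
# eval_decision: read simulate-principal-policy JSON on stdin and print the
# first EvalDecision value ("allowed", "implicitDeny", or "explicitDeny").
# Hypothetical helper -- crude string scraping, not a real JSON parser.
eval_decision() {
  grep -o '"EvalDecision": *"[a-zA-Z]*"' \
    | head -n 1 \
    | sed 's/.*"\([a-zA-Z]*\)"$/\1/'
}

# Usage:
#   aws iam simulate-principal-policy ... | eval_decision
```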
Service Control Policy Check
Here’s the part most articles skip entirely. An SCP — Service Control Policy — sitting at the AWS Organizations level can block an action even when your IAM policy explicitly allows it. The Terraform error message looks completely identical in both cases. I’ve watched experienced engineers spend hours on IAM policies when the real blocker was an SCP they couldn’t even see.
Check whether an SCP is involved (the target ID below is an org root; you can also pass an OU ID or the twelve-digit account ID):
aws organizations list-policies-for-target \
--target-id r-abcd \
--filter SERVICE_CONTROL_POLICY
Then pull the actual content of each policy:
aws organizations describe-policy \
--policy-id p-xxxxxxx \
--query 'Policy.Content'
Look for an explicit "Deny" statement matching your action. Find one? You need an org admin. Full stop. You cannot override an SCP with IAM permissions — that’s the whole design point of SCPs.
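If you want to pre-screen a pile of SCP dumps in a script, a very crude check is possible. A sketch — scp_mentions_action is a made-up helper, and because real SCPs use wildcards like s3:* and NotAction blocks, a clean miss here proves nothing; treat a hit as a lead, not a verdict:

```shell
# scp_mentions_action: succeed when the policy document contains a Deny
# effect AND literally names the action. Hypothetical helper -- wildcard
# and NotAction denies will slip past this simple grep.
scp_mentions_action() {
  policy_json=$1
  action=$2
  printf '%s' "$policy_json" | grep -q '"Deny"' &&
    printf '%s' "$policy_json" | grep -qF "$action"
}
```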
Fix It on Azure and GCP
Azure — Role Assignment Scope
Azure’s permission model ties every role to a scope — and scope mismatches are probably the most common source of 403s I’ve seen on that platform. A Contributor role assigned at the Resource Group level does not automatically cover operations Terraform tries to run at the Subscription level. The role name looks right. The scope is wrong. That’s all it takes.
List your actual role assignments first:
az role assignment list \
--assignee your-service-principal-id \
--output table
Look hard at the “Scope” column. If it shows something like /subscriptions/xxx/resourceGroups/yyy, any Terraform operations touching resources outside that group will fail. Expand scope to the subscription level if your deployment actually requires it:
az role assignment create \
--role Contributor \
--assignee your-service-principal-id \
--scope /subscriptions/your-subscription-id
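Scope inheritance in Azure is effectively string-prefix containment on the resource ID, which makes the mismatch easy to check mechanically. A sketch — scope_covers is a hypothetical helper:

```shell
# scope_covers: succeed when a role assignment's scope covers a resource.
# Azure role assignments inherit downward, so an assignment applies to a
# resource exactly when its scope is a path prefix of the resource ID.
scope_covers() {
  assignment_scope=$1
  resource_id=$2
  case "$resource_id" in
    "$assignment_scope"|"$assignment_scope"/*) return 0 ;;
    *) return 1 ;;
  esac
}

# Example: a scope on resourceGroups/yyy does not cover a sibling group,
# which is exactly the mismatch described above.
```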
GCP — Distinguish IAM from Org Policy
GCP throws a 403 for two very different reasons — and they need completely different fixes. That’s what makes GCP debugging its own special headache for the engineers who work with it regularly.
Missing IAM binding: Your service account lacks a required role. Fix it directly:
gcloud projects add-iam-policy-binding your-project \
--member=serviceAccount:terraform@project.iam.gserviceaccount.com \
--role=roles/compute.admin
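Before granting, it is worth confirming the binding really is missing. A tiny sketch — has_role is a made-up helper that expects the account’s current roles on stdin, one per line (e.g. from gcloud projects get-iam-policy with a --format that prints bindings.role):

```shell
# has_role: succeed when the needed role appears verbatim in the role
# list piped in on stdin. Hypothetical helper for scripting the triage.
has_role() {
  grep -qxF "$1"
}

# Usage (member string is whichever service account failed):
#   gcloud projects get-iam-policy your-project \
#     --flatten="bindings[].members" \
#     --filter="bindings.members:serviceAccount:terraform@project.iam.gserviceaccount.com" \
#     --format="value(bindings.role)" | has_role roles/compute.admin
```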
Org policy constraint: Your organization has a policy blocking the action entirely. IAM permissions won’t save you here — the constraint wins regardless. Check what’s active on your project:
gcloud resource-manager org-policies list \
--project=your-project
If a relevant constraint shows up (constraints/compute.vmExternalIpAccess is a frequent culprit when instance creation fails), an org admin needs to modify it. You cannot grant your way around an org policy constraint. That’s by design.
Still Failing — Three Edge Cases to Check
Wrong Account or Expired Credentials
I’m apparently bad at keeping credential files organized, and pointing at the wrong AWS account has silently cost me forty minutes of debugging more than once. Your credential file might be referencing a completely different account, subscription, or project than the one you’re actually working in. Confirm what you’re authenticated as right now:
aws sts get-caller-identity
az account show
gcloud auth list && gcloud config get-value project
Wrong account in the output? Update your provider block or set the correct credentials in your environment variables before touching anything else.
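In CI it is worth making this check fatal before Terraform ever runs. A sketch — assert_account and EXPECTED_ACCOUNT are assumed names, and the same pattern works for an Azure subscription ID or a GCP project:

```shell
# assert_account: compare the account you are actually authenticated to
# against the one the Terraform code targets, failing loudly on mismatch.
# Hypothetical helper; EXPECTED_ACCOUNT is an assumed environment variable.
assert_account() {
  expected=$1
  actual=$2
  if [ "$actual" != "$expected" ]; then
    echo "wrong account: expected $expected, got $actual" >&2
    return 1
  fi
}

# Usage:
#   assert_account "$EXPECTED_ACCOUNT" \
#     "$(aws sts get-caller-identity --query Account --output text)"
```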
Provider Alias Misconfiguration
Using provider "aws" { alias = "staging" } or assuming a cross-account role? Check that the alias is referenced correctly everywhere it appears. A mismatch silently falls back to your default credentials — no warning, no obvious error, just a quiet 403 that points you in completely the wrong direction. Verify which provider configuration each module actually requires with terraform providers.
Resource Already Owned by Someone Else
Terraform is trying to modify a resource that already exists in the cloud but was created under a different principal. You have no permissions on it — not because your permissions are misconfigured, but because you genuinely don’t own it. Either import it explicitly with terraform import or track down whoever created it and get them to grant you access.
Here’s the sequence that clears most 403 errors in under ten minutes: enable debug logging, find the exact API action, simulate it against your current permissions, check for org-level blocks, confirm your account and credentials. That order matters. Everything else is noise.