S3 Storage Classes Explained — Which One Saves You the Most Money

If you’ve ever searched “aws s3 storage classes comparison cost” and landed on an article that just reformatted the AWS pricing documentation, you know exactly how frustrating that is. I’ve managed S3 buckets for workloads ranging from a scrappy 20GB side project to multi-terabyte data pipelines, and the honest truth is that most teams are dramatically overpaying — not because the savings are hard to get, but because nobody’s shown them a real decision framework with actual numbers. This article fixes that.

The S3 Storage Classes in 30 Seconds

Before we get into the decision logic, here’s the landscape. AWS offers seven main storage classes, and each one optimizes for a different tradeoff between cost, retrieval speed, and minimum commitment.

| Storage Class | Price per GB/mo | Retrieval Cost | Retrieval Speed | Min Storage Duration |
| --- | --- | --- | --- | --- |
| S3 Standard | $0.023 | None | Milliseconds | None |
| S3 Intelligent-Tiering | $0.023 (frequent) / $0.0125 (infrequent) / $0.004 (archive) | None | Milliseconds (most tiers) | None |
| Standard-IA | $0.0125 | $0.01 per GB | Milliseconds | 30 days |
| One Zone-IA | $0.01 | $0.01 per GB | Milliseconds | 30 days |
| Glacier Instant Retrieval | $0.004 | $0.03 per GB | Milliseconds | 90 days |
| Glacier Flexible Retrieval | $0.0036 | $0.01 per GB (standard) | 3–5 hours | 90 days |
| Glacier Deep Archive | $0.00099 | $0.02 per GB | 12–48 hours | 180 days |

All prices above are for the US East (N. Virginia) region, one of the cheapest AWS regions. Your actual costs will be somewhat higher in regions like EU (Frankfurt) or Asia Pacific (Sydney) — the differences are usually 10–25%, so run your own numbers once you know your region.

Now let’s talk about when each one actually saves money.

Intelligent-Tiering — The Set-and-Forget Option Most Teams Should Use

Probably should have opened with this section, honestly. Intelligent-Tiering is the single most underused cost optimization in S3, and I say that having watched multiple engineering teams pay Standard prices for over a year on data that nobody was touching.

Here’s how it works. When you put an object in Intelligent-Tiering, S3 monitors access patterns automatically. Objects accessed frequently stay in the Frequent Access tier at the Standard price of $0.023/GB/mo. Objects that haven’t been accessed for 30 days drop to the Infrequent Access tier at $0.0125/GB/mo. Leave something untouched for 90 days and it moves to the Archive Instant Access tier at $0.004/GB/mo. You pay no retrieval fees at any of these tiers. The only cost is a per-object monitoring fee of $0.0025 per 1,000 objects per month.

That monitoring fee is important, and so is the fine print around it: objects smaller than 128KB are never monitored or auto-tiered at all. They sit in the Frequent Access tier at Standard rates forever, so a bucket full of tiny files gains nothing from Intelligent-Tiering. For eligible objects, the fee scales with object count rather than size. Ten million monitored objects cost $25/month whether they're 200KB or 2GB each, and at 200KB (about 2TB total) the best-case Infrequent Access saving is only around $21/month. The math flips against you fast for small objects.
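
Whether the fee is worth paying comes down to one comparison. Here's a minimal sketch of that break-even check (`tiering_worth_it` is my own helper, not an AWS API; it uses decimal GB for simplicity and ignores the Archive Instant Access tier, which improves the picture for data that goes fully cold):

```python
# Rough Intelligent-Tiering break-even: per-object monitoring fee vs.
# best-case Infrequent Access saving. Prices are the us-east-1 figures
# from the table above.
MONITORING_PER_1000_OBJECTS = 0.0025  # $/month per 1,000 monitored objects
STANDARD = 0.023                      # $/GB-month (Frequent Access tier)
INFREQUENT = 0.0125                   # $/GB-month (Infrequent Access tier)

def tiering_worth_it(object_count: int, avg_size_kb: float) -> bool:
    """True if the best-case IA savings exceed the monitoring fee."""
    if avg_size_kb < 128:
        return False  # objects under 128KB are never monitored or tiered
    total_gb = object_count * avg_size_kb / 1_000_000
    monitoring_fee = object_count / 1000 * MONITORING_PER_1000_OBJECTS
    best_savings = total_gb * (STANDARD - INFREQUENT)
    return best_savings > monitoring_fee

print(tiering_worth_it(10_000_000, 200))   # 2TB of 200KB objects: False
print(tiering_worth_it(1_000_000, 5_000))  # 5TB of 5MB objects: True
```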

Real Cost Comparison — 100GB Bucket, Mixed Access Patterns

Let’s say you have a 100GB bucket. Maybe it’s user-uploaded assets, application logs, or processed data exports. Assume 60% of it gets accessed regularly and 40% sits untouched for months.

  • S3 Standard: 100GB × $0.023 = $2.30/month
  • Intelligent-Tiering: 60GB × $0.023 + 40GB × $0.0125 = $1.38 + $0.50 = $1.88/month (plus negligible monitoring fees at this scale)

That’s an 18% savings with zero work after enabling it. At 1TB with those same access patterns, you’re saving $4.20/month. At 10TB, $42/month; at 100TB, $420/month. Annually at 100TB, that’s over $5,000 back on your AWS bill — without a single engineer doing anything except flipping the storage class on a bucket.
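
Those numbers are easy to reproduce. A quick sketch of the same math (hypothetical helper functions; the 60/40 hot/cold split and the us-east-1 prices are the assumptions from above):

```python
# Standard vs. Intelligent-Tiering for a bucket where 60% of data stays
# hot and 40% ages into the Infrequent Access tier.
STANDARD = 0.023     # $/GB-month
INFREQUENT = 0.0125  # $/GB-month

def monthly_cost_standard(size_gb: float) -> float:
    return size_gb * STANDARD

def monthly_cost_tiered(size_gb: float, hot_fraction: float = 0.6) -> float:
    hot = size_gb * hot_fraction
    cold = size_gb - hot
    return hot * STANDARD + cold * INFREQUENT

for size in (100, 1_000, 10_000):  # GB
    std, tiered = monthly_cost_standard(size), monthly_cost_tiered(size)
    print(f"{size:>6} GB: ${std:.2f} vs ${tiered:.2f} (save ${std - tiered:.2f}/mo)")
```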

Stumped by wildly variable access patterns on our media delivery bucket, I spent two weeks building a custom Lambda function to move objects between storage classes before a colleague pointed out Intelligent-Tiering existed. That’s two weeks I’d like back. Don’t repeat my mistake.

When Intelligent-Tiering Actually Makes Sense

  • Objects larger than 128KB (smaller objects don’t benefit from tiering)
  • You can’t predict access patterns in advance
  • Data that needs immediate retrieval but might go cold over time
  • Buckets with fewer than a few million objects (keep the monitoring fee manageable)
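
When those boxes are checked, enabling it is a one-time change. A hedged sketch of a lifecycle rule that migrates a whole bucket (the rule shape follows S3's lifecycle configuration API; the bucket name and the apply step, e.g. boto3's `put_bucket_lifecycle_configuration`, are left to you):

```python
# Lifecycle rule that transitions every object in a bucket into
# Intelligent-Tiering. Apply it with, e.g.:
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-example-bucket",  # hypothetical bucket name
#       LifecycleConfiguration=lifecycle)
lifecycle = {
    "Rules": [{
        "ID": "move-to-intelligent-tiering",
        "Status": "Enabled",
        "Filter": {},  # empty filter = apply to the whole bucket
        "Transitions": [{"Days": 0, "StorageClass": "INTELLIGENT_TIERING"}],
    }]
}
print(lifecycle["Rules"][0]["Transitions"][0]["StorageClass"])
```

New uploads can skip the transition entirely by setting `StorageClass="INTELLIGENT_TIERING"` on the put itself.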

When Standard-IA and One Zone-IA Make Sense

Standard-IA (Infrequent Access) costs $0.0125/GB per month — exactly half the price of Standard. One Zone-IA costs $0.01/GB per month, storing data in a single availability zone instead of three. Both sound like obvious wins until you understand the minimums baked into the pricing.

AWS charges a 128KB minimum object size for IA classes. If you store a 10KB file in Standard-IA, you get billed as if it’s 128KB. And there’s a 30-day minimum storage duration — delete an object after 10 days and you still pay for the full 30.

The Small File Problem

Here’s the math that trips people up. Suppose you’re storing 10,000 files that average 5KB each. Total size is 50MB.

  • S3 Standard: 0.05GB × $0.023 = $0.00115/month
  • Standard-IA (billed at 128KB minimum): 10,000 × 128KB = 1,280MB = 1.28GB × $0.0125 = $0.016/month

Standard-IA costs 14x more than Standard for small files. This is not a corner case — it catches a lot of teams storing thumbnails, config snippets, or metadata JSON files in IA classes thinking they’re saving money.
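
The same 14x figure falls out of a few lines (illustrative helpers using decimal units and the us-east-1 prices above):

```python
# Effective Standard-IA bill with the 128KB per-object billing minimum,
# compared against plain Standard.
STANDARD = 0.023      # $/GB-month
STANDARD_IA = 0.0125  # $/GB-month
IA_MIN_KB = 128       # small objects are billed as if they were 128KB

def monthly_standard(count: int, size_kb: float) -> float:
    return count * size_kb / 1_000_000 * STANDARD

def monthly_standard_ia(count: int, size_kb: float) -> float:
    billed_kb = max(size_kb, IA_MIN_KB)
    return count * billed_kb / 1_000_000 * STANDARD_IA

# 10,000 files at 5KB each (50MB total)
print(round(monthly_standard(10_000, 5), 5))     # ~0.00115
print(round(monthly_standard_ia(10_000, 5), 5))  # ~0.016
```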

Where Standard-IA Actually Wins

Standard-IA is the right choice when your objects are large, you access them rarely but need them immediately when you do, and they’ll live in S3 for more than 30 days. Backup files are the canonical example. So are database snapshots, quarterly reports, and compliance audit exports.

Concrete scenario: you’re storing 500GB of database backups. You need instant access if a restore is required, but you might touch these files twice a year.

  • S3 Standard: 500GB × $0.023 = $11.50/month
  • Standard-IA: 500GB × $0.0125 = $6.25/month (plus retrieval costs on the rare restore)

You save $63/year on storage alone. Two restores at 500GB each would cost $10 in retrieval fees. You’re still ahead by $53 annually. That math holds up.
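
A sketch of that break-even, including the retrieval fees (helper names are mine; prices are from the table above):

```python
# Annual Standard vs. Standard-IA cost for large, rarely-touched objects.
STANDARD = 0.023      # $/GB-month
STANDARD_IA = 0.0125  # $/GB-month
IA_RETRIEVAL = 0.01   # $/GB retrieved

def annual_standard(size_gb: float) -> float:
    return size_gb * STANDARD * 12

def annual_ia(size_gb: float, gb_retrieved_per_year: float) -> float:
    return size_gb * STANDARD_IA * 12 + gb_retrieved_per_year * IA_RETRIEVAL

# 500GB of backups with two full 500GB restores during the year
print(round(annual_standard(500), 2))  # ~138.00
print(round(annual_ia(500, 1000), 2))  # ~85.00
```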

One Zone-IA — The Extra Discount With One Catch

One Zone-IA saves you another 20% versus Standard-IA, but your data only lives in one AWS availability zone. If that AZ goes down — which AWS availability zones occasionally do — your data is unavailable until it recovers. More critically, if the AZ suffers a permanent loss event, your data is gone. AWS doesn’t replicate it.

Use One Zone-IA for data you can regenerate. Thumbnail images are the textbook case. Your original high-resolution photos are in Standard. The 200×200 thumbnails are in One Zone-IA. If they’re lost, you regenerate them. This approach works well. Using One Zone-IA for your only copy of customer records does not.

Glacier Tiers — Long-Term Storage Cost Math

The Glacier tiers are where the really dramatic savings live, and they’re also where teams most often miscalculate costs by ignoring retrieval fees.

Understanding the Three Glacier Options

Glacier Instant Retrieval at $0.004/GB/month gives you millisecond retrieval with a $0.03/GB retrieval fee. It’s designed for data you might need once a quarter — medical images, old user records you need to pull up occasionally, archival media files.

Glacier Flexible Retrieval at $0.0036/GB/month is nearly the same storage price, but retrieval takes 3–5 hours on the standard tier (or 1–5 minutes on expedited, which costs more). The standard retrieval fee is $0.01/GB.

Glacier Deep Archive at $0.00099/GB/month is the cheapest storage AWS sells in any form. Retrieval takes 12–48 hours and costs $0.02/GB.

Real Cost Comparison — 1TB Stored for One Year

Let’s run the actual numbers. Assume you store 1TB for 12 months and retrieve 50GB twice during the year (100GB total retrieval).

  • S3 Standard: 1,000GB × $0.023 × 12 = $276/year. No retrieval fees. Total: $276.
  • Glacier Instant Retrieval: 1,000GB × $0.004 × 12 = $48/year. Plus 100GB × $0.03 retrieval = $3. Plus 90-day minimum billing applies — but assuming you store for a full year, total: $51.
  • Glacier Flexible Retrieval: 1,000GB × $0.0036 × 12 = $43.20/year. Plus 100GB × $0.01 retrieval = $1. Total: $44.20.
  • Glacier Deep Archive: 1,000GB × $0.00099 × 12 = $11.88/year. Plus 100GB × $0.02 retrieval = $2. Total: $13.88.

Deep Archive at $13.88 versus Standard at $276. That’s a 95% reduction. Or to put it another way: Deep Archive is roughly 20x cheaper than Standard all-in (23x on the storage price alone) for data that just needs to exist somewhere safe and legal.
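
All four annual totals come from the same two-term formula — storage plus retrieval — so here's a sketch you can rerun with your own sizes (tier labels are mine; prices from the table above):

```python
# One year of 1TB storage with 100GB retrieved, across the classes above.
# Values are (storage $/GB-month, retrieval $/GB), us-east-1.
TIERS = {
    "Standard":         (0.023,   0.00),
    "Glacier Instant":  (0.004,   0.03),
    "Glacier Flexible": (0.0036,  0.01),
    "Deep Archive":     (0.00099, 0.02),
}

def annual_cost(storage_per_gb: float, retrieval_per_gb: float,
                size_gb: float = 1000, retrieved_gb: float = 100) -> float:
    return size_gb * storage_per_gb * 12 + retrieved_gb * retrieval_per_gb

for name, (storage, retrieval) in TIERS.items():
    print(f"{name}: ${annual_cost(storage, retrieval):.2f}")
```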

When the Wait Is Worth It

The question is always whether the retrieval delay costs you something real. For compliance data, legal holds, financial records you’re required to keep for seven years but rarely access — the 12-hour wait for Deep Archive is completely irrelevant. Nobody needs a 2018 vendor invoice in the next 20 minutes.

For disaster recovery, the answer is more nuanced. Glacier Flexible Retrieval’s 3–5 hour window might be acceptable for secondary backups if your primary recovery path uses something faster. It’s not acceptable as your only recovery mechanism for production databases.

Watch the Minimum Storage Durations

Glacier Instant Retrieval and Glacier Flexible Retrieval both have 90-day minimum storage durations. Deep Archive is 180 days. Delete before those windows and you pay for the full minimum. If you’re archiving data that might get purged within a few months, run the math before committing — you might be better off in Standard-IA for short-term infrequent storage.
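
Early deletion is easy to model: you’re billed for max(days stored, class minimum). A small sketch (the storage class names are S3's real API constants; the 100GB scenario is illustrative):

```python
# Early-delete cost: storage is billed for at least the class minimum.
MIN_DAYS = {"STANDARD_IA": 30, "ONEZONE_IA": 30, "GLACIER_IR": 90,
            "GLACIER": 90, "DEEP_ARCHIVE": 180}

def billed_days(storage_class: str, days_stored: int) -> int:
    return max(days_stored, MIN_DAYS.get(storage_class, 0))

# 100GB put in Deep Archive, deleted after 30 days: billed for 180 days
gb, kept = 100, 30
cost = gb * 0.00099 * billed_days("DEEP_ARCHIVE", kept) / 30  # /30 ≈ days to months
print(round(cost, 2))  # ~0.59: six months' storage for one month's use
```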

Choosing Between the Three Glacier Tiers

  • Need data immediately when you do need it (even quarterly) — Glacier Instant Retrieval
  • Can wait 3–5 hours, retrieve occasionally — Glacier Flexible Retrieval
  • Rarely or never retrieve, compliance-driven retention — Glacier Deep Archive

Putting It Together — A Quick Decision Framework

After working through these numbers across a few different accounts, here’s the shorthand I actually use.

  1. Data that’s accessed regularly and unpredictably → Intelligent-Tiering (for objects over 128KB)
  2. Data you know you’ll access rarely but immediately when needed, objects are large → Standard-IA
  3. Regenerable data accessed rarely → One Zone-IA
  4. Archived data you might need once a quarter → Glacier Instant Retrieval
  5. Long-term backup you might need in hours → Glacier Flexible Retrieval
  6. Compliance retention, rarely if ever retrieved → Glacier Deep Archive
  7. Everything else, or when you’re unsure → S3 Standard (no minimums, no surprises)
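
That shorthand translates almost directly into code. A sketch of the framework as a decision function (the categories and thresholds are this article's, not an official AWS taxonomy; return values are S3's StorageClass constants):

```python
def pick_storage_class(access: str, object_size_kb: float = 1024,
                       regenerable: bool = False,
                       max_wait_hours: float = 0.0) -> str:
    """Map an access pattern onto the framework's seven rules."""
    if object_size_kb < 128:
        return "STANDARD"             # IA minimums / no tiering below 128KB
    if access == "unpredictable":
        return "INTELLIGENT_TIERING"  # rule 1
    if access == "rare":
        if regenerable:
            return "ONEZONE_IA"       # rule 3
        if max_wait_hours >= 12:
            return "DEEP_ARCHIVE"     # rule 6
        if max_wait_hours >= 3:
            return "GLACIER"          # rule 5 (Flexible Retrieval)
        return "STANDARD_IA"          # rule 2
    if access == "archive":           # quarterly-ish, needs instant access
        return "GLACIER_IR"           # rule 4
    return "STANDARD"                 # rule 7: everything else

print(pick_storage_class("rare", max_wait_hours=48))  # DEEP_ARCHIVE
```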

The biggest money-saving move for most teams is just enabling Intelligent-Tiering on large active buckets. It requires no ongoing management, no access pattern analysis, no Lambda functions. The second biggest move is pushing genuinely cold data to Deep Archive instead of letting it sit in Standard forever. Combined, those two changes typically cut S3 bills by 40–70% for teams that haven’t thought about storage classes before.

Run your own numbers with the actual sizes and retrieval frequencies you have. The AWS Pricing Calculator handles all of this if you want to be precise. But even rough estimates will tell you whether you’re leaving significant money on the table — and for most teams that haven’t revisited their S3 configuration in a while, the answer is yes.

Jason Michael
