S3 Storage Class Optimisation: Real Numbers
S3 has eight storage classes and a confusing pricing model. Here's a clear map of which class fits which workload, with real numbers from real audits.
By Andrii Votiakov
S3 is the bucket every AWS account fills with everything and revisits never. It's also one of the easier wins on a cost audit — moving objects to the right storage class typically takes 40-70% off the storage line. Keep in mind that data flowing out of S3 to other services adds data transfer charges on top — those two line items are worth auditing together. And logs you move out of S3 into CloudWatch should follow the CloudWatch cost optimisation playbook to avoid paying storage twice.
Quick answer
For unknown access patterns, S3 Intelligent-Tiering is the safe default — it auto-tiers and is rarely worse than Standard. For known cold data over 90 days old, go straight to Glacier Instant Retrieval or Glacier Flexible Retrieval. For archives you'll touch at most once a year, Glacier Deep Archive at $0.00099/GB-month is unbeatable.
The eight classes, demystified
| Class | $/GB-mo (eu-west-1) | Retrieval | When to use |
|---|---|---|---|
| Standard | $0.023 | Free, ms | Hot data, < 30 days |
| Intelligent-Tiering | $0.023 + monitoring | Free, ms | Mixed/unknown access |
| Standard-IA | $0.0125 | $0.01/GB | Cold but instantly needed, > 30 days |
| One Zone-IA | $0.01 | $0.01/GB | Re-creatable cold data, single AZ acceptable |
| Glacier Instant Retrieval | $0.004 | $0.03/GB | Cold archive, rare ms reads |
| Glacier Flexible Retrieval | $0.0036 | $0.01/GB + minutes-hours | Periodic archives |
| Glacier Deep Archive | $0.00099 | $0.02/GB + 12 hours | Compliance, deep archives |
| Reduced Redundancy | deprecated | — | Don't use |
(Prices indicative for eu-west-1 in 2026; check current AWS pricing for exact numbers.)
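At scale, a single terabyte works out to roughly $23/month in Standard, $12.50 in Standard-IA, $4 in Glacier Instant Retrieval and about $1 in Deep Archive — before any retrieval fees.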
The decision rules
For application data (uploads, user content)
- First 30 days: Standard (likely hot)
- 30-90 days: Standard-IA (still might be needed quickly)
- 90+ days: Glacier Instant Retrieval if you need ms response when you do read it; Glacier Flexible Retrieval if minutes-hours is fine
- Forever: Glacier Deep Archive for the truly cold archive
This is one Lifecycle Rule per bucket. Set it once, save forever.
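As a sketch in lifecycle-configuration JSON — the prefix and the 365-day Deep Archive cut-off are placeholders to match your own layout and retention:
{
  "Rules": [{
    "ID": "app-data-tiering",
    "Filter": { "Prefix": "uploads/" },
    "Status": "Enabled",
    "Transitions": [
      { "Days": 30, "StorageClass": "STANDARD_IA" },
      { "Days": 90, "StorageClass": "GLACIER_IR" },
      { "Days": 365, "StorageClass": "DEEP_ARCHIVE" }
    ]
  }]
}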
For logs
- CloudFront/ALB/VPC Flow Logs: Standard for 30 days, Glacier Flexible for 60 days, delete or Deep Archive after retention period (sketched below)
- Application logs: same pattern, but be aggressive — anything not queried in 30 days is cold
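A sketch of the CloudFront/ALB rule above — GLACIER is the API name for Glacier Flexible Retrieval; the prefix and the 90-day expiry (30 in Standard plus 60 in Glacier Flexible) are placeholders, and you'd swap the Expiration block for a DEEP_ARCHIVE transition if your retention policy says keep rather than delete:
{
  "Rules": [{
    "ID": "alb-logs-tiering",
    "Filter": { "Prefix": "alb-logs/" },
    "Status": "Enabled",
    "Transitions": [{ "Days": 30, "StorageClass": "GLACIER" }],
    "Expiration": { "Days": 90 }
  }]
}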
For backups
- Last 7 days: Standard-IA
- Last 30 days: Glacier Instant Retrieval
- 30 days to retention end: Glacier Deep Archive
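One implementation note: lifecycle rules won't transition objects into Standard-IA or One Zone-IA until they're at least 30 days old, so the "last 7 days" tier in practice means the backup job writes to Standard-IA directly at upload time rather than relying on a transition.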
For build artefacts and CI cache
- Last 30 days: Standard
- 30+ days: delete (you'll rebuild from source)
When to use Intelligent-Tiering
Use it when access patterns are mixed or unknown — user-uploaded files, asset libraries, datasets where some objects get hammered and most don't. Auto-tiering pays for itself once an average object hasn't been accessed in 30+ days.
Don't use it for:
- Objects smaller than 128 KB (they're never auto-tiered — they just sit at the Frequent Access rate, so there's nothing to gain)
- Predictable patterns where lifecycle rules are cheaper
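When it does fit, opting a prefix in is one more lifecycle transition — a minimal sketch, prefix as a placeholder; a Days value of 0 moves objects into Intelligent-Tiering as soon as the rule takes effect:
{
  "Rules": [{
    "ID": "user-content-to-intelligent-tiering",
    "Filter": { "Prefix": "user-content/" },
    "Status": "Enabled",
    "Transitions": [{ "Days": 0, "StorageClass": "INTELLIGENT_TIERING" }]
  }]
}
New uploads can skip the transition entirely by setting the storage class to INTELLIGENT_TIERING at PUT time.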
What people get wrong
Confusing $/GB-month with total cost
Standard-IA looks cheap at $0.0125/GB-month — half of Standard. But every GB you read costs $0.01 retrieval. If you read the data more than once a month on average, IA costs more, not less.
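The break-even falls straight out of the table: $0.023 for Standard against $0.0125 plus $0.01 per GB read for IA, which cross at just over one full read per GB per month.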
Ignoring retrieval fees on cold tiers
Moving 10 TB to Glacier Instant Retrieval saves you ~$190/month vs Standard. But if you accidentally read it all back you're paying $300 in retrieval fees. Worth it long-term, but plan for one-off reads.
Forgetting minimum storage durations
- Standard-IA: 30 days
- Glacier Instant Retrieval: 90 days
- Glacier Flexible: 90 days
- Glacier Deep Archive: 180 days
Delete an object early and you still pay through the minimum. Don't move data to cold tiers if its real lifetime is shorter.
Versioning + lifecycle gotchas
If you have versioning on, lifecycle rules need explicit NoncurrentVersionTransition and NoncurrentVersionExpiration rules. Without them, old versions accumulate at full Standard pricing.
I've seen accounts with 30 TB of versioned objects nobody knew existed. That's $700/month for nothing.
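A sketch of the noncurrent-version handling — the 30/90-day windows are placeholders to tune to how far back you actually roll back:
{
  "Rules": [{
    "ID": "expire-old-versions",
    "Filter": { "Prefix": "" },
    "Status": "Enabled",
    "NoncurrentVersionTransitions": [{ "NoncurrentDays": 30, "StorageClass": "GLACIER" }],
    "NoncurrentVersionExpiration": { "NoncurrentDays": 90 }
  }]
}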
Multipart uploads not finished
Failed multipart uploads keep their parts billable forever. Add a lifecycle rule to abort incomplete multipart uploads after 7 days. It's a one-rule policy:
{
  "Rules": [{
    "ID": "abort-incomplete-mpu",
    "Filter": { "Prefix": "" },
    "Status": "Enabled",
    "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
  }]
}
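If you apply these with aws s3api put-bucket-lifecycle-configuration, remember each call replaces the bucket's entire lifecycle configuration — keep the tiering, noncurrent-version and MPU-abort rules together in one document per bucket.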
How to find what to fix
Storage Lens
S3 Storage Lens (free dashboard plus paid tier) tells you per-bucket: what's in Standard, what's in IA, top object age distributions. The free dashboard alone is enough to spot 80% of waste.
S3 Inventory
For deep dives, enable Inventory on your largest buckets. CSV report drops daily into a target bucket. Query with Athena:
-- objects untouched for 90+ days, grouped by current storage class
SELECT storage_class, count(*) AS objects, sum(size)/1e9 AS gb
FROM s3_inventory
WHERE last_modified_date < date_add('day', -90, current_date)
GROUP BY storage_class;
Anything > 90 days old still in Standard is your immediate hit list.
Realistic savings
On a recent client (~85 TB across 11 buckets, $1,950/month S3):
- Lifecycle rules to Glacier Flexible for 60+ day data: $680/month saved
- Aborted incomplete multipart cleanup: $120/month
- Old versioned objects expired: $240/month
- Two never-used buckets deleted: $110/month
Final: $800/month, 59% reduction. Implementation: one engineer, three days.
Want me to dig through your S3 buckets and find the cold storage you've forgotten about? Book a call.