Optimizing AWS Costs: NAT Gateways, S3 Storage Classes, and EBS Lifecycle Management

Most AWS environments carry somewhere between 20% and 35% in avoidable spend. This is not a controversial claim. Industry reports from Flexera and Gartner have consistently placed cloud waste in that range, and the pattern holds regardless of whether the organization is a startup running a handful of services or an enterprise managing hundreds of accounts. The waste rarely comes from one large, obvious mistake. It accumulates through a series of small architectural decisions that made sense at the time but were never revisited as workloads evolved.

What makes AWS cost waste particularly difficult to catch is that the billing model is designed around granularity. You are charged per hour, per gigabyte, per request, and per data transfer path. Each individual charge looks reasonable in isolation. It is only when you trace the full path of a request through your infrastructure, accounting for every service it touches along the way, that the compounding effect becomes visible.

This post examines three of the most common cost leaks we see in AWS environments. Rather than listing surface-level tips, we will walk through the mechanics of each one: why it happens, how to identify it in your own account, and how to address it with specific AWS tools and configuration changes.
NAT Gateway data processing charges
NAT Gateways are one of the most commonly misunderstood cost centers in AWS networking. When you provision a NAT Gateway, AWS charges you in two dimensions simultaneously: a flat hourly rate of $0.045 (in us-east-1), and a data processing charge of $0.045 for every gigabyte that flows through the gateway in either direction. These charges apply on top of any standard data transfer fees that AWS levies for cross-AZ or internet-bound traffic. The hourly charge alone works out to roughly $32.40 per month for a single NAT Gateway running continuously. Since AWS recommends deploying one NAT Gateway per Availability Zone for high availability, a standard three-AZ production architecture carries a baseline cost of approximately $97 per month before a single byte of data is processed. This is the cost of the NAT Gateways simply existing. The data processing charge is where the bill compounds.
Consider a common scenario: your application runs in private subnets and makes regular API calls to AWS services like S3, DynamoDB, SQS, or CloudWatch. By default, all of this traffic routes through the NAT Gateway, and every gigabyte is charged at $0.045. A workload pulling 500 GB of data from S3 per month through a NAT Gateway incurs $22.50 in processing charges alone, for traffic that could flow entirely for free through a properly configured VPC endpoint. The compounding gets worse when you factor in container workloads. ECS and EKS tasks running in private subnets pull container images from Amazon ECR through the NAT Gateway. A 500 MB container image pulled 100 times per month represents 50 GB of NAT Gateway traffic, adding $2.25 per month per image. Across a fleet of microservices with frequent deployments, this accumulates into a meaningful line item.
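The arithmetic above can be sketched in a few lines of Python as a sanity check. The rates are the us-east-1 figures from the text; the 500 GB of S3 traffic and 50 GB of ECR pulls are the illustrative volumes discussed above, not measurements.

```python
# NAT Gateway monthly cost sketch (us-east-1 rates; a 30-day month).
HOURLY_RATE = 0.045      # $ per NAT Gateway per hour
PER_GB_RATE = 0.045      # $ per GB processed through the gateway
HOURS_PER_MONTH = 24 * 30

def nat_monthly_cost(gateways: int, gb_processed: float) -> float:
    """Baseline hourly charges plus data processing charges."""
    hourly = gateways * HOURLY_RATE * HOURS_PER_MONTH
    processing = gb_processed * PER_GB_RATE
    return round(hourly + processing, 2)

# Three-AZ deployment: baseline with no traffic, then with the
# 500 GB of S3 traffic plus 50 GB of ECR image pulls from the text.
print(nat_monthly_cost(3, 0))    # 97.2
print(nat_monthly_cost(3, 550))  # 121.95
```

Note that the data processing term scales linearly with traffic while the baseline is fixed, which is why the fix below targets the traffic, not the gateways themselves.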
How to identify it: Open AWS Cost Explorer and filter by service for "VPC" or "EC2-Other." Look for the line item labeled "NatGateway-Bytes" under usage type. If you are processing more than a few gigabytes per month, you are likely paying for traffic that could be routed more efficiently. You can also enable VPC Flow Logs and analyze them to understand which services and endpoints are generating the most NAT Gateway traffic.
How to fix it: The most impactful change is deploying VPC Gateway Endpoints for S3 and DynamoDB. Gateway Endpoints are completely free. There is no hourly charge and no data processing charge. Traffic routes over AWS's private network backbone instead of traversing the NAT Gateway. The setup takes minutes: you create the endpoint in your VPC, associate it with the relevant route tables, and the traffic is redirected automatically. No application code changes are required.
For other AWS services like SQS, SNS, CloudWatch, ECR, and Secrets Manager, you can deploy VPC Interface Endpoints (powered by AWS PrivateLink). These do carry a cost: roughly $0.01 per hour for each Availability Zone in which the endpoint is provisioned, plus $0.01 per GB of data processed. That is still significantly cheaper than the $0.045 per GB you would pay through the NAT Gateway, and the cost difference becomes substantial at any meaningful traffic volume. For ECR specifically, deploying an Interface Endpoint also eliminates the NAT Gateway charges incurred during container image pulls, which can represent a surprisingly large portion of total NAT traffic in containerized environments. It is worth noting that deploying these endpoints is not an all-or-nothing decision. You can start by deploying the free S3 and DynamoDB Gateway Endpoints, monitor the impact on your NAT Gateway data processing charges for a billing cycle, and then evaluate whether Interface Endpoints for other services are justified based on your traffic patterns.
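To see where the break-even sits for an Interface Endpoint, here is a rough comparison sketch. It assumes one endpoint network interface per AZ and compares only the marginal routing cost; the NAT Gateway's own hourly charge is left out, since you typically keep the gateway for general internet egress either way.

```python
# Interface Endpoint vs NAT Gateway routing cost (us-east-1 rates,
# 30-day month; assumes one endpoint ENI per Availability Zone).
NAT_PER_GB = 0.045
ENDPOINT_PER_GB = 0.01
ENDPOINT_HOURLY = 0.01   # per endpoint ENI per hour
HOURS_PER_MONTH = 24 * 30

def endpoint_monthly_cost(azs: int, gb: float) -> float:
    return round(azs * ENDPOINT_HOURLY * HOURS_PER_MONTH + gb * ENDPOINT_PER_GB, 2)

def nat_processing_cost(gb: float) -> float:
    return round(gb * NAT_PER_GB, 2)

# At 500 GB/month, the endpoint's fixed hourly charge still dominates...
print(endpoint_monthly_cost(3, 500))   # 26.6
print(nat_processing_cost(500))        # 22.5
# ...but at 2 TB/month the $0.035/GB difference tips the balance.
print(endpoint_monthly_cost(3, 2000))  # 41.6
print(nat_processing_cost(2000))       # 90.0
```

This is why the text suggests evaluating Interface Endpoints against your actual traffic patterns rather than deploying them everywhere by default.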
S3 storage class mismatches
Amazon S3 pricing is structured around storage classes, each designed for a different access pattern. S3 Standard, the default class, costs $0.023 per GB per month in us-east-1. This is appropriate for data that is accessed frequently and requires low-latency retrieval. The problem is that most teams store everything in S3 Standard regardless of how often the data is actually accessed, and they rarely revisit this decision as data accumulates over time. The cost difference between storage classes is significant. S3 Standard-Infrequent Access (Standard-IA) costs $0.0125 per GB per month, roughly half the price of Standard. S3 Glacier Instant Retrieval, which still provides millisecond-level access latency, costs $0.004 per GB per month, which is about 83% less than Standard. S3 Glacier Deep Archive, designed for compliance and regulatory data that is accessed very rarely, costs just $0.00099 per GB per month.
To put these numbers in practical terms: an organization storing 10 TB of data entirely in S3 Standard pays approximately $235 per month in storage costs. If 70% of that data is archival (log files, old backups, historical exports, compliance records) and could be moved to Glacier Instant Retrieval, the storage cost for that 7 TB drops from $164 to $28 per month. That is a saving of $136 per month, or over $1,600 per year, for a single storage optimization on a relatively modest data footprint. As data volumes grow into the tens or hundreds of terabytes, these savings scale proportionally. The reason this waste persists is partly behavioral. Teams create S3 buckets, upload data, and move on. There is no built-in mechanism that alerts you when the majority of objects in a bucket have not been accessed in months. The data just sits there, billed at the Standard rate, growing quietly with every new upload.
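The 10 TB example can be worked through directly. The per-GB rates are the us-east-1 figures quoted above; the 70/30 archival split is the assumption from the text.

```python
# S3 storage class cost comparison (us-east-1 per-GB-month rates).
RATES = {
    "STANDARD": 0.023,
    "STANDARD_IA": 0.0125,
    "GLACIER_IR": 0.004,       # Glacier Instant Retrieval
    "DEEP_ARCHIVE": 0.00099,   # Glacier Deep Archive
}
GB_PER_TB = 1024

def monthly_cost(tb: float, storage_class: str) -> float:
    return round(tb * GB_PER_TB * RATES[storage_class], 2)

all_standard = monthly_cost(10, "STANDARD")
mixed = monthly_cost(3, "STANDARD") + monthly_cost(7, "GLACIER_IR")
print(all_standard)                     # 235.52
print(round(mixed, 2))                  # 99.33
print(round(all_standard - mixed, 2))   # 136.19 saved per month
```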
How to identify it: AWS provides two tools that make this analysis straightforward. The first is S3 Storage Lens, which gives you an account-wide or organization-wide view of your S3 usage broken down by storage class, bucket, region, and access patterns. It will show you exactly how much data sits in each storage class and highlight buckets where a large percentage of objects have not been accessed recently. The second tool is S3 Storage Class Analysis, which you can enable on individual buckets. It monitors object-level access patterns over a 30-day period and generates recommendations for which objects would benefit from transitioning to a lower-cost storage class.
How to fix it: The simplest and most broadly applicable fix is enabling S3 Intelligent-Tiering on buckets where access patterns are unpredictable or mixed. Intelligent-Tiering automatically monitors each object's access frequency and moves it from the frequent access tier to an infrequent access tier after 30 consecutive days without access, and then to an archive instant access tier after 90 days, all without affecting retrieval latency. You can additionally opt in to two deeper tiers: an archive access tier (after a minimum of 90 days without access) and a deep archive access tier (after a minimum of 180 days). Objects in these opt-in tiers must be restored before they can be read, so enable them only for data that can tolerate retrieval delays.
The key advantage of Intelligent-Tiering is that there are no retrieval fees when objects move back to the frequent access tier, so you do not pay a penalty if an archived object is suddenly needed. The trade-off is a small monthly monitoring fee of $0.0025 per 1,000 objects, which is negligible for most workloads. Note that objects smaller than 128 KB are not monitored or tiered at all; they simply remain in the frequent access tier, so buckets dominated by millions of very small objects gain little from Intelligent-Tiering.
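If you do enable the optional archive tiers, the bucket-level configuration looks roughly like this with boto3. The bucket name and configuration Id are hypothetical placeholders, and the AWS call is shown but not executed here. Keep in mind that the configuration only enables the archive tiers; objects participate in Intelligent-Tiering when they are uploaded to, or lifecycle-transitioned into, the INTELLIGENT_TIERING storage class.

```python
# Sketch: opting a bucket into Intelligent-Tiering's archive tiers.
# Id and bucket name are hypothetical; boto3 call left commented out.
config = {
    "Id": "archive-after-90-days",   # hypothetical configuration Id
    "Status": "Enabled",
    "Tierings": [
        {"Days": 90, "AccessTier": "ARCHIVE_ACCESS"},
        {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"},
    ],
}

# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_intelligent_tiering_configuration(
#     Bucket="example-logs-bucket",   # hypothetical bucket
#     Id=config["Id"],
#     IntelligentTieringConfiguration=config,
# )
print(config["Tierings"][0]["AccessTier"])  # ARCHIVE_ACCESS
```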
For data that you know is archival from the start, such as log files or compliance backups, the more direct approach is configuring S3 Lifecycle Rules on the relevant buckets. A lifecycle rule can automatically transition objects to Standard-IA after 30 days, to Glacier Instant Retrieval after 90 days, and to Glacier Deep Archive after 365 days. You can also configure lifecycle rules to expire (delete) objects after a defined retention period, which prevents old data from accumulating indefinitely.
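A lifecycle rule matching that schedule looks roughly like this in the structure boto3's put_bucket_lifecycle_configuration expects. The bucket name, the logs/ prefix, and the seven-year expiration are hypothetical placeholders, and the AWS call is shown but not executed.

```python
# Sketch of the lifecycle rule described above: Standard-IA at 30 days,
# Glacier Instant Retrieval at 90, Deep Archive at 365, expiry at ~7 years.
rule = {
    "ID": "archive-logs",
    "Status": "Enabled",
    "Filter": {"Prefix": "logs/"},   # hypothetical prefix
    "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 90, "StorageClass": "GLACIER_IR"},
        {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},
    ],
    "Expiration": {"Days": 2555},    # hypothetical retention period
}

# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-logs-bucket",  # hypothetical bucket
#     LifecycleConfiguration={"Rules": [rule]},
# )
print(len(rule["Transitions"]))  # 3
```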
One important caveat: both Standard-IA and One Zone-IA enforce a minimum storage duration of 30 days. If you delete or overwrite an object before the 30-day mark, you are still charged for the full 30 days at the IA storage rate. This means lifecycle transitions should be configured with retention requirements in mind, not applied indiscriminately to all buckets.
Unattached EBS volumes and orphaned snapshots
This is perhaps the most straightforward cost leak on this list, and also one of the most persistent. When you terminate an EC2 instance, the attached EBS volumes are not always deleted along with it. Whether the volume is deleted depends on the "Delete on Termination" attribute, which is set at the time the volume is attached. For root volumes, this attribute defaults to true, but for additional data volumes, it often defaults to false. This means that terminating an EC2 instance can leave behind one or more "orphaned" EBS volumes that are no longer attached to any running instance but continue to incur storage charges.
The cost of a single orphaned volume depends on its type and size. A 100 GB gp3 volume costs approximately $8 per month. That does not sound like much in isolation, but in environments where instances are created and terminated frequently, such as development and testing environments, CI/CD build fleets, or auto-scaling groups, orphaned volumes accumulate over time. It is not uncommon to find dozens of unattached volumes in an account that has been active for a year or more.
EBS snapshots follow a similar pattern but can be even harder to catch. Snapshots are incremental backups of EBS volumes, and teams often configure automated snapshot schedules using Amazon Data Lifecycle Manager (DLM) or custom Lambda functions as a backup strategy. This is good practice for data protection. The problem arises when snapshot retention policies are not configured, or when they are configured with overly generous retention periods. Without a retention policy, every snapshot that is created is retained indefinitely, and each one incurs storage charges based on the amount of changed data it contains. Over months and years, the cumulative cost of forgotten snapshots can exceed the cost of the volumes they were meant to protect.
How to identify it: In the EC2 console, navigate to the Volumes section and filter by the "Available" state. Every volume listed as "Available" is not attached to any instance; in most accounts the majority of these are forgotten leftovers, though each one should be verified before deletion. For snapshots, sort by creation date and cross-reference against your retention requirements. Any snapshot older than your retention policy dictates is a candidate for deletion. AWS Trusted Advisor also includes a check for underutilized EBS volumes and can surface these findings automatically. You can also run a quick audit from the AWS CLI.
The following command lists all unattached EBS volumes in your current region:
aws ec2 describe-volumes --filters Name=status,Values=available --query "Volumes[].{ID:VolumeId,Size:Size,Type:VolumeType,Created:CreateTime}" --output table
For snapshots, a similar approach works. You can list all snapshots owned by your account and review their age and associated volume:
aws ec2 describe-snapshots --owner-ids self --query "Snapshots[].{ID:SnapshotId,VolumeId:VolumeId,Size:VolumeSize,Started:StartTime}" --output table
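The same audit can be scripted with boto3. The 90-day retention window here is an assumption to adjust to your own policy, and the AWS calls are shown but commented out so the pure age check can be exercised on its own.

```python
# Sketch: flag snapshots older than a retention window (90 days is an
# assumed policy). The boto3 calls mirror the CLI commands above.
from datetime import datetime, timedelta, timezone
from typing import Optional

def older_than(started_at: datetime, days: int,
               now: Optional[datetime] = None) -> bool:
    """True if a snapshot's StartTime is past the retention window."""
    now = now or datetime.now(timezone.utc)
    return now - started_at > timedelta(days=days)

# import boto3
# ec2 = boto3.client("ec2")
# snaps = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
# stale = [s["SnapshotId"] for s in snaps if older_than(s["StartTime"], 90)]

# Offline check against a fixed clock:
fixed_now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(older_than(datetime(2024, 1, 1, tzinfo=timezone.utc), 90, fixed_now))  # True
print(older_than(datetime(2024, 5, 1, tzinfo=timezone.utc), 90, fixed_now))  # False
```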
How to fix it: For orphaned volumes, the fix is simply deletion after verifying that no critical data resides on them. If you are uncertain, you can create a final snapshot of the volume before deleting it, and then set a lifecycle policy on that snapshot to expire it after a defined period. For snapshots, the recommended approach is configuring Amazon Data Lifecycle Manager with explicit retention rules.
DLM allows you to define automated snapshot policies that specify how many snapshots to retain (for example, keep the last 7 daily snapshots and the last 4 weekly snapshots) and automatically deletes older snapshots when the retention limit is reached. This eliminates the manual overhead of snapshot cleanup and ensures that backup storage costs remain predictable over time. Going forward, it is also worth reviewing the "Delete on Termination" attribute for EBS volumes in your launch templates and AMIs.
Setting this attribute to true for non-persistent data volumes ensures that future instance terminations do not leave behind orphaned storage. For volumes that contain data you need to retain, a better pattern is snapshotting the volume before termination (using a lifecycle hook in an Auto Scaling Group, for example) and then letting DLM manage the snapshot retention.
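A DLM policy implementing the "keep the last 7 daily snapshots" example might be sketched like this with boto3. The execution role ARN and the Backup=daily target tag are hypothetical, and the AWS call is shown but not executed.

```python
# Sketch: a DLM policy that snapshots tagged volumes daily and retains
# the most recent 7 snapshots. Tag and role ARN are hypothetical.
policy_details = {
    "ResourceTypes": ["VOLUME"],
    "TargetTags": [{"Key": "Backup", "Value": "daily"}],  # hypothetical tag
    "Schedules": [{
        "Name": "daily-7",
        "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS",
                       "Times": ["03:00"]},
        "RetainRule": {"Count": 7},  # oldest deleted once the 8th exists
    }],
}

# import boto3
# boto3.client("dlm").create_lifecycle_policy(
#     ExecutionRoleArn="arn:aws:iam::123456789012:role/dlm-role",  # hypothetical
#     Description="Daily EBS snapshots, 7-day retention",
#     State="ENABLED",
#     PolicyDetails=policy_details,
# )
print(policy_details["Schedules"][0]["RetainRule"]["Count"])  # 7
```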
Building cost awareness into your operational rhythm
The three cost leaks described above share a common characteristic: none of them are the result of a single bad decision. They are the product of reasonable initial configurations that were never revisited as the environment grew and workloads changed. NAT Gateways are deployed because private subnets need internet access.
S3 data is stored in Standard because it is the default. EBS volumes are left behind because the termination behavior was not explicitly configured. The underlying lesson is that AWS cost optimization is not a one-time audit. It is an ongoing discipline that needs to be embedded into your operational practices.
A monthly review of Cost Explorer anomalies, a quarterly check of S3 storage distribution across classes, and a periodic sweep for orphaned resources will catch most waste before it compounds.
AWS Budgets, which is free to use, can be configured to send alerts when spending in a particular service or account exceeds a threshold, giving you early visibility into unexpected cost increases. The tools exist. The pricing data is transparent. What most teams lack is not information but the habit of looking.





