AWS Raised Prices 15%? No, It's More Complicated Than That
Unpacking the AWS EC2 Capacity Blocks pricing adjustment: why alarmist headlines miss the point about dynamic pricing in cloud computing.
Updated on 18 February 2026
An article published in early January in The Register claims AWS quietly raised prices by 15% on a Saturday. The headline is catchy. The reality is far less dramatic.
AWS adjusted pricing for EC2 Capacity Blocks for ML, a very specific reservation mechanism that concerns a tiny fraction of cloud workloads. On-demand, reserved instances, and spot pricing for these same GPU instances haven’t changed. Let’s unpack what actually happened and why alarmist headlines miss the essential point.
What Capacity Blocks actually are
To understand what Capacity Blocks are, we first need to position this mechanism within the EC2 purchasing options ecosystem. AWS offers several ways to pay for your instances, each suited to different needs. On-demand instances provide maximum flexibility at a fixed hourly price. Savings Plans reduce costs in exchange for a commitment to a consistent hourly spend over 1 or 3 years. Spot instances provide access to unused capacity at steep discounts but can be interrupted at any time. Dedicated Hosts reserve an entire physical server, typically to meet software licensing constraints.
Capacity Blocks for ML sit in a category of their own. They’re not a purchasing option like the others, but a capacity reservation mechanism specifically designed for machine learning workloads. The table below compares the main EC2 purchasing options to better understand where Capacity Blocks fit in.
| Purchasing Option | Price | Availability | Use Case |
|---|---|---|---|
| On-Demand | Fixed | Immediate (if capacity) | Unpredictable workloads |
| Savings Plans | Fixed (reduced) | 1-3 year commitment | Stable workloads, instance flexibility |
| Reserved Instances | Fixed (reduced) | 1-3 year commitment | Stable workloads, specific instance types |
| Spot Instances | Variable | Not guaranteed (interruptible) | Interruption-tolerant workloads |
| Dedicated Hosts | Fixed | Dedicated physical server | Existing software licenses |
Capacity Reservations
In addition to the purchasing options above, AWS offers two capacity reservation mechanisms:
- On-Demand Capacity Reservations: Reserve capacity in a specific Availability Zone
- Capacity Blocks for ML: Reserve a cluster of GPU instances for ML training (variable pricing)
Capacity Blocks let you reserve GPU instances in advance for machine learning model training. You specify the number of instances (up to 64), the duration (from a few days to several weeks), and the desired start date. AWS then shows up to three available time slots with their respective prices. You pay upfront, and your capacity is guaranteed when the reservation starts.
The distinctive feature of Capacity Blocks is that instances are automatically placed in the same AWS cluster (EC2 UltraCluster) to minimize network latency between them. This physical proximity is crucial for distributed training requiring constant exchanges between instances. Training a language model on 32 H200 instances generates terabytes of data exchanged between GPUs. Every millisecond of latency multiplies across millions of iterations.
This service targets ML teams running scheduled training jobs who cannot afford to have their work interrupted because spot capacity disappeared. Budgets are measured in tens or hundreds of thousands of euros per training run. This is not a consumer service.
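To make these orders of magnitude concrete, here is a back-of-the-envelope calculation. The hourly rate below is a hypothetical placeholder, not a published AWS price: actual Capacity Block rates are dynamic and only shown at purchase time.

```python
# Back-of-the-envelope cost of a multi-instance GPU training reservation.
# HOURLY_RATE is a hypothetical placeholder, NOT a published AWS price:
# Capacity Block rates are dynamic and only shown at purchase time.

HOURLY_RATE = 35.0   # assumed price per instance-hour, in euros
INSTANCES = 32       # cluster size for the training job
DAYS = 7             # reservation length

def reservation_cost(rate_per_hour: float, instances: int, days: int) -> float:
    """Total upfront cost: rate x instance count x hours in the reservation."""
    return rate_per_hour * instances * days * 24

total = reservation_cost(HOURLY_RATE, INSTANCES, DAYS)
print(f"{total:,.0f} euros")  # 188,160 euros for one week on 32 instances
```

Even with a modest assumed hourly rate, a one-week run on 32 instances lands squarely in the "tens or hundreds of thousands of euros" range mentioned above.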
Dynamic pricing announced from the start
Here’s what the Capacity Blocks pricing page stated from the service launch in 2023: “EC2 Capacity Block prices are dynamic and depend on total available supply and demand at the time you purchase the EC2 Capacity Block.” This statement still appears in official AWS documentation.
Dynamic pricing is not new. Spot instances have operated on this principle for over ten years: prices vary in real time based on supply and demand, with swings that can reach several hundred percent depending on the period. Nobody headlines “AWS increases spot prices” when those rates fluctuate. Capacity Blocks follow a similar logic, with less frequent but equally predictable adjustments.
The goal of this variable pricing is to incentivize customers to reserve during low-demand periods. If everyone wants H200s in January to close projects before fiscal quarter end, prices rise. If you can shift your training to March, you pay less. This is a mechanism for allocating scarce resources, not a disguised price increase.
What hasn’t changed
The p5e.48xlarge and p5en.48xlarge instances remain available on-demand at the same rates as before. Savings Plans (1 or 3-year commitments) haven’t moved. Spot instances continue to fluctuate according to their own dynamics. If you use these GPU instances without going through Capacity Blocks, your bill is strictly identical.
For the vast majority of companies using AWS, this change is invisible. Standard compute instances (t3, m5, c5, r5), RDS databases, S3 buckets, Lambda functions, and all other services maintain their usual pricing. Even for ML workloads, only customers explicitly reserving Capacity Blocks for ML are affected.
Why GPUs are under pressure
Nvidia H100 and H200 GPUs have become the most coveted cloud resource. The numbers tell the story: the AI datacenter GPU market grew from $10.5 billion in 2025 to a projected $12.8 billion in 2026, with 22% annual growth. Nvidia announced its Blackwell chips are sold out through mid-2026, with a backlog of 3.6 million units. For H200 specifically, Chinese demand alone reaches 2 million chips while Nvidia holds only 700,000 units in inventory.
Every hyperscaler (AWS, Azure, GCP) fights to secure allocations from Nvidia. Demand has exploded with the widespread adoption of language models and generative AI applications, and supply doesn’t keep pace. Companies wanting to build frontier AI models must wait as long as 18 months or settle for previous generations.
In this context, AWS adjusts Capacity Block prices to reflect pressure on these specific GPU resources. This is a rational response to a documented supply-demand imbalance. Customers who absolutely need guaranteed capacity at a specific date pay a premium. Those who can be flexible or use spot pay less.
This situation is not unique to AWS. Azure and GCP face the same supply constraints. All three clouds have waiting lists for certain GPU configurations. The difference is that AWS chose a transparent pricing mechanism rather than opaque rationing.
The real issue: cloud cost predictability
The Register article raises a legitimate question beneath its sensationalist headline: what happens if AWS starts regularly increasing prices? For twenty years, cloud computing conditioned companies to expect continuous price decreases. That era may be ending.
Energy costs are rising. Electronic components no longer follow Moore’s law at the same pace. Massive datacenter investments for AI create margin pressure. It would be naive to think cloud prices can only decrease forever.
For companies, this means actively monitoring cloud costs rather than counting on automatic decreases. Tools like AWS Cost Anomaly Detection, budgets with alerts, and regular architecture reviews become essential. We support our clients in this continuous optimization approach, particularly through Well-Architected audits that systematically include the cost optimization pillar.
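As a toy illustration of the idea behind cost anomaly alerting, a day whose spend deviates sharply from the recent average can be flagged with a simple z-score. This is not AWS’s actual algorithm (Cost Anomaly Detection uses its own machine learning models on your billing data); it only sketches the concept.

```python
from statistics import mean, stdev

def is_spend_anomaly(daily_spend: list[float], threshold: float = 3.0) -> bool:
    """Flag the latest day if it deviates from the trailing history
    by more than `threshold` standard deviations (a simple z-score).
    Toy sketch only: AWS Cost Anomaly Detection uses its own ML models."""
    *history, today = daily_spend
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

# A week of flat spend followed by a GPU-sized spike:
print(is_spend_anomaly([120, 118, 122, 119, 121, 120, 480]))  # True
print(is_spend_anomaly([120, 118, 122, 119, 121, 120, 121]))  # False
```

The managed service does this continuously across your accounts and services, and can notify you by email or SNS before the spike turns into an invoice surprise.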
What to remember
AWS did not increase prices broadly. The adjustment concerns a niche service with dynamic pricing announced from the start. GPU instances remain available at the same rates for on-demand and reserved. Capacity Blocks remain cheaper than on-demand despite the adjustment.
The real lesson lies elsewhere: cloud resources are not infinite and their prices won’t decrease indefinitely. GPUs are under pressure and will remain so as long as AI demand continues to explode. Companies that actively optimize their architectures and usage will be better positioned than those counting on automatic price decreases.
If you run intensive ML workloads on AWS or want to understand how to optimize your cloud costs, contact us for a free infrastructure audit. We’ll help you identify optimization levers adapted to your context: straight talk about what actually works, with no corporate fluff.
Frequently asked questions
- Did AWS really increase prices by 15%?
- No. AWS adjusted pricing for EC2 Capacity Blocks, a very specific reservation mechanism for ML workloads. On-demand, reserved instances, and spot pricing remain unchanged.
- What is an EC2 Capacity Block?
- A Capacity Block allows you to reserve GPU instances in advance, grouped in the same datacenter for ML training requiring low latency between instances. It's a niche service for very specific needs.
- Are Capacity Block prices fixed?
- No, and this was announced from day one. Prices vary based on supply and demand, exactly like spot instances. The goal is to incentivize usage during low-demand periods.
- Should I worry about my AWS bill?
- If you don't use Capacity Blocks for ML (which applies to over 95% of companies), this change doesn't affect you. Your regular EC2 instances maintain their usual pricing.
Related Articles
AWS Lambda: 10 concrete use cases to automate your business
Discover 10 practical AWS Lambda use cases to automate your business processes without managing servers.
The Well-Architected Framework explained for business leaders
Understanding the 6 pillars of the AWS Well-Architected Framework to make informed decisions about your cloud infrastructure.
Amazon CloudWatch: monitor your AWS infrastructure effectively
Practical guide to configuring Amazon CloudWatch: metrics, alarms, dashboards and logs to keep control of your cloud infrastructure.
AWS Cost Anomaly Detection: automatically detect unusual spending
How AWS Cost Anomaly Detection monitors your cloud spending and alerts you to abnormal consumption before the bill explodes.
Serverless on AWS: why SMBs are adopting it
How serverless architecture on AWS helps SMBs reduce infrastructure costs and focus on their core business.
Disaster recovery on AWS: strategies for SMBs
How to design a disaster recovery plan on AWS adapted to your budget and availability requirements.