TL;DR

A CIDR'26 analysis of cloud hardware from 2015–2025 finds that network bandwidth per dollar improved roughly 10x, while CPU and DRAM cost-performance gains were modest. Most strikingly, NVMe SSD performance in the cloud has largely stalled since 2016, even as on-prem NVMe advanced.

What happened

The researchers behind the Cloudspecs paper examined cloud hardware trends between 2015 and 2025, focusing on AWS and comparing it with other clouds and on-prem servers. They normalized common benchmarks by price to track how cost-performance evolved.

On the compute side, core counts in the cloud have grown rapidly (AWS offers instances with up to 448 cores), but cost-normalized CPU performance improved only modestly: SPECint rose roughly 3x over the decade with a large contribution from AWS Graviton, and roughly 2x without it. DRAM capacity per dollar has largely flatlined, with the 2016 introduction of memory-optimized instances delivering a one-time boost.

Networking stands out: bandwidth per dollar rose about 10x, and absolute instance network speeds went from about 10 Gbit/s to 600 Gbit/s. In contrast, NVMe-backed instance I/O and capacity have shown little improvement since their introduction, even as on-prem NVMe advanced.
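
To make the price normalization concrete, here is a minimal Python sketch; the function name and all numbers are illustrative inventions, not figures from the paper.

    # Minimal sketch of price-normalized performance. All numbers are
    # hypothetical; they merely reproduce the ~3x SPECint gain cited above.
    def perf_per_dollar(benchmark_score: float, hourly_price_usd: float) -> float:
        """Benchmark score per dollar of instance time (score per $/hour)."""
        return benchmark_score / hourly_price_usd

    score_2015, price_2015 = 100.0, 0.50   # hypothetical 2015-era instance
    score_2025, price_2025 = 600.0, 1.00   # hypothetical 2025-era instance

    gain = perf_per_dollar(score_2025, price_2025) / perf_per_dollar(score_2015, price_2015)
    print(f"cost-performance improvement: {gain:.1f}x")  # prints: 3.0x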

Why it matters

  • Cost-performance improvements are uneven: networking and specialization now drive the biggest gains while general-purpose CPU and DRAM lag.
  • Stagnant NVMe in the cloud may shift storage architecture choices toward disaggregated or network-native models.
  • Software and system design face new pressure to exploit specialized hardware (e.g., networking, accelerators) rather than relying on uniform hardware scaling.
  • Procurement and cloud/on-prem tradeoffs change as some on-prem components (NVMe) outpace cloud offerings in price/performance.

Key facts

  • Study timeframe: 2015–2025; main focus: AWS with comparisons to other clouds and on-prem hardware (CIDR'26).
  • Maximum cloud core counts rose by roughly an order of magnitude; the largest cited AWS instance reaches 448 cores.
  • SPECint cost-performance improved about 3x over ten years; without AWS Graviton the gain is roughly 2x.
  • In-memory database benchmarks showed only ~2x–2.5x cost-performance gains, likely constrained by memory/cache factors.
  • On-prem AMD server CPUs delivered similarly slow gains: about 1.7x improvement from 2017 to 2025.
  • DRAM: single-socket bandwidth rose from ~93 GiB/s to ~492 GiB/s (DDR3→DDR5), but cost-normalized DRAM gains were only ~2x; DRAM capacity per dollar largely flatlined.
  • Network bandwidth per dollar improved ~10x; absolute instance network speeds increased from ~10 Gbit/s to ~600 Gbit/s (60x).
  • NVMe: cloud SSD throughput has stagnated since ~2016 and capacity since ~2019; the 2016 i3 family still offers the best I/O per dollar, by nearly 2x.
  • On-prem NVMe improved with two rounds of platform upgrades (PCIe 4 and PCIe 5), widening the gap vs. cloud NVMe.
  • The authors provide an interactive Cloudspecs tool built on DuckDB-WASM for reproducible queries and charts (a query sketch follows this list).
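
The Cloudspecs tool itself runs DuckDB-WASM in the browser; as a rough analogue, the sketch below uses the duckdb Python package with a hypothetical table and columns (instances, net_gbit, and price_usd_hour are made up for illustration) to show what a reproducible bandwidth-per-dollar query could look like.

    # Hypothetical schema and numbers; this is not the paper's dataset.
    import duckdb

    con = duckdb.connect()  # in-memory database
    con.execute("""CREATE TABLE instances
                   (family TEXT, year INT, net_gbit DOUBLE, price_usd_hour DOUBLE)""")
    con.execute("""INSERT INTO instances VALUES
                   ('gen2015', 2015,  10.0, 0.50),
                   ('gen2023', 2023, 200.0, 1.60)""")

    # Normalize network bandwidth by hourly price, as the paper does for benchmarks.
    print(con.sql("""
        SELECT family, year, net_gbit / price_usd_hour AS gbit_per_dollar_hour
        FROM instances
        ORDER BY year
    """).fetchall())
    # -> [('gen2015', 2015, 20.0), ('gen2023', 2023, 125.0)]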

What to watch next

  • Adoption of disaggregated or network-native storage as cloud NVMe lags and networks get faster (paper speculates architectures may shift).
  • Whether cloud NVMe price/performance catches up to on-prem PCIe 4/5 advances or remains behind (not confirmed in the source).
  • Efforts to make software better exploit high core counts: if parallel-programming and synchronization bottlenecks are reduced, core-count growth may yield more benefit (see the Amdahl's-law sketch after this list).
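
On that last point, a back-of-envelope Amdahl's-law calculation (an illustration added here, not from the paper) shows why an instance with 448 cores rarely delivers anywhere near 448x:

    # Amdahl's law: overall speedup is capped by the serial fraction of the work.
    def amdahl_speedup(serial_fraction: float, cores: int) -> float:
        """Upper bound on speedup when serial_fraction cannot be parallelized."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    for s in (0.01, 0.05, 0.10):
        print(f"serial fraction {s:.0%}: at most {amdahl_speedup(s, 448):.0f}x on 448 cores")
    # -> about 82x, 19x, and 10x respectively, far below the 448x ideal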

Quick glossary

  • NVMe: A storage protocol designed for high-performance solid-state drives that attaches directly to the PCIe bus to reduce latency and increase throughput.
  • DRAM: Dynamic random-access memory, the main type of volatile memory used for working data in servers and personal computers.
  • SPECint: A benchmark suite that measures integer compute performance of CPUs and is commonly used to compare relative processor performance.
  • Disaggregated storage: An architecture that separates compute from storage resources, allowing storage to be provisioned and scaled independently over a network.
  • Graviton: AWS's Arm-based processor family; the paper credits it with a large share of the measured CPU cost-performance gains.

Reader FAQ

Did network performance improve in the cloud?
Yes—network bandwidth per dollar rose about 10x and absolute instance speeds increased from ~10 Gbit/s to ~600 Gbit/s.

Is CPU and DRAM cost-performance still improving at a Moore's Law pace?
No—the paper finds only modest cost-normalized improvements for CPUs and largely flat DRAM capacity per dollar over the decade.

Is cloud NVMe keeping pace with on-prem NVMe?
No—the paper reports cloud NVMe I/O and capacity stagnated while on-prem NVMe advanced with PCIe 4 and 5, widening the gap.

Should teams switch to disaggregated storage now?
The paper notes the trend and speculates that architectures may shift, but it makes no deployment recommendations; specific guidance is not given in the source.

Why don't high core counts translate to proportional performance?
The paper lists several possibilities—memory bandwidth limits, core-to-memory balance, configuration mismatches, and software scalability issues—but a definitive cause is not confirmed in the source.

Sources

  • Cloudspecs: Cloud Hardware Evolution Through the Looking Glass (CIDR'26) – January 09, 2026.