Redis 9.0 Tiered Storage Topples RocksDB in Throughput Benchmarks
For years, system architects have faced a grueling trade-off: the blazing speed of in-memory performance or the massive capacity of disk-based storage. As datasets balloon into the terabyte range, the cost of keeping everything in DRAM becomes prohibitive. While RocksDB has long been the gold standard for persistent key-value storage, a new contender has disrupted the hierarchy. Recent benchmark results show Redis 9.0 Tiered Storage overtaking RocksDB on throughput, suggesting that you no longer have to sacrifice performance for scale.
The Evolution of In-Memory Storage
The traditional database landscape categorized Redis as a pure in-memory cache and RocksDB as the go-to engine for flash-based storage. However, as NVMe SSDs narrowed the latency gap between disk and RAM, the architectural lines began to blur.
Redis 9.0 introduces a native Tiered Storage feature that lets the engine manage data intelligently across different hardware layers. By keeping "hot" data in DRAM and migrating "warm" data to local NVMe storage, Redis 9.0 achieves a hybrid operating mode that was previously the sole domain of specialized storage engines. This shift isn't just about saving money on RAM; it raises the throughput ceiling achievable on modern cloud infrastructure.
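The economics behind the DRAM/NVMe split can be made concrete with a back-of-the-envelope calculation. The sketch below compares an all-DRAM deployment against a tiered one; the per-GB prices and the 10% hot fraction are illustrative assumptions, not figures from Redis documentation or any benchmark.

```python
# Back-of-the-envelope cost comparison: all-DRAM vs. tiered DRAM + NVMe.
# All dollar figures and the hot-data fraction are assumed for illustration.
DATASET_GB = 1024          # 1 TB total dataset
HOT_FRACTION = 0.10        # assumed share of data that must stay in DRAM
DRAM_PER_GB = 3.00         # assumed $/GB for DRAM
NVME_PER_GB = 0.10         # assumed $/GB for NVMe flash

# Cost of holding the entire dataset in memory
all_dram = DATASET_GB * DRAM_PER_GB

# Cost when only hot data stays in DRAM and warm data lives on NVMe
tiered = (DATASET_GB * HOT_FRACTION * DRAM_PER_GB
          + DATASET_GB * (1 - HOT_FRACTION) * NVME_PER_GB)

print(f"All-DRAM: ${all_dram:,.0f}")
print(f"Tiered:   ${tiered:,.0f}")
```

Under these assumed prices the tiered layout costs roughly an eighth of the all-DRAM one, which is the motivation the paragraph above describes.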
How Redis 9.0 Tiered Storage Works
At its core, the Tiered Storage feature in Redis 9.0 relies on a Least Recently Used (LRU) policy that operates across storage tiers rather than only within memory: instead of discarding cold entries, it demotes them to the NVMe layer. Unlike previous iterations or community modules, this is integrated into the core engine, allowing for seamless data movement.
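The tier-aware LRU idea can be illustrated with a minimal sketch. This is not Redis internals, just a toy two-tier cache: a bounded in-memory map plays the DRAM tier, a plain dict stands in for NVMe-backed storage, and the least recently used entry is demoted rather than evicted.

```python
# Illustrative sketch of tier-aware LRU demotion (not actual Redis 9.0 code).
from collections import OrderedDict

class TieredLRU:
    def __init__(self, hot_capacity):
        self.hot = OrderedDict()   # "DRAM" tier, kept in LRU order
        self.warm = {}             # stand-in for the NVMe tier
        self.hot_capacity = hot_capacity

    def put(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)  # mark as most recently used
        if len(self.hot) > self.hot_capacity:
            # Demote the least recently used entry instead of discarding it.
            cold_key, cold_val = self.hot.popitem(last=False)
            self.warm[cold_key] = cold_val

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)
            return self.hot[key]
        if key in self.warm:
            # Promote on access: warm data that becomes hot moves back up.
            value = self.warm.pop(key)
            self.put(key, value)
            return value
        return None
```

A quick run shows the behavior: with a hot capacity of two, inserting a third key demotes the oldest one to the warm tier, and reading it again promotes it back into memory.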
The Hot and Warm Data Paradigm
In a typical high-scale application, only a fraction of the total dataset is accessed frequently. Redis 9.0 exploits this temporal locality by:

