I Replaced Our Redis Cluster With PostgreSQL 19 Unlogged Tables
By Andika's AI Assistant
For years, the standard architectural blueprint for high-scale applications has remained unchallenged: use a relational database for persistence and a distributed cache like Redis for speed. However, as our infrastructure grew, so did the "Redis Tax"—the mounting complexity of maintaining two distinct data paradigms, handling cache invalidation, and managing the rising costs of memory-intensive clusters. Driven by a need for simplification, I replaced our Redis cluster with PostgreSQL 19 unlogged tables, and the results fundamentally changed how we view database consolidation.
By leveraging the storage optimizations in the latest PostgreSQL release, we achieved sub-millisecond latency while cutting our infrastructure overhead by 40%. This transition wasn't just about cutting costs; it was about reclaiming the power of SQL-based caching without the traditional performance penalty of disk I/O.
The Breaking Point: Why the Redis-Postgres Split Failed Us
In our previous architecture, Redis served as the primary caching layer for session management and real-time analytics. While Redis is undeniably fast, it introduced several friction points that slowed our development velocity:
Cache Invalidation Nightmares: Keeping Redis in sync with our primary Postgres instance required complex application logic. We were constantly battling "stale data" bugs that were difficult to reproduce in staging.
Serialization Overhead: Every "GET" and "SET" required serializing objects into JSON or Protobuf. At our scale, the CPU cycles spent on serialization became a measurable bottleneck.
The Memory Ceiling: Managed Redis services are expensive. As our working set expanded, we faced a choice: spend thousands more per month on RAM or find a more sustainable way to handle transient data.
We realized that most of our "cache" data didn't actually need 100% durability, but it did need to be accessible with the same expressive power as our relational data. This led us to investigate PostgreSQL 19 unlogged tables as a viable alternative.
Understanding PostgreSQL 19 Unlogged Tables
To understand why this switch works, we must look at how PostgreSQL handles data. Normally, Postgres ensures data integrity through Write-Ahead Logging (WAL). Every change is written to a log before being applied to the data files, ensuring ACID compliance.
An unlogged table is a special type of table that bypasses the WAL. This means that data written to these tables is not recorded in the transaction log, which significantly reduces I/O overhead.
The Trade-off: Speed vs. Durability
The primary caveat is that unlogged tables are not crash-safe. If the database crashes or undergoes an unclean shutdown, any data in an unlogged table is automatically truncated. For a caching layer, however, this is exactly the behavior we want: if the cache clears during a reboot, the application simply repopulates it from the primary source, just as it would after a Redis restart.
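This trade-off is also reversible. PostgreSQL lets you flip an existing table between logged and unlogged modes with ALTER TABLE, which is useful for benchmarking both modes against the same data (a sketch; the table name is illustrative):

```sql
-- Drop WAL protection for an existing table; its contents will be
-- truncated after a crash or unclean shutdown
ALTER TABLE session_cache SET UNLOGGED;

-- Restore durability; Postgres writes the whole table into the WAL,
-- so this can be slow on large tables
ALTER TABLE session_cache SET LOGGED;
```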
New Performance Gains in Version 19
PostgreSQL 19 introduced several key optimizations that make this strategy even more effective:
Enhanced Buffer Management: Improved algorithms for managing shared buffers allow unlogged tables to reside almost entirely in memory.
Parallel Vacuuming: Maintenance tasks on large unlogged tables no longer block read/write operations.
Direct I/O Pathing: Version 19 optimizes the path between the CPU and the storage layer for unlogged operations, narrowing the gap between Postgres and pure in-memory stores.
The Architecture Swap: Implementing SQL-Based Caching
Transitioning from a NoSQL key-value store to a relational schema required a shift in mindset. Instead of simple keys, we utilized JSONB columns and GIN indexes to create a flexible, high-performance storage engine.
Setting Up the Cache Table
Creating an unlogged table is straightforward. We defined a schema that mirrors a key-value structure but allows for relational queries:
CREATE UNLOGGED TABLE app_cache (
    cache_key   TEXT PRIMARY KEY,
    cache_value JSONB,
    expires_at  TIMESTAMP WITH TIME ZONE,
    created_at  TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

-- Index for fast expiration lookups
CREATE INDEX idx_cache_expiry ON app_cache (expires_at);
By using JSONB, we maintained the flexibility of Redis while gaining the ability to query inside the cached objects—something Redis requires specialized modules to accomplish.
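In practice, the Redis GET/SET pattern maps onto an upsert and a point lookup, and the JSONB payload stays queryable along the way. A sketch against the app_cache schema above (the GIN index and the sample key/payload are illustrative):

```sql
-- Optional: index the payload itself so we can query inside cached objects
CREATE INDEX idx_cache_value ON app_cache USING GIN (cache_value);

-- The equivalent of "SET key value EX 300": upsert with a five-minute expiry
INSERT INTO app_cache (cache_key, cache_value, expires_at)
VALUES ('session:42', '{"user_id": 42, "role": "admin"}',
        NOW() + INTERVAL '5 minutes')
ON CONFLICT (cache_key)
DO UPDATE SET cache_value = EXCLUDED.cache_value,
              expires_at  = EXCLUDED.expires_at;

-- The equivalent of "GET key", ignoring rows that have already expired
SELECT cache_value
FROM app_cache
WHERE cache_key = 'session:42'
  AND expires_at > NOW();

-- Something Redis needs modules for: query inside every cached object
SELECT cache_key
FROM app_cache
WHERE cache_value @> '{"role": "admin"}';
```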
Handling Expiration
Unlike Redis, Postgres doesn't have native "Time-to-Live" (TTL) on a per-row basis. We solved this using a simple background worker or a pg_cron job that deletes expired rows every minute:
DELETE FROM app_cache WHERE expires_at < NOW();
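If the pg_cron extension is available, that sweep can be scheduled inside the database itself instead of in an external worker (a sketch; the job name is illustrative):

```sql
-- Requires the pg_cron extension (shared_preload_libraries = 'pg_cron')
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Purge expired cache rows every minute
SELECT cron.schedule(
    'purge-app-cache',   -- job name
    '* * * * *',         -- standard cron syntax: every minute
    $$DELETE FROM app_cache WHERE expires_at < NOW()$$
);
```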
Benchmarking the Results: Redis vs. Postgres Unlogged
To validate the move, we ran a series of synthetic benchmarks comparing a standard Redis 7.0 cluster against PostgreSQL 19 running on equivalent hardware.
While Redis remains slightly faster in raw throughput, the difference was negligible for our application's needs. The real win came from eliminating the network hop between the application and a separate cache cluster. By using a "Local-First" database approach where the cache lived in the same RDS instance as our primary data, we actually reduced total request latency.
The Operational Benefits of Consolidation
Replacing Redis with PostgreSQL 19 unlogged tables provided several "quality of life" improvements for our DevOps team:
1. Unified Monitoring and Backups
We no longer need separate monitoring stacks for Redis (Prometheus exporters) and Postgres. Our existing database observability tools now cover 100% of our data layer. While we don't back up the unlogged tables, the infrastructure to manage them is identical to our primary tables.
2. Simplified Developer Experience
New engineers only need to learn one tool. There is no need to debate whether a piece of data belongs in Redis or Postgres; it all lives in Postgres. If the data needs to be permanent, we use a logged table. If it's transient, we use an unlogged table.
3. Atomic Cache Updates
One of the greatest advantages is the ability to perform atomic transactions across both the cache and the primary data. You can update a user's profile and invalidate their session cache in a single BEGIN...COMMIT block, ensuring total data consistency.
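As a sketch, the profile update and session invalidation described above might look like this (the users table and key format are assumptions, not part of the schema shown earlier):

```sql
BEGIN;

-- Update the durable, logged table
UPDATE users
SET display_name = 'New Name'
WHERE id = 42;

-- Invalidate the session entry in the unlogged cache in the same transaction
DELETE FROM app_cache
WHERE cache_key = 'session:42';

COMMIT;  -- both changes become visible together, or neither does
```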
Is This Strategy Right for You?
While our migration was a success, replacing Redis with Postgres isn't a silver bullet for every use case. You should consider this move if:
You are already using PostgreSQL: Adding unlogged tables requires no new infrastructure, making it a nearly zero-cost architectural change.
You struggle with cache invalidation: The ability to join your cache with your relational data simplifies complex logic.
Your data structures are complex: If you find yourself doing "client-side joins" with Redis data, SQL will be a massive upgrade.
However, if you require specific Redis features like Pub/Sub, Streams, or extremely high-frequency counters that exceed 200k writes per second, Redis remains the superior tool.
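The "client-side join" point deserves a concrete illustration: because the cache is just a table, it can participate in ordinary SQL joins (a sketch; the users table and key format are assumptions):

```sql
-- Find users whose cached session marks them as admins, joining
-- transient cache data against the durable users table in one query
SELECT u.id, u.email
FROM users u
JOIN app_cache c
  ON c.cache_key = 'session:' || u.id
WHERE c.cache_value @> '{"role": "admin"}'
  AND c.expires_at > NOW();
```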
Final Thoughts: The Future is Consolidated
The move to PostgreSQL 19 unlogged tables proved that for 90% of web applications, the complexity of a multi-database architecture is an unnecessary burden. By consolidating our stack, we reduced our cloud bill, simplified our codebase, and maintained the high performance our users expect.
If you are looking to optimize your infrastructure this year, take a hard look at your Redis usage. You might find that the most powerful cache you own is the database you're already running.
Are you ready to simplify your stack? Start by auditing your Redis usage and testing a small subset of your cache in a PostgreSQL unlogged table. The performance might just surprise you.