Use Cache-Aside for read-heavy workloads: on cache miss, fetch from the database and populate the cache; on write, invalidate or update the cache explicitly.
The application treats the cache as an auxiliary data store, managing it explicitly rather than having the cache sit transparently between the application and the database.
When the application needs data, it follows this sequence: check the cache first; on a hit, return the cached value; on a miss, query the database, write the result into the cache, and return it.
The key insight is that the application is responsible for populating the cache. Redis never automatically fetches data from the database.
A typical cache-aside flow uses these commands:
GET user:123
If this returns nil (cache miss), the application queries the database and then:
SET user:123 "{...user data...}" EX 3600
The EX 3600 sets a one-hour expiration, ensuring stale data eventually expires.
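The read path above can be sketched in application code. This is a minimal, self-contained sketch that uses in-memory dictionaries as stand-ins for Redis and the database (the key, user record, and helper names are illustrative, not from any particular client library); in production the cache calls would go through a Redis client such as redis-py.

```python
import json
import time

# In-memory stand-ins so the sketch runs without external services.
cache = {}                                  # key -> (json_value, expires_at)
db = {"user:123": {"id": 123, "name": "Ada"}}  # hypothetical user row

TTL_SECONDS = 3600  # mirrors EX 3600 above

def cache_get(key):
    entry = cache.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.time() >= expires_at:           # expired entry counts as a miss
        del cache[key]
        return None
    return value

def cache_set(key, value, ttl):
    cache[key] = (value, time.time() + ttl)

def get_user(key):
    cached = cache_get(key)                 # 1. try the cache first (GET)
    if cached is not None:
        return json.loads(cached)           # cache hit: no database query
    row = db.get(key)                       # 2. cache miss: query the database
    if row is not None:
        cache_set(key, json.dumps(row), TTL_SECONDS)  # 3. populate (SET ... EX)
    return row
```

The first call to `get_user("user:123")` misses and populates the cache; subsequent calls within the TTL are served from the cache without touching the database.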
Resilience to cache failure: If Redis becomes unavailable, the application degrades gracefully by falling back to the database. Latency increases, but the system remains functional.
Memory efficiency: Only data that is actually requested gets cached. This prevents "cold storage" where unused data occupies valuable cache memory.
Simplicity: The pattern is straightforward to implement and reason about.
The primary disadvantage is the potential for stale data. Consider this scenario:
The application caches user:123. Another process then updates that user's row in the database without touching the cache. A subsequent read returns user:123 from cache (cache hit), so the cache now contains outdated information.
The standard mitigation is cache invalidation on write. When the application successfully updates data in the database, it immediately deletes the corresponding cache key:
DEL user:123
The next read will miss the cache and fetch fresh data from the database. This is simpler and safer than trying to update the cache, which risks race conditions.
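The write path with invalidation can be sketched as follows. Again this is a self-contained sketch using in-memory dictionaries in place of Redis and the database, with hypothetical keys and payloads; the important property is the ordering, write to the database first, then delete the cache key.

```python
# In-memory stand-ins so the sketch runs without a Redis server.
db = {"user:123": '{"name": "Ada"}'}
cache = {"user:123": '{"name": "Ada"}'}  # previously populated by a read

def update_user(key, payload):
    db[key] = payload        # 1. commit the write to the database first
    cache.pop(key, None)     # 2. then delete the cache key (DEL user:123)

update_user("user:123", '{"name": "Grace"}')
# The next read misses the cache and fetches the fresh row from the database.
```

Deleting rather than updating the key keeps the write path simple: the cache is repopulated lazily by the next read, which avoids the race conditions that concurrent cache writes can introduce.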
This pattern is ideal when: the workload is read-heavy, the application can tolerate briefly stale data, and graceful degradation during cache outages matters.
Consider other patterns when: writes are frequent, reads require strongly consistent data, or you want the caching layer to manage database access transparently (as in read-through or write-through caching).