Detect and handle torn writes across multiple Redis instances using transaction stamps, version tokens, and commit markers when atomic multi-key operations aren't possible.
These patterns work with any multi-primary Redis setup: Redis Cluster shards, client-side sharding across independent Redis instances, or even different Redis deployments entirely. The key insight is that consistency detection happens at the application layer through shared tokens embedded in the values themselves—no coordination between Redis instances is required.
When updating related keys A and B on different shards, the write to A can succeed while the write to B fails (network error, timeout, client crash). Now A has new data, B has old data, and readers see inconsistent state.
Generate a unique token for each logical transaction. Include it in all related writes.
```
# Generate unique token (UUID, ULID, or random bytes)
token = generate_random_token()

SET user:123:profile "{token}:{profile_json}"
SET user:123:settings "{token}:{settings_json}"
```

On read, fetch both values and compare their tokens:

```
profile = GET user:123:profile
settings = GET user:123:settings

profile_token = parse_token(profile)
settings_token = parse_token(settings)

if profile_token == settings_token:
    # Consistent: both values come from the same logical write
    return parse_payload(profile), parse_payload(settings)
else:
    # Torn write detected - handle the inconsistency
    trigger_repair()
```
This pattern is particularly valuable with client-side sharding where you have independent Redis primaries with no cluster coordination:
Setup: Two independent Redis instances (host1:6379 and host2:6379) with a client connecting to both.
The client:
1. Generates token
2. Writes to Instance A: SET user:profile "{token}:{data}"
3. Writes to Instance B: SET user:settings "{token}:{data}"
4. On read, fetches both and compares tokens
No clustering, no replication between instances—consistency is enforced entirely by the application through token validation.
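The flow above can be sketched end to end. In this minimal sketch, two Python dicts stand in for the two independent instances, and `write_pair`/`read_pair` are hypothetical helper names (in production the dicts would be two client connections):

```python
import uuid

# Two dicts stand in for the two independent Redis instances.
instance_a = {}
instance_b = {}

def write_pair(profile_json, settings_json):
    """Stamp both writes with a single token per logical transaction."""
    token = uuid.uuid4().hex
    instance_a["user:123:profile"] = f"{token}:{profile_json}"
    instance_b["user:123:settings"] = f"{token}:{settings_json}"

def read_pair():
    """Return (profile, settings) if consistent, else None (torn write)."""
    profile = instance_a["user:123:profile"]
    settings = instance_b["user:123:settings"]
    profile_token, profile_payload = profile.split(":", 1)
    settings_token, settings_payload = settings.split(":", 1)
    if profile_token == settings_token:
        return profile_payload, settings_payload
    return None  # caller should trigger repair

write_pair('{"name":"ada"}', '{"theme":"dark"}')
assert read_pair() == ('{"name":"ada"}', '{"theme":"dark"}')

# Simulate a torn write: a new profile lands but the settings write is lost
instance_a["user:123:profile"] = uuid.uuid4().hex + ':{"name":"bob"}'
assert read_pair() is None
```

Note that `split(":", 1)` keeps any colons inside the payload intact; only the token prefix is peeled off.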
Add ordering information to detect inconsistency AND determine the newest version:
```
SET user:123:profile "{timestamp}:{token}:{payload}"
SET user:123:settings "{timestamp}:{token}:{payload}"
```
On mismatch, accept the value with the higher timestamp, or trigger a full refresh.
For stronger ordering guarantees, use a monotonic counter:
```
version = INCR user:123:version
SET user:123:profile "{version}:{payload}"
SET user:123:settings "{version}:{payload}"
```
Now you can always determine which write is newer.
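A minimal in-memory sketch of version stamps, assuming the counter lives on instance A (a hypothetical placement choice; any single location the writer can reach works):

```python
# Two dicts stand in for two Redis instances; the counter key lives on A.
instance_a = {"user:123:version": 0}
instance_b = {}

def write_pair(profile, settings):
    instance_a["user:123:version"] += 1            # INCR user:123:version
    version = instance_a["user:123:version"]
    instance_a["user:123:profile"] = f"{version}:{profile}"
    instance_b["user:123:settings"] = f"{version}:{settings}"

def read_pair():
    v_p, profile = instance_a["user:123:profile"].split(":", 1)
    v_s, settings = instance_b["user:123:settings"].split(":", 1)
    if v_p == v_s:
        return "consistent", (profile, settings)
    # Versions differ: the higher number always identifies the newer write
    return "torn", "profile" if int(v_p) > int(v_s) else "settings"

write_pair("p1", "s1")
write_pair("p2", "s2")
assert read_pair() == ("consistent", ("p2", "s2"))

instance_b["user:123:settings"] = "1:s1"   # simulate a lost settings write
assert read_pair() == ("torn", "profile")
```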
Write all data first, then write a commit marker as the final step:
```
# Phase 1: Write data (may partially fail)
SET txn:abc:A "{payload_A}"
SET txn:abc:B "{payload_B}"

# Phase 2: Mark as committed
SET txn:abc:committed "1" EX 3600
```
Readers only accept data whose transaction ID has a commit marker:
```
if EXISTS txn:abc:committed:
    a = GET txn:abc:A
    b = GET txn:abc:B
    # Safe to use
else:
    # Transaction incomplete, ignore or wait
```
This resembles two-phase commit, with the commit marker acting as the decision record.
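Both phases can be sketched in a few lines. A single dict stands in for Redis here, and `write_txn`/`read_txn` are hypothetical helpers (the data keys could equally live on different instances):

```python
store = {}  # stand-in for Redis

def write_txn(txn_id, payload_a, payload_b):
    # Phase 1: data writes (any of these may fail independently)
    store[f"txn:{txn_id}:A"] = payload_a
    store[f"txn:{txn_id}:B"] = payload_b
    # Phase 2: the commit marker is the last write - the decision record
    store[f"txn:{txn_id}:committed"] = "1"

def read_txn(txn_id):
    if f"txn:{txn_id}:committed" in store:    # EXISTS txn:<id>:committed
        return store[f"txn:{txn_id}:A"], store[f"txn:{txn_id}:B"]
    return None   # incomplete transactions stay invisible to readers

write_txn("abc", "payload_A", "payload_B")
assert read_txn("abc") == ("payload_A", "payload_B")

# A crash after phase 1 leaves data but no marker, so readers ignore it
store["txn:def:A"] = "partial"
assert read_txn("def") is None
```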
For event-sourced systems, append to per-key logs with shared tokens:
```
# Each logical transaction appends to both logs
RPUSH user:123:profile:log "{token}:{profile_change}"
RPUSH user:123:settings:log "{token}:{settings_change}"
```
To find the latest consistent state, scan each log from the tail and stop at the newest token that appears in every log.
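Assuming the `token:change` encoding above, a minimal sketch of the scan, with plain Python lists standing in for the Redis lists (RPUSH appends on the right):

```python
# Lists stand in for the per-key Redis logs.
profile_log = ["t1:p1", "t2:p2", "t3:p3"]
settings_log = ["t1:s1", "t2:s2"]    # the t3 settings append was lost

def latest_consistent_token(log_a, log_b):
    """Scan log_a from the tail; return the newest token also in log_b."""
    tokens_b = {entry.split(":", 1)[0] for entry in log_b}
    for entry in reversed(log_a):
        token = entry.split(":", 1)[0]
        if token in tokens_b:
            return token
    return None

assert latest_consistent_token(profile_log, settings_log) == "t2"
```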
Using sorted sets with timestamp scores:
```
ZADD user:123:events {timestamp} "{token}:profile:{payload}"
ZADD user:123:events {timestamp} "{token}:settings:{payload}"
```
Query from newest, grouping by token until you find one with all expected parts.
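The newest-first grouping scan can be sketched as follows; `(timestamp, member)` tuples stand in for the sorted set's score/member pairs, and `latest_complete` is a hypothetical name:

```python
# Tuples stand in for the sorted set's score/member model.
events = [
    (100, "t1:profile:p1"),
    (100, "t1:settings:s1"),
    (200, "t2:profile:p2"),   # t2's settings event never arrived
]
EXPECTED_PARTS = {"profile", "settings"}

def latest_complete(events):
    """Walk newest-first, grouping members by token, until some token
    has accumulated all expected parts."""
    groups = {}
    for _, member in sorted(events, key=lambda e: e[0], reverse=True):
        token, part, payload = member.split(":", 2)
        groups.setdefault(token, {})[part] = payload
        if set(groups[token]) == EXPECTED_PARTS:
            return token, groups[token]
    return None

assert latest_complete(events) == ("t1", {"profile": "p1", "settings": "s1"})
```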
When inconsistency is detected, you have several repair options. The simplest is to retry with backoff, since the write may still be in progress:
```
for attempt in range(3):
    data = fetch_both()
    if tokens_match(data):
        return data
    sleep(backoff * attempt)

trigger_repair()
```
If values have timestamps/versions, use the newest:
```
if profile_version > settings_version:
    regenerate_settings_from_profile()
else:
    regenerate_profile_from_settings()
```
Re-fetch from the authoritative source (database) and rewrite both:
```
canonical = fetch_from_database(user_id)
token = generate_token()

SET user:123:profile "{token}:{canonical.profile}"
SET user:123:settings "{token}:{canonical.settings}"
```
Return the most recent data immediately, trigger async repair:
```
queue_repair_job(user_id)
return best_effort_merge(profile, settings)
```
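Putting the strategies together, a hypothetical read path might retry briefly, then repair and re-read. `fetch_pair`, `tokens_match`, and `repair` are illustrative callables, not a fixed API:

```python
import time

def read_with_repair(fetch_pair, tokens_match, repair, retries=3, backoff=0.01):
    """Retry briefly in case the write is still in flight; then repair
    (e.g. refetch from the database and rewrite both keys) and re-read."""
    for attempt in range(retries):
        pair = fetch_pair()
        if tokens_match(*pair):
            return pair
        time.sleep(backoff * attempt)   # 0, backoff, 2*backoff, ...
    repair()
    return fetch_pair()

# Demo: start torn, let repair fix it
state = {"profile": "t1:p", "settings": "t0:s"}
def fetch_pair(): return (state["profile"], state["settings"])
def tokens_match(a, b): return a.split(":", 1)[0] == b.split(":", 1)[0]
def repair(): state["settings"] = "t1:s"

assert read_with_repair(fetch_pair, tokens_match, repair) == ("t1:p", "t1:s")
```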
Use transaction stamps when:

- Keys must live on different shards (no hash tag option)
- You can tolerate detection + repair instead of prevention
- Cross-system consistency is needed (Redis + database + cache)

Use commit markers when:

- You need clear transaction boundaries
- Incomplete transactions should be invisible to readers
- You can accept the overhead of an extra key per transaction

Use version stamps when:

- You need to determine which write is newest
- Last-write-wins semantics are acceptable
- You want automatic conflict resolution
| Approach | Guarantees | Overhead | Complexity |
|---|---|---|---|
| Hash tags (same slot) | Atomic | None | Low |
| Transaction stamps | Detect only | Per-value token | Medium |
| Commit markers | Detect + visibility | Extra key | Medium |
| Version stamps | Detect + ordering | Per-value version | Medium |
| External coordinator | Atomic | Network + latency | High |
When possible, use hash tags to co-locate keys (see Hash Tag Patterns). Use these detection patterns when co-location isn't feasible.