UtilsDaily

Redis Interview Prep

16 essential topics with syntax & real-world examples to ace your Redis interview.

🏗️ Core Concepts & Architecture
Think of it like this: Your brain stores things in two places β€” your short-term memory (what you're thinking about right now, super fast to access) and long-term memory (stored deeper, takes a moment to recall). Redis is like short-term memory for your app β€” everything lives in RAM so it's instantly available, no waiting.

Redis (Remote Dictionary Server) is an in-memory data structure store used as a database, cache, message broker, and streaming engine. Data lives in RAM, making reads/writes microseconds fast.

Why is Redis So Fast?

+--------------------------------------------------------+
|               Why Redis is ~100x Faster                |
+--------------------------------------------------------+
| Disk DB: Request -> Parse SQL -> Disk I/O -> Return    |
|          ######################## ~5ms                 |
|                                                        |
| Redis:   Request -> RAM lookup -> Return               |
|          #### ~0.05ms                                  |
+--------------------------------------------------------+
  • RAM storage β€” no disk I/O bottleneck
  • Single-threaded event loop β€” no lock contention between threads
  • Simple data structures β€” O(1) lookups for most operations
  • Non-blocking I/O multiplexing β€” handles thousands of connections via epoll/kqueue

Common Use Cases

Use Case                | How Redis Helps                           | Data Type
------------------------+-------------------------------------------+--------------
Session storage         | Store user sessions with TTL auto-expiry  | String / Hash
Caching                 | Cache DB query results, API responses     | String / Hash
Rate limiting           | Atomic INCR per user/IP per window        | String
Leaderboards            | Sorted Set for real-time rankings         | Sorted Set
Job queues              | LPUSH/BRPOP for task distribution         | List
Pub/Sub messaging       | Real-time notifications between services  | Pub/Sub
Unique visitor counting | HyperLogLog for approximate cardinality   | HyperLogLog
Event streaming         | Durable ordered event log                 | Stream

Redis Data Model

Every piece of data is stored as a key β†’ value pair. The key is always a string. The value can be one of Redis's built-in data types: String, List, Hash, Set, Sorted Set, Stream, HyperLogLog, Bitmap, or Geospatial index.

# Redis is a key-value store β€” every value has a key (name)
SET user:42:name "Alice"     # key = "user:42:name", value = "Alice"
GET user:42:name             # => "Alice"

# Check Redis version & info
INFO server
PING                         # => PONG (health check)

Single-Threaded Model

Redis uses one main thread to process commands. This sounds slow but isn't β€” because it avoids the overhead of thread synchronization and context switching. Since Redis is I/O-bound (not CPU-bound), one thread handles thousands of clients efficiently via multiplexed I/O.

Redis 6+ added I/O threads for reading/writing network data (not command execution), improving throughput on multi-core machines without breaking atomicity guarantees.

🔑 Keys, Commands & Naming Conventions
Think of it like this: A key is like a label on a jar. The jar (value) can contain anything β€” jam, coins, notes. The label must be unique so you can find exactly the right jar. A good label is descriptive: "user:42:cart" tells you it's for user 42's shopping cart.

Universal Key Commands

SET    key value          # Store a value
GET    key               # Retrieve a value
DEL    key [key ...]     # Delete one or more keys (returns count deleted)
EXISTS key [key ...]     # Returns number of keys that exist (not 0/1)
TYPE   key               # Returns: string | list | hash | set | zset | stream
RENAME key newkey        # Rename a key (fails if key doesn't exist)
COPY   key destkey       # Copy value to a new key (Redis 6.2+)

KEYS   pattern           # Find keys matching pattern β€” NEVER use in production!
SCAN   cursor [MATCH pattern] [COUNT count]  # Safe iterative key scan

# Key metadata
OBJECT ENCODING key      # Internal encoding (ziplist, listpack, hashtable…)
OBJECT REFCOUNT key      # Reference count
OBJECT IDLETIME key      # Seconds since last access
DEBUG OBJECT key         # Detailed internal info

Why SCAN, Not KEYS

KEYS * blocks Redis until it scans every key in the database β€” catastrophic on large datasets. SCAN returns a cursor and a batch, letting Redis serve other commands between iterations.

# Iterate all keys matching "user:*" in batches
SCAN 0 MATCH user:* COUNT 100
# Returns: [next_cursor, [key1, key2, ...]]
# Keep calling with next_cursor until cursor returns "0" (full cycle)
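The cursor loop above can be sketched in pure Python. This is a simplified stand-in (a plain offset cursor over a key list), not how Redis actually encodes its cursors, but it shows the client-side iteration contract: keep calling with the returned cursor until it comes back as 0.

```python
import fnmatch

def scan(keys, cursor, match="*", count=100):
    """Simulate Redis SCAN: return (next_cursor, batch).

    Real SCAN cursors walk the hash table in reverse-binary order; this
    sketch uses a plain offset just to demonstrate the client loop.
    """
    batch = [k for k in keys[cursor:cursor + count] if fnmatch.fnmatch(k, match)]
    next_cursor = cursor + count
    return (0 if next_cursor >= len(keys) else next_cursor), batch

keys = [f"user:{i}" for i in range(250)] + ["order:1", "order:2"]
cursor, found = 0, []
while True:
    cursor, batch = scan(keys, cursor, match="user:*", count=100)
    found.extend(batch)
    if cursor == 0:          # cursor 0 means a full cycle is complete
        break

print(len(found))  # -> 250 (the two order:* keys were filtered out)
```

Note that each call only blocks the server for one small batch, which is exactly why SCAN is production-safe where KEYS is not.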

Key Naming Conventions

Pattern: object-type:id:field
─────────────────────────────────────────────────────
user:42:profile         → User 42's profile hash
user:42:session         → User 42's session token
product:8:views         → View count for product 8
rate:login:192.168.1.1  → Login attempts from an IP
lock:checkout:order:99  → Distributed lock for order 99
cache:homepage:html     → Cached HTML for homepage
  • Use colons as separators β€” they display as namespaces in Redis GUI tools
  • Keep keys short but descriptive β€” keys are stored in memory too
  • Include the object type and ID to prevent collisions
  • Avoid spaces and special characters

Counting & Existence Patterns

EXISTS user:42           # => 1 (exists) or 0 (doesn't)
EXISTS user:1 user:2 user:3  # => 2 (if 2 of the 3 exist)

# Count all keys in the database (instant, uses internal counter)
DBSIZE

# Select a database (Redis has 0-15 by default)
SELECT 0    # default database
SELECT 1    # switch to db 1 (isolated namespace)

# Move a key to another database
MOVE key 1

# Flush all keys (DANGEROUS β€” wipes entire database)
FLUSHDB     # wipes current DB
FLUSHALL    # wipes ALL databases
πŸ“ Strings Data Types β–Ύ
Think of it like this: A String in Redis is like a sticky note. You can write anything on it β€” a name, a number, even a photo (as bytes). You can also scratch out the number and write a new one, or add more text to the end. Redis Strings are binary-safe, meaning they can hold any data up to 512 MB.

Core String Commands

# Basic set and get
SET    username "alice"           # Store string
GET    username                   # => "alice"
MSET   k1 v1 k2 v2 k3 v3        # Set multiple at once (atomic)
MGET   k1 k2 k3                  # Get multiple at once β†’ ["v1","v2","v3"]

# String inspection
STRLEN username                   # => 5 (length in bytes)
APPEND username "_admin"          # Appends β†’ "alice_admin", returns new length
GETRANGE username 0 4             # Substring β†’ "alice" (0-indexed, inclusive)
SETRANGE username 0 "bob"        # Overwrite 3 bytes at offset 0 → "bobce_admin"

Atomic Counters β€” the #1 Redis Pattern

Think of it like this: Imagine a ticket machine at a bakery. Every customer presses a button and gets the next number. Redis INCR is that button β€” it adds 1 and returns the new number in one atomic step. Even if 1,000 people press at the same time, every customer gets a unique number. No duplicates, ever.
SET    page:views 0
INCR   page:views            # => 1 (atomic: read + add 1 + write)
INCR   page:views            # => 2
INCRBY page:views 10         # => 12
DECR   page:views            # => 11
DECRBY page:views 5          # => 6
INCRBYFLOAT price 1.50       # => float increment (stored as string internally)

# Real-world: Rate limiting (10 requests per minute per user)
INCR   rate:login:user:42
EXPIRE rate:login:user:42 60  # Reset counter after 60s
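The INCR + EXPIRE pattern can be sketched as a pure-Python fixed-window limiter. The class name and structure here are illustrative, not a library API; in real Redis you would set EXPIRE only when INCR returns 1 (first hit in the window), which is what the lazy-expiry check below mimics.

```python
import time

class FixedWindowLimiter:
    """Pure-Python sketch of the INCR + EXPIRE rate-limit pattern."""

    def __init__(self, limit, window_seconds, clock=time.time):
        self.limit = limit
        self.window = window_seconds
        self.clock = clock
        self.counters = {}   # key -> (count, window_expires_at)

    def allow(self, key):
        now = self.clock()
        count, expires_at = self.counters.get(key, (0, now + self.window))
        if now >= expires_at:                      # window elapsed: counter "expired"
            count, expires_at = 0, now + self.window
        count += 1                                 # INCR (atomic in real Redis)
        self.counters[key] = (count, expires_at)   # expiry set on first INCR
        return count <= self.limit

# 10 requests per minute per user; fixed clock keeps the demo deterministic
limiter = FixedWindowLimiter(limit=10, window_seconds=60, clock=lambda: 1000.0)
results = [limiter.allow("rate:login:user:42") for _ in range(12)]
print(results.count(True))   # -> 10 (requests 11 and 12 are rejected)
```

The fixed window resets all at once at the boundary; the sorted-set sliding window shown later avoids that burst-at-the-edge behavior.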

SET Options (Combined Commands)

# SET with expiry
SET  session:abc123 "user_data" EX 3600    # Expire in 3600 seconds
SET  session:abc123 "user_data" PX 3600000 # Expire in milliseconds
SET  session:abc123 "user_data" EXAT 1700000000 # Expire at Unix timestamp (Redis 6.2+)

# Conditional SET
SET  lock:resource "owner" NX             # Set ONLY if key does Not eXist
SET  config:debug "true" XX              # Set ONLY if key already eXists

# Combine NX + EX (atomic try-lock pattern)
SET  lock:checkout "worker1" NX EX 30   # Acquire lock, expires in 30s

# Deprecated equivalents (avoid in new code)
SETEX key 3600 value   # Same as SET key value EX 3600
SETNX key value        # Same as SET key value NX

# Get and set atomically
GETSET key newvalue    # Returns old value, sets new (deprecated since 6.2: use SET key value GET)
GETEX  key EX 60      # Get and reset TTL (Redis 6.2+)
GETDEL key             # Get and delete atomically (Redis 6.2+)

Strings as Binary Data

# Store JSON (serialized object)
SET user:42 '{"name":"Alice","age":30}'
GET user:42   # => '{"name":"Alice","age":30}'

# Store image bytes, JWT tokens, serialized protobuf β€” Redis doesn't care
# Max value size: 512 MB

Use Strings When

  • Storing single values: session tokens, feature flags, config values
  • Atomic counters: page views, API rate limits, inventory counts
  • Caching serialized objects (JSON/protobuf)
  • Distributed locks (SET NX EX pattern)
📋 Lists
Think of it like this: A Redis List is like a train. You can add carriages to the front (left) or the back (right), and remove them from either end too. It keeps everything in order. A queue at a coffee shop works the same way β€” people join at the back, get served from the front.
HEAD (left)                          TAIL (right)
←LPUSH/LPOP                    RPUSH/RPOP→
+------+------+------+------+------+
| "e"  | "d"  | "c"  | "b"  | "a"  |
+------+------+------+------+------+
index:   0      1      2      3      4
        -5     -4     -3     -2     -1    (negative = from right)

Push & Pop Commands

# Push to head (left)
LPUSH tasks "send-email"          # List: ["send-email"]
LPUSH tasks "resize-image"        # List: ["resize-image", "send-email"]

# Push to tail (right)
RPUSH tasks "generate-report"     # List: ["resize-image", "send-email", "generate-report"]
RPUSH tasks "a" "b" "c"           # Push multiple values at once

# Pop from head (left)
LPOP  tasks                       # => "resize-image" (removes & returns)
LPOP  tasks 2                     # Pop and return 2 elements (Redis 6.2+)

# Pop from tail (right)
RPOP  tasks                       # => "generate-report"

# Blocking pop β€” wait up to 30s for an item (perfect for job workers)
BLPOP tasks 30                    # Blocks until item available or timeout
BRPOP tasks 30

# Move atomically between lists
LMOVE source dest LEFT RIGHT      # Pop from source left, push to dest right

Reading the List

LRANGE tasks 0 -1        # All elements (0 = first, -1 = last)
LRANGE tasks 0 4         # First 5 elements
LLEN   tasks             # Length of list
LINDEX tasks 0           # Element at index 0 (no removal)
LINDEX tasks -1          # Last element

# Search and remove
LREM tasks 2 "send-email"    # Remove 2 occurrences of "send-email" from head
LREM tasks -1 "send-email"   # Remove 1 from the tail
LREM tasks 0 "send-email"    # Remove ALL occurrences

# Trim to keep only a range
LTRIM tasks 0 99         # Keep only first 100 items (discard the rest)

Queue vs Stack Patterns

Pattern        | Push           | Pop     | Use Case
---------------+----------------+---------+----------------------------
Queue (FIFO)   | RPUSH          | LPOP    | Task queues, email sending
Stack (LIFO)   | LPUSH          | LPOP    | Undo history, recent items
Activity Feed  | LPUSH + LTRIM  | LRANGE  | Twitter-style timelines
# Real-world: Activity feed — keep latest 100 items
LPUSH feed:user:42 "liked post 7"
LPUSH feed:user:42 "commented on post 3"
LTRIM  feed:user:42 0 99          # Cap at 100 entries
LRANGE feed:user:42 0 9           # Get latest 10 for display
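The three list patterns map neatly onto Python's `collections.deque`, which makes a good mental model (a local analogy, not a Redis client): append/appendleft stand in for RPUSH/LPUSH, and a bounded deque behaves like LPUSH + LTRIM.

```python
from collections import deque

# Queue (FIFO): RPUSH to the tail, LPOP from the head
queue = deque()
queue.append("job1")                # RPUSH
queue.append("job2")
first = queue.popleft()             # LPOP -> "job1"

# Stack (LIFO): LPUSH and LPOP both work on the head
stack = deque()
stack.appendleft("a")               # LPUSH
stack.appendleft("b")
top = stack.popleft()               # LPOP -> "b" (last in, first out)

# Capped activity feed: LPUSH + LTRIM 0 2 keeps only the 3 newest
feed = deque(maxlen=3)              # maxlen silently drops from the tail
for event in ["e1", "e2", "e3", "e4"]:
    feed.appendleft(event)

print(first, top, list(feed))  # -> job1 b ['e4', 'e3', 'e2']
```

The `maxlen` drop-from-the-other-end behavior is exactly the effect of running LTRIM after every LPUSH.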
πŸ—‚οΈ Hashes Data Types β–Ύ
Think of it like this: A Redis Hash is like a filing folder with labeled dividers inside. The folder has one name (the key), but inside it holds many labeled sections β€” name, age, email, score. You can read just one section without opening everything else.
Key: "user:42"
+---------+-------------------+
| field   | value             |
+---------+-------------------+
| name    | "Alice"           |
| email   | "a@x.com"         |
| age     | "30"              |
| score   | "1500"            |
+---------+-------------------+

Hash Commands

# Set one or many fields
HSET user:42 name "Alice" email "alice@example.com" age 30 score 1500
# HMSET is deprecated in Redis 4.0+, HSET now accepts multiple fields

# Get one or many fields
HGET  user:42 name                  # => "Alice"
HMGET user:42 name email age        # => ["Alice", "alice@example.com", "30"]
HGETALL user:42                     # => {name: Alice, email: ..., age: 30}

# Field existence & deletion
HEXISTS user:42 email               # => 1 (exists) or 0
HDEL    user:42 age                 # Delete field, returns count deleted

# Count & introspect
HLEN  user:42                       # => 3 (number of fields)
HKEYS user:42                       # => ["name", "email", "score"]
HVALS user:42                       # => ["Alice", "alice@example.com", "1500"]

# Atomic increment on a hash field
HINCRBY   user:42 score 100         # Add 100 to score β†’ 1600
HINCRBYFLOAT user:42 balance 9.99  # Float increment

# Set only if field doesn't exist
HSETNX user:42 role "admin"        # Set "role" only if it doesn't already exist

# Scan fields in a large hash (like SCAN for keys)
HSCAN user:42 0 MATCH "a*" COUNT 10

Hash vs Separate String Keys

Aspect                 | Hash (HSET user:42 name ...)                 | Separate Strings (SET user:42:name ...)
-----------------------+----------------------------------------------+-----------------------------------------
Memory                 | More efficient (ziplist/listpack encoding    | Overhead per key
                       | for small hashes)                            |
Atomic multi-field get | Yes (HGETALL)                                | No (MGET with many keys)
Field TTL              | No whole-key TTL only (per-field TTL arrived | Yes (each key has its own TTL)
                       | in Redis 7.4 via HEXPIRE)                    |
Best for               | Objects with known fields                    | Fields with different TTLs
# Real-world: Cache a user profile as a hash
HSET session:abc123 user_id 42 role "editor" expires_at 1700000000
HGET session:abc123 role   # => "editor"
EXPIRE session:abc123 3600 # Expire the whole session in 1 hour
🎯 Sets
Think of it like this: A Redis Set is like a bag of unique LEGO pieces. You can throw in as many pieces as you want, but duplicates are silently ignored β€” each piece appears only once. You can also compare two bags: which pieces are in both? Which are only in mine?

Set Commands

# Add members (duplicates ignored)
SADD tags:post:1 "redis" "database" "nosql"
SADD tags:post:1 "redis"          # Ignored β€” already exists β†’ returns 0

# Read members
SMEMBERS  tags:post:1             # => {"redis", "database", "nosql"} (unordered!)
SCARD     tags:post:1             # => 3 (cardinality / count)

# Check membership
SISMEMBER tags:post:1 "redis"     # => 1 (is member)
SISMEMBER tags:post:1 "java"      # => 0 (not member)
SMISMEMBER tags:post:1 "redis" "java"  # Check multiple (Redis 6.2+)

# Remove members
SREM tags:post:1 "nosql"          # Remove "nosql"

# Pop a random member (remove & return)
SPOP tags:post:1                  # Remove and return random member
SRANDMEMBER tags:post:1 3         # Return 3 random members WITHOUT removing

Set Operations β€” the Superpower

Set A (Alice's friends): {Bob, Carol, Dave}
Set B (Bob's friends):   {Carol, Eve, Frank}

SUNION A B → {Bob, Carol, Dave, Eve, Frank}   (everyone)
SINTER A B → {Carol}                          (mutual friends)
SDIFF  A B → {Bob, Dave}                      (Alice's friends not in Bob's)
# Union — all unique members from both sets
SUNION  friends:alice friends:bob
SUNIONSTORE result friends:alice friends:bob  # Store result in new key

# Intersection β€” members in ALL sets
SINTER  friends:alice friends:bob
SINTERSTORE mutual_friends friends:alice friends:bob

# Difference β€” in first set but NOT in others
SDIFF   friends:alice friends:bob     # Alice's friends that Bob doesn't have
SDIFFSTORE alice_only friends:alice friends:bob

# Real-world: "Who to follow" suggestions
# Alice follows: {carol, dave}   Bob follows: {carol, eve}
SINTER following:alice following:bob   # => {carol} (both follow carol)
SDIFF  following:bob following:alice   # => {eve} (Bob follows, Alice doesn't → suggest to Alice)
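Python's built-in `set` exposes the same algebra, which makes the semantics easy to verify locally (this is plain Python, not a Redis client):

```python
# The friend sets from the diagram above
alice = {"bob", "carol", "dave"}     # members of friends:alice
bob   = {"carol", "eve", "frank"}    # members of friends:bob

everyone = alice | bob               # SUNION
mutual   = alice & bob               # SINTER
only_a   = alice - bob               # SDIFF friends:alice friends:bob

print(sorted(everyone))  # -> ['bob', 'carol', 'dave', 'eve', 'frank']
print(sorted(mutual))    # -> ['carol']
print(sorted(only_a))    # -> ['bob', 'dave']
```

The Redis versions have one advantage over doing this client-side: SINTERSTORE and friends run inside the server, so you never ship two large sets over the network just to intersect them.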

Use Sets When

  • Tracking unique items: unique daily visitors, unique IPs
  • Tagging systems: posts tagged with multiple tags
  • Social graphs: followers, following, blocked users
  • ACL/permission systems: which users have access to a resource
  • Set math (union, intersection, diff) for recommendations
📊 Sorted Sets
Think of it like this: A Redis Sorted Set is like a scoreboard at an arcade. Every player has a name (member) and a score. The board always keeps players sorted by score β€” lowest to highest. You can instantly ask "Who's in 3rd place?" or "Who are the top 10?"
Leaderboard: "game:scores"
+------+----------+-------+
| Rank | Member   | Score |
+------+----------+-------+
|  1   | "alice"  | 9800  |
|  2   | "bob"    | 7200  |
|  3   | "carol"  | 6500  |
|  4   | "dave"   | 3100  |
+------+----------+-------+
Internally stored as a skip list for O(log N) operations

Core Sorted Set Commands

# Add members with scores
ZADD game:scores 9800 "alice"
ZADD game:scores 7200 "bob" 6500 "carol" 3100 "dave"

# Add with options
ZADD leaderboard NX 5000 "eve"    # Add only if NOT exists
ZADD leaderboard XX 5500 "eve"    # Update only if EXISTS
ZADD leaderboard GT 6000 "eve"    # Update only if new score > current (Redis 6.2+)
ZADD leaderboard LT 4000 "eve"    # Update only if new score < current

# Get score and rank
ZSCORE game:scores "alice"        # => "9800" (as string)
ZRANK  game:scores "alice"        # => 3 (0-indexed ascending; lowest score = rank 0)
ZREVRANK game:scores "alice"      # => 0 (0-indexed, highest score = rank 0)

# Range queries β€” by rank
ZRANGE    game:scores 0 -1              # All, ascending by score
ZRANGE    game:scores 0 -1 WITHSCORES  # Include scores in result
ZREVRANGE game:scores 0 2              # Top 3, descending (deprecated in Redis 6.2)
ZRANGE    game:scores 0 2 REV          # Top 3 descending (Redis 6.2+ unified syntax)

# Range queries β€” by score
ZRANGEBYSCORE game:scores 5000 9999          # Members with score 5000-9999
ZRANGEBYSCORE game:scores -inf +inf          # All members
ZRANGEBYSCORE game:scores "(5000" 9999       # Exclusive lower bound (> 5000)
ZREVRANGEBYSCORE game:scores 9999 5000       # Descending by score

# Remove
ZREM game:scores "dave"                 # Remove member
ZREMRANGEBYRANK  game:scores 0 2        # Remove by rank range
ZREMRANGEBYSCORE game:scores -inf 1000  # Remove members with score ≀ 1000

# Count and pop
ZCARD  game:scores                     # Total member count
ZCOUNT game:scores 5000 +inf           # Count in score range
ZPOPMIN game:scores 3                  # Remove and return 3 lowest-score members
ZPOPMAX game:scores 3                  # Remove and return 3 highest-score members

# Increment a member's score (atomic)
ZINCRBY game:scores 200 "bob"          # bob's score: 7200 β†’ 7400

Real-World Patterns

# 1. Rate limiting with sliding window
# Score = Unix timestamp, member = request_id
ZADD rate:user:42 1700000000 "req-abc"
ZREMRANGEBYSCORE rate:user:42 -inf "(now-60)"  # Remove requests > 60s ago
ZCARD rate:user:42                             # Count in last 60s β†’ compare to limit

# 2. Priority queue (task processing)
# Lower score = higher priority
ZADD tasks 1 "critical-alert"   # highest priority
ZADD tasks 5 "send-newsletter"  # lowest priority
ZPOPMIN tasks                   # Dequeue highest priority task

# 3. Time-based expiry of items in a set
# Score = expiry timestamp
ZADD expiring:sessions 1700003600 "session:abc"
ZREMRANGEBYSCORE expiring:sessions -inf (current_time)  # Clean expired
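Pattern 1 above (score = timestamp) can be sketched in pure Python, with a sorted list standing in for the sorted set (the class name is illustrative). The three steps are the same: trim entries older than the window, count what remains, then record the new hit.

```python
import bisect

class SlidingWindowLimiter:
    """Sketch of the sorted-set sliding window: score = request timestamp."""

    def __init__(self, limit, window):
        self.limit, self.window = limit, window
        self.hits = {}   # key -> sorted list of timestamps (stands in for the zset)

    def allow(self, key, now):
        ts = self.hits.setdefault(key, [])
        # ZREMRANGEBYSCORE -inf (now - window): drop hits outside the window
        cutoff = bisect.bisect_right(ts, now - self.window)
        del ts[:cutoff]
        if len(ts) >= self.limit:        # ZCARD compared to the limit
            return False
        bisect.insort(ts, now)           # ZADD with score = now
        return True

rl = SlidingWindowLimiter(limit=3, window=60)
allowed = [rl.allow("user:42", t) for t in (0, 10, 20, 30, 75)]
print(allowed)  # -> [True, True, True, False, True]
```

At t=30 all three earlier hits are still inside the 60s window, so the request is rejected; by t=75 the hits at 0 and 10 have aged out and the request passes. Unlike the fixed INCR window, the limit rolls continuously.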
⏰ Expiration & Eviction Policies
Think of it like this: Imagine your fridge (Redis) has a limited shelf. Some food has an expiry date printed on it (TTL = Time To Live) β€” it gets thrown out automatically when that date passes. But when the fridge is full with no expiry dates, the fridge must decide what to throw out to make room. That decision is the eviction policy.

Setting Expiration

# Set expiry on an existing key
EXPIRE  key 3600          # Expire in 3600 seconds (1 hour)
PEXPIRE key 3600000       # Expire in milliseconds
EXPIREAT  key 1700000000  # Expire at Unix timestamp (seconds)
PEXPIREAT key 1700000000000  # Expire at Unix timestamp (milliseconds)

# Check remaining time
TTL  key      # Seconds remaining (-1 = no expiry, -2 = key doesn't exist)
PTTL key      # Milliseconds remaining

# Remove expiry (make key persistent again)
PERSIST key   # Returns 1 if expiry was removed, 0 if key had no expiry

# Set with expiry in one command
SET key value EX 3600     # Most common pattern

# Get expiry as absolute timestamp (Redis 7.0+)
EXPIRETIME  key           # => Unix timestamp in seconds
PEXPIRETIME key           # => Unix timestamp in milliseconds

How Redis Expires Keys

Redis uses two complementary strategies β€” not just one:

  1. Lazy expiration β€” when a key is accessed, Redis checks if it's expired. If yes, delete and return nil. Zero CPU overhead for untouched keys.
  2. Active expiration β€” Redis runs a background job 10x/second randomly sampling 20 volatile keys and deleting expired ones. Repeats if >25% of sampled keys were expired.
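Both strategies can be sketched over a plain dict (class and method names are illustrative): `get` performs the lazy check on access, and `active_expire_cycle` plays the role of the background sampler.

```python
import random
import time

class TTLStore:
    """Sketch of lazy + active expiration over a plain dict."""

    def __init__(self, clock=time.time):
        self.data = {}      # key -> value
        self.expiry = {}    # key -> absolute expiry time (volatile keys only)
        self.clock = clock

    def set(self, key, value, ex=None):
        self.data[key] = value
        if ex is not None:
            self.expiry[key] = self.clock() + ex

    def get(self, key):
        # Lazy expiration: check on access, delete if stale, return nil
        if key in self.expiry and self.clock() >= self.expiry[key]:
            del self.data[key]
            del self.expiry[key]
        return self.data.get(key)

    def active_expire_cycle(self, sample=20):
        # Active expiration: sample volatile keys, purge the expired ones
        now = self.clock()
        for k in random.sample(list(self.expiry), min(sample, len(self.expiry))):
            if now >= self.expiry[k]:
                self.data.pop(k, None)
                self.expiry.pop(k, None)

clock = [0.0]
store = TTLStore(clock=lambda: clock[0])
store.set("session:abc", "alice", ex=60)
store.set("config:x", "on")              # no TTL -> persistent
clock[0] = 61.0
print(store.get("session:abc"))  # -> None (lazily expired on access)
print(store.get("config:x"))     # -> on
```

Real Redis repeats the active cycle while more than 25% of sampled keys turn out to be expired, so memory from dead keys is reclaimed even if nobody ever reads them.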

Eviction Policies (maxmemory-policy)

When Redis hits its maxmemory limit, it must evict keys. Eight policies control this:

Policy          | Which Keys         | How Chosen
----------------+--------------------+----------------------------------
noeviction      | None               | Returns error on writes (default)
allkeys-lru     | All keys           | Least Recently Used
volatile-lru    | Keys with TTL only | Least Recently Used
allkeys-lfu     | All keys           | Least Frequently Used
volatile-lfu    | Keys with TTL only | Least Frequently Used
allkeys-random  | All keys           | Random eviction
volatile-random | Keys with TTL only | Random eviction
volatile-ttl    | Keys with TTL only | Shortest TTL first
# Configure in redis.conf or at runtime
CONFIG SET maxmemory 2gb
CONFIG SET maxmemory-policy allkeys-lru

# For a pure cache: allkeys-lru (evict LRU from any key)
# For cache + persistent data mix: volatile-lru (only evict TTL keys)
# For frequency-biased workloads: allkeys-lfu (keeps "hot" keys)

LRU vs LFU

            | LRU (Least Recently Used) | LFU (Least Frequently Used)
------------+---------------------------+---------------------------------
Evicts      | Key not touched longest   | Key accessed least often
Good for    | Time-locality workloads   | Skewed popularity (hot keys)
Redis impl. | Approximated (sampled)    | Approximated with decay counter
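Exact LRU is easy to model with an `OrderedDict` (Redis itself only approximates it by sampling a few keys per eviction, to avoid the bookkeeping cost). This sketch shows the core idea: every access moves a key to the "recently used" end, and eviction pops from the other end.

```python
from collections import OrderedDict

class LRUCache:
    """Exact LRU for illustration; Redis approximates this via sampling."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.d = OrderedDict()           # insertion end = most recently used

    def get(self, key):
        if key not in self.d:
            return None
        self.d.move_to_end(key)          # touch: mark as most recently used
        return self.d[key]

    def put(self, key, value):
        if key in self.d:
            self.d.move_to_end(key)
        self.d[key] = value
        if len(self.d) > self.capacity:
            lru_key, _ = self.d.popitem(last=False)   # evict least recently used
            return lru_key
        return None

cache = LRUCache(capacity=2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")                   # "a" is now most recently used
evicted = cache.put("c", 3)      # over capacity: someone must go
print(evicted)                   # -> b (the key not touched longest)
```

An LFU variant would track an access counter per key instead of recency, with periodic decay so a key that was hot last week does not squat in the cache forever.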
📡 Pub/Sub & Messaging
Think of it like this: Pub/Sub is like a radio station (publisher) and radio listeners (subscribers). The station broadcasts on channel 99.1FM. Anyone tuned to 99.1FM hears the broadcast instantly. But if your radio was off when the song played, you missed it forever β€” there's no rewind.
Publisher              Redis            Subscribers
PUBLISH news         +---------+        SUBSCRIBE news
"Breaking: ..."  --> |  news   | -----> Client A hears it
                     | channel | -----> Client B hears it
                     +---------+        Client C (offline) ✗ misses it

Pub/Sub Commands

# Subscriber side — blocks and listens
SUBSCRIBE news sports weather       # Subscribe to specific channels
PSUBSCRIBE news.*                   # Pattern subscribe (glob: news.tech, news.sports, etc.)
UNSUBSCRIBE news                    # Unsubscribe from a channel
PUNSUBSCRIBE news.*                 # Unsubscribe from a pattern

# Publisher side β€” fire and forget
PUBLISH news "Server rebooting in 5 minutes"  # => 2 (number of clients that received it)
PUBLISH sports "Goal! 1-0!"

# Inspect subscriptions (server-side)
PUBSUB CHANNELS           # List all active channels
PUBSUB CHANNELS news.*    # Channels matching pattern
PUBSUB NUMSUB news sports # Subscriber count per channel
PUBSUB NUMPAT             # Number of pattern subscriptions

Pub/Sub Limitations

Limitation                   | Why it matters                            | Alternative
-----------------------------+-------------------------------------------+-------------------------
No message persistence       | Offline clients miss all messages         | Redis Streams
No delivery guarantee        | Network issues = silent message loss      | Redis Streams / Kafka
No consumer groups           | Can't load-balance across workers         | Redis Streams
Subscriber blocks connection | Can't run other commands while subscribed | Use separate connection

When to Use Pub/Sub

  • Real-time notifications (chat messages, live dashboards)
  • Cache invalidation signals across app servers
  • Broadcasting config changes to microservices
  • Live events where missing a message is acceptable
# Real-world: Invalidate cache across all app servers
PUBLISH cache:invalidate "user:42:profile"
# Every server subscribed to "cache:invalidate" deletes that key from its local cache
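The fire-and-forget semantics are easy to see in a tiny in-process model (class name hypothetical): a channel is just a list of callbacks, publish invokes whoever is subscribed right now, and anyone not subscribed at that instant simply never sees the message.

```python
from collections import defaultdict

class MiniPubSub:
    """In-process sketch of Redis pub/sub: no persistence, no replay."""

    def __init__(self):
        self.channels = defaultdict(list)   # channel -> subscriber callbacks

    def subscribe(self, channel, callback):
        self.channels[channel].append(callback)

    def publish(self, channel, message):
        subscribers = self.channels.get(channel, [])
        for cb in subscribers:
            cb(message)                     # delivered only to current listeners
        return len(subscribers)             # like PUBLISH's return value

bus = MiniPubSub()

# An app server keeps a local cache and listens for invalidation signals
local_cache = {"user:42:profile": {"name": "Alice"}}
bus.subscribe("cache:invalidate", lambda key: local_cache.pop(key, None))

receivers = bus.publish("cache:invalidate", "user:42:profile")
print(receivers, local_cache)   # -> 1 {}
```

A server that subscribes after this publish gets nothing, which is precisely the gap Redis Streams fills with its persisted, replayable log.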
🔒 Transactions & Pipelines
Think of it like this: A transaction is like placing an online grocery order. You add everything to your cart, then hit "place order" β€” all items ship together. If your card is declined, nothing ships. A pipeline is like writing a shopping list and handing it to someone: they do all the shopping in one trip instead of going back and forth for each item.

Transactions β€” MULTI/EXEC

# Queue commands, then execute all atomically
MULTI                         # Start transaction (queue mode)
  INCR inventory:item:5       # Queued (not executed yet)
  DECR available:item:5       # Queued
  LPUSH orders:pending 5      # Queued
EXEC                          # Execute all queued commands atomically

# Abort a transaction
MULTI
  SET foo "bar"
DISCARD                       # Discard all queued commands

Important: Redis transactions are not like SQL transactions. Individual command errors within EXEC do NOT roll back other commands β€” the rest still execute. Use transactions for atomicity (all-or-nothing delivery), not for SQL-style rollbacks.

WATCH β€” Optimistic Locking (CAS)

Think of it like this: WATCH is like reserving a concert seat. You look at the available seats (WATCH), pick one, then try to book it. If someone else booked that same seat while you were deciding, your booking fails and you start over. This is "check and set" β€” check the value, set if unchanged.
# Pattern: read-modify-write with optimistic locking
WATCH balance:user:42         # Watch this key for changes

# (Read current value)
GET balance:user:42            # => "1000"

MULTI                          # Start transaction
  DECRBY balance:user:42 100  # Deduct 100
EXEC
# => nil if balance:user:42 changed since WATCH (transaction aborted)
# => [array of results] if no one changed it (transaction succeeded)

# Common retry loop (application-side)
# while True:
#   WATCH balance
#   val = GET balance
#   MULTI
#   SET balance (val - amount)
#   if EXEC != nil: break  # Success
#   # else: retry
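The retry loop above can be modeled in pure Python with a per-key version counter standing in for the dirty-key tracking Redis does per client connection (class and method names are illustrative, not a client API):

```python
class WatchedStore:
    """Sketch of WATCH/MULTI/EXEC: EXEC applies only if the key is unchanged."""

    def __init__(self):
        self.data = {}
        self.version = {}    # bumped on every write, like WATCH's dirty flag

    def get(self, key):
        return self.data.get(key)

    def set(self, key, value):
        self.data[key] = value
        self.version[key] = self.version.get(key, 0) + 1

    def exec_if_unchanged(self, key, watched_version, new_value):
        # EXEC: abort (return None, like nil) if anyone wrote since WATCH
        if self.version.get(key, 0) != watched_version:
            return None
        self.set(key, new_value)
        return new_value

store = WatchedStore()
store.set("balance:user:42", 1000)

v = store.version["balance:user:42"]       # WATCH
balance = store.get("balance:user:42")     # read: 1000
store.set("balance:user:42", 900)          # another client writes first!
attempt1 = store.exec_if_unchanged("balance:user:42", v, balance - 100)
print(attempt1)                            # -> None (aborted, must retry)

v = store.version["balance:user:42"]       # retry: re-WATCH and re-read
balance = store.get("balance:user:42")     # now 900
attempt2 = store.exec_if_unchanged("balance:user:42", v, balance - 100)
print(attempt2)                            # -> 800 (succeeded)
```

This is optimistic concurrency: no lock is ever held, so contention costs retries rather than blocking, which suits the low-conflict workloads Redis typically sees.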

Pipelines β€” Batching Without Atomicity

# Without pipeline: 100 round-trips to server
for i in range(100):
    redis.set(f"key:{i}", i)

# With pipeline: 1 round-trip, 100 commands sent together
pipe = redis.pipeline(transaction=False)  # No MULTI/EXEC
for i in range(100):
    pipe.set(f"key:{i}", i)
pipe.execute()  # Send all 100 at once, get all responses back

                    | MULTI/EXEC (Transaction)                 | Pipeline
--------------------+------------------------------------------+------------------------------
Atomicity           | Yes (queued commands run as one block)   | No
Server-side queuing | Yes                                      | No (client-side batching)
Round trips         | One per queued command, unless combined  | 1 (all commands batched)
                    | with a pipeline                          |
Use when            | Need atomicity guarantee                 | Just want network efficiency
💾 Persistence: RDB vs AOF
Think of it like this: Imagine writing a book. RDB is like taking a photo of every page every hour β€” fast and compact, but if there's a fire in the last hour, those pages are gone. AOF is like a scribe writing down every word as you speak β€” if there's a fire, you only lose the last second. But the log grows very long.
RDB (Snapshot):
Time ──────────────────────────────────────────────►
  snapshot t=0    snapshot t=5min    snapshot t=10min    CRASH
  saved           saved              saved               lose up to 5min of writes

AOF (Append-Only File):
Time ──────────────────────────────────────────────►
  write   write   write   write   ...                    CRASH
  every write logged                                     lose ~1 second

RDB (Redis Database Snapshots)

# redis.conf — trigger snapshot when N writes happen within M seconds
save 3600 1      # After 3600s if at least 1 key changed
save 300  100    # After 300s if at least 100 keys changed
save 60   10000  # After 60s if at least 10000 keys changed

# Manual snapshot commands
BGSAVE          # Fork child process to save in background (non-blocking)
SAVE            # Blocking save (avoid: blocks all clients)
LASTSAVE        # Unix timestamp of last successful save

# RDB file location
dbfilename dump.rdb
dir /var/lib/redis

AOF (Append Only File)

# Enable AOF in redis.conf
appendonly yes
appendfilename "appendonly.aof"

# fsync policy β€” controls how often AOF is flushed to disk
appendfsync always    # Sync after EVERY command β€” safest, slowest
appendfsync everysec  # Sync every 1 second β€” good balance (default)
appendfsync no        # Let OS decide β€” fastest, most data risk

# AOF rewrite β€” compact the log periodically
BGREWRITEAOF           # Trigger manual compaction
auto-aof-rewrite-percentage 100   # Rewrite when AOF doubles in size
auto-aof-rewrite-min-size 64mb    # But only if AOF is at least 64MB

RDB vs AOF Comparison

                   | RDB                            | AOF                  | Hybrid (RDB+AOF)
-------------------+--------------------------------+----------------------+----------------------------------
Data loss risk     | Minutes (last snapshot)        | ~1 second (everysec) | ~1 second
File size          | Compact binary                 | Larger (text log)    | Compact + recent AOF
Restart speed      | Fast                           | Slow (replay all)    | Fast (load RDB, replay short AOF)
Performance impact | Fork overhead at snapshot time | Slight (fsync calls) | Mixed
Best for           | Backup / DR                    | Near-zero data loss  | Production recommended
# Hybrid mode (Redis 4.0+) — recommended for production
aof-use-rdb-preamble yes  # AOF file starts with RDB snapshot, then AOF log
🔄 Replication & Sentinel
Think of it like this: Replication is like having a personal assistant who copies everything you write into their own notebook. If you're sick (master fails), the assistant can take over immediately. Sentinel is the manager who watches both you and your assistant β€” and automatically promotes the assistant if you don't show up.
Replication:
+-------------+   async copy   +-------------+
|   Master    | -------------> |   Replica   |
| (read/write)|                | (read only) |
+-------------+                +-------------+
        \                      +-------------+
         --------------------> |   Replica   |
                               | (read only) |
                               +-------------+

Sentinel (High Availability):
+-----------+   +-----------+   +-----------+
|Sentinel 1 |   |Sentinel 2 |   |Sentinel 3 |   <- 3 sentinels for quorum
+-----+-----+   +-----+-----+   +-----+-----+
      |               |               |
      +---------------+---------------+
                      | monitor
               +-------------+
               |   Master    |   (if down -> vote -> promote replica)
               +-------------+

Replication Setup

# On replica server — redis.conf
replicaof 192.168.1.10 6379   # Connect to master (replaces deprecated SLAVEOF)

# Or at runtime
REPLICAOF 192.168.1.10 6379
REPLICAOF NO ONE              # Promote this replica to master (manual failover)

# Inspect replication status
INFO replication
# Returns: role, connected_slaves, master_replid, master_repl_offset, etc.

# Replication is asynchronous by default
# For stronger durability (wait for replicas to acknowledge):
WAIT 2 1000   # Wait for 2 replicas to confirm, max 1000ms timeout

Key Replication Facts

  • Replication is asynchronous β€” replicas may lag behind master by milliseconds
  • Replicas are read-only by default
  • On connection, replica gets a full RDB snapshot then streams commands
  • Partial resynchronization (PSYNC) avoids full resync after brief disconnect

Redis Sentinel

# sentinel.conf
sentinel monitor mymaster 192.168.1.10 6379 2  # Monitor master, quorum=2
sentinel down-after-milliseconds mymaster 5000  # 5s to declare master down
sentinel failover-timeout mymaster 10000        # 10s to complete failover
sentinel parallel-syncs mymaster 1             # 1 replica syncs at a time after failover

# Sentinel commands
SENTINEL masters                  # List monitored masters
SENTINEL replicas mymaster        # List replicas for mymaster
SENTINEL sentinels mymaster       # List sentinel peers
SENTINEL get-master-addr-by-name mymaster  # Current master IP:port

Quorum: the number of Sentinels that must agree the master is unreachable before it is marked objectively down and a failover can start. The failover itself still needs authorization from a majority of all Sentinels, whatever the quorum is. Always run an odd number of Sentinels (3, 5) so a clean majority exists.
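The quorum/majority distinction above trips up many candidates, so here is the arithmetic as a hedged Python sketch (simplified; real Sentinels also track epochs and leader election):

```python
def can_mark_odown(votes: int, quorum: int) -> bool:
    """Enough Sentinels agree the master is down -> objectively down (ODOWN)."""
    return votes >= quorum

def can_authorize_failover(votes: int, total_sentinels: int) -> bool:
    """Failover always needs a strict majority of ALL Sentinels, regardless of quorum."""
    return votes >= total_sentinels // 2 + 1

# 3 Sentinels, quorum=2: two votes mark the master down AND form a majority
assert can_mark_odown(2, 2) and can_authorize_failover(2, 3)
# quorum=1 can flag the master down early, but 1 vote never authorizes failover
assert can_mark_odown(1, 1) and not can_authorize_failover(1, 3)
```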

🌐 Redis Cluster Scalability β–Ύ
Think of it like this: Imagine a library with 16,384 shelves. Every book (key) gets assigned to a shelf based on its title (hash). Three librarians each manage a section of shelves. When you ask for a book, if your librarian doesn't have it, they send you to the right one. That's Redis Cluster β€” one big logical store, split across multiple nodes.
Redis Cluster: 3 Masters + 3 Replicas

  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
  β”‚            16,384 Hash Slots Total                  β”‚
  β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€€
  β”‚ Node A      β”‚ Node B          β”‚ Node C            β”‚
  β”‚ Slots 0-5460β”‚ Slots 5461-10922β”‚ Slots 10923-16383 β”‚
  β”‚ (Master)    β”‚ (Master)        β”‚ (Master)          β”‚
  β”‚      ↕      β”‚       ↕         β”‚        ↕          β”‚
  β”‚ Replica A   β”‚ Replica B       β”‚ Replica C         β”‚
  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

  Key "user:42" β†’ CRC16("user:42") % 16384 = 5432 β†’ Node A

How Key Distribution Works

# Every key maps to a hash slot:
slot = CRC16(key) % 16384

# Hash tags β€” force keys to the same slot (use curly braces)
SET {user:42}.name "Alice"    # slot = CRC16("user:42") % 16384
SET {user:42}.email "a@x.com" # same slot as above!
# Without hash tags, different keys may land on different nodes
# Multi-key operations (MGET, MSET) require all keys on the same node
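The slot computation above can be run locally. Redis Cluster uses CRC16-CCITT (the XMODEM variant) and honors {hash tags}: if a key contains a non-empty brace-delimited substring, only that substring is hashed. A minimal sketch:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_hash_slot(key: str) -> int:
    """Map a key to one of 16,384 slots, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:           # only a non-empty tag counts
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Hash tags pin related keys to the same slot (and therefore the same node):
assert key_hash_slot("{user:42}.name") == key_hash_slot("{user:42}.email")
assert key_hash_slot("{user:42}.name") == key_hash_slot("user:42")
```

This is why MGET {user:42}.name {user:42}.email works in cluster mode while the untagged versions may not.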

Cluster Setup Commands

# redis.conf β€” enable cluster mode
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 5000

# Create cluster (redis-cli tool)
redis-cli --cluster create \
  192.168.1.10:7000 192.168.1.10:7001 192.168.1.10:7002 \
  192.168.1.11:7000 192.168.1.11:7001 192.168.1.11:7002 \
  --cluster-replicas 1   # 1 replica per master

# Cluster introspection
CLUSTER INFO             # Cluster state and stats
CLUSTER NODES            # All nodes and their slot ranges
CLUSTER SLOTS            # Slot-to-node mapping
CLUSTER KEYSLOT user:42  # Which slot does this key map to?
CLUSTER COUNTKEYSINSLOT 5432  # Keys in a slot

MOVED vs ASK Redirects

Error                          Meaning                              Client action
MOVED 5432 192.168.1.10:7000   Key permanently lives on this node   Update internal routing table, retry there
ASK 5432 192.168.1.11:7001     Key is being migrated right now      Send ASKING, then retry (temporary redirect)
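A cluster-aware client handles these two redirects differently, as the table shows. A toy sketch of the routing-table logic a real client library keeps internally (class and method names are illustrative):

```python
class ClusterClient:
    """Toy slot-routing table, updated from MOVED/ASK redirects."""

    def __init__(self, default_node):
        self.default_node = default_node
        self.slot_map = {}            # slot -> "host:port", learned lazily

    def node_for_slot(self, slot):
        return self.slot_map.get(slot, self.default_node)

    def on_moved(self, slot, node):
        # MOVED is permanent: cache the mapping so future requests go straight there
        self.slot_map[slot] = node

    def on_ask(self, slot, node):
        # ASK is temporary (slot mid-migration): retry once there, do NOT cache
        return node

client = ClusterClient("192.168.1.10:7000")
client.on_moved(5432, "192.168.1.11:7001")
assert client.node_for_slot(5432) == "192.168.1.11:7001"   # remembered
client.on_ask(100, "192.168.1.12:7002")
assert client.node_for_slot(100) == "192.168.1.10:7000"    # ASK left no trace
```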

Cluster Limitations

  • Multi-key operations only work if all keys are in the same slot (use hash tags)
  • No cross-slot transactions (MULTI/EXEC fails with cross-slot keys)
  • Lua scripts must access only keys in the same slot
  • SELECT (database switching) not supported in cluster mode β€” only db 0
πŸš€ Caching Patterns Architecture β–Ύ
Think of it like this: Your office has a filing cabinet (database) in the basement and a sticky notepad on your desk (Redis cache). Cache-aside = you check your notepad first; if not there, go to the basement and put a copy on your notepad. Write-through = every time you update a file, you update both notepad and basement at the same time.

Pattern 1: Cache-Aside (Lazy Loading) β€” Most Common

# App logic (pseudocode):
# 1. Try cache first
value = GET cache:user:42
if value is nil:
  # 2. Cache miss β†’ query database
  value = db.query("SELECT * FROM users WHERE id = 42")
  # 3. Populate cache with TTL
  SET cache:user:42 value EX 3600

# Pros: Only caches what's actually needed
# Cons: Cache miss = slow first request (cold start)
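The pseudocode above translates directly into runnable form. A minimal sketch using a dict as a stand-in for both Redis and the database (names like get_user and db_reads are illustrative):

```python
import time

db = {"user:42": {"name": "Alice"}}   # stand-in for the real database
cache = {}                            # key -> (value, absolute expiry time)
db_reads = 0                          # count real DB queries to show the effect

def get_user(key, ttl=3600.0):
    """Cache-aside: try the cache first, fall back to the DB, populate with a TTL."""
    global db_reads
    entry = cache.get(key)
    if entry is not None and entry[1] > time.monotonic():
        return entry[0]                           # cache hit
    db_reads += 1                                 # cache miss: one real DB query
    value = db[key]
    cache[key] = (value, time.monotonic() + ttl)  # SET ... EX ttl
    return value

get_user("user:42")
get_user("user:42")
assert db_reads == 1   # second call was served from the cache
```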

Pattern 2: Write-Through

# On every write, update BOTH cache and database synchronously
def update_user(id, data):
    db.update(f"UPDATE users SET ... WHERE id = {id}")
    SET cache:user:{id} serialize(data) EX 3600

# Pros: Cache always fresh, no stale reads
# Cons: Write latency increases (two writes), cache fills with rarely-read data

Pattern 3: Write-Behind (Write-Back)

# Write to cache first, flush to DB asynchronously later
SET cache:user:42 data EX 3600     # Fast β€” just write to Redis
LPUSH write_queue "user:42:update" # Queue DB write for background worker
# Background worker flushes cache β†’ DB periodically

# Pros: Fastest writes
# Cons: Data loss risk if Redis crashes before flush
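The write-behind flow can be sketched with an explicit queue and a flush step standing in for the background worker (a simplified model; real systems batch, deduplicate, and retry):

```python
from collections import deque

cache, db = {}, {}
write_queue = deque()

def update_user(key, data):
    """Write-behind: fast write to cache, DB write deferred onto a queue."""
    cache[key] = data
    write_queue.append((key, data))

def flush_worker():
    """Background worker: drain queued writes into the database."""
    flushed = 0
    while write_queue:
        key, data = write_queue.popleft()
        db[key] = data
        flushed += 1
    return flushed

update_user("user:42", {"name": "Alice"})
assert "user:42" in cache and "user:42" not in db   # DB write still pending
assert flush_worker() == 1                          # worker catches the DB up
assert db["user:42"]["name"] == "Alice"
```

The window between the two asserts is exactly the data-loss risk the cons line describes: if Redis dies before flush_worker runs, the write is gone.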

Cache Stampede (Thundering Herd)

Think of it like this: 10,000 users all request the homepage at the same moment. The cache expires. All 10,000 requests simultaneously hit the database. The DB crashes. That's a stampede β€” the herd of requests thunders toward the DB at once.
# Solution 1: Mutex / single-filler lock
# Only one worker queries DB; others wait and get the cached result
lock_acquired = SET lock:cache:homepage 1 NX EX 5  # 5s lock
if lock_acquired:
    data = db.query(...)
    SET cache:homepage data EX 3600
    DEL lock:cache:homepage
else:
    # Wait briefly and retry from cache
    sleep(0.1)
    data = GET cache:homepage

# Solution 2: Probabilistic early expiration (PER)
# Refresh cache slightly before it expires based on recompute cost

# Solution 3: Stale-while-revalidate
# Return stale data immediately, refresh in background

Cache Invalidation Strategies

Strategy         How                                          Trade-off
TTL-based        Let key expire naturally                     Simple; brief staleness window
Event-based      DEL on write; PUBLISH invalidation message   Fresh data; more complex code
Versioned keys   cache:user:42:v3 β€” bump version on update    Old versions linger until TTL
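The versioned-keys row deserves a concrete illustration, since it avoids DEL entirely: readers derive the current key from a version pointer, and invalidation just bumps the pointer. A hedged sketch with a dict standing in for Redis (helper names are illustrative):

```python
store = {}   # stand-in for Redis

def cache_key(user_id):
    """Derive the current cache key from a per-user version pointer."""
    version = store.get(f"user:{user_id}:ver", 1)
    return f"cache:user:{user_id}:v{version}"

def invalidate(user_id):
    """Bump the version; old cache:...:vN entries are simply never read again."""
    store[f"user:{user_id}:ver"] = store.get(f"user:{user_id}:ver", 1) + 1

store[cache_key(42)] = {"name": "Alice"}     # cached under ...:v1
old_key = cache_key(42)
invalidate(42)
assert cache_key(42) != old_key              # readers now look under ...:v2
assert store.get(cache_key(42)) is None      # fresh key misses -> reload from DB
```

The trade-off in the table is visible here: the v1 entry stays in memory until its TTL fires, which is why versioned keys should always carry an expiry.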
πŸ” Distributed Locking Concurrency β–Ύ
Think of it like this: Imagine a single bathroom key on a hook at a restaurant. Whoever grabs the key gets exclusive access. When done, they hang the key back. But what if someone forgets to return it? A distributed lock in Redis has a built-in timer (TTL) β€” if the key holder crashes, the lock auto-releases after a set time.

Correct Locking Pattern (SET NX EX)

# Acquire lock β€” atomic, single command
token = random_unique_id()   # e.g., UUID
acquired = SET lock:resource token NX EX 30
# NX = only if Not eXists (acquire only if no one holds the lock)
# EX 30 = auto-release after 30 seconds (prevents deadlock if holder crashes)

if not acquired:
    # Lock is held by someone else β€” retry or fail fast

# Do critical work...
process_order()

# Release lock β€” ONLY if we still own it (verify token!)
# Use Lua script for atomic check-and-delete
lua_script = """
  if redis.call('GET', KEYS[1]) == ARGV[1] then
    return redis.call('DEL', KEYS[1])
  else
    return 0
  end
"""
EVAL lua_script 1 lock:resource token

Why Check the Token Before Releasing?

Timeline of a bug:

  t=0: Worker A acquires lock (token="abc"), EX=5s
  t=4: Worker A's process pauses (GC pause, network lag)
  t=5: Lock expires! Worker B acquires lock (token="xyz")
  t=6: Worker A resumes and does DEL lock:resource
       β†’ Worker A just released Worker B's lock! πŸ’₯

Fix: Only DEL if GET lock == "abc" (your token)
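That exact timeline can be replayed in a few lines, with a dict standing in for Redis and a manual delete simulating TTL expiry (a simplified model; real expiry is server-side):

```python
store = {}   # stand-in for Redis: lock key -> owner token

def acquire(key, token):
    """SET key token NX β€” succeed only if nobody holds the lock."""
    if key in store:
        return False
    store[key] = token
    return True

def release(key, token):
    """The Lua check-and-delete from above: DEL only if we still own the lock."""
    if store.get(key) == token:
        del store[key]
        return 1
    return 0

assert acquire("lock:resource", "abc")        # t=0: Worker A holds the lock
del store["lock:resource"]                    # t=5: TTL expires during A's pause
assert acquire("lock:resource", "xyz")        # t=5: Worker B takes over
assert release("lock:resource", "abc") == 0   # t=6: A's late release is a no-op
assert store["lock:resource"] == "xyz"        # B's lock survives
```

A bare DEL at t=6 would have evicted B; the token check turns the bug into a harmless return value of 0.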

Redlock Algorithm (Multi-Node)

For stronger guarantees across Redis Cluster or multiple independent Redis instances:

# Redlock: acquire lock on N/2 + 1 independent Redis nodes
# 1. Record start time
# 2. Try to SET NX EX on all N nodes (e.g., 5 nodes)
# 3. If acquired on β‰₯ 3 nodes AND total elapsed < lock TTL β†’ lock is valid
# 4. If failed β†’ release on all nodes and retry with backoff

# Libraries implement this: redlock-py, redlock-rb, Redlock.net
# Single-node is fine for most cases; Redlock for multi-datacenter critical ops
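Step 3 of the algorithm, the validity check, is worth knowing cold for interviews. A hedged sketch (the drift allowance is a simplification of the algorithm's clock-drift term):

```python
def redlock_valid(acquired, total_nodes, elapsed_ms, ttl_ms, clock_drift_ms=10.0):
    """Lock is valid only with a majority of nodes AND time left on the TTL."""
    majority = total_nodes // 2 + 1
    # Acquisition time plus a drift allowance must fit inside the lock's TTL,
    # otherwise some node's copy may already have expired.
    return acquired >= majority and elapsed_ms + clock_drift_ms < ttl_ms

assert redlock_valid(3, 5, elapsed_ms=50, ttl_ms=10_000)         # 3/5 nodes, fast
assert not redlock_valid(2, 5, elapsed_ms=50, ttl_ms=10_000)     # no majority
assert not redlock_valid(3, 5, elapsed_ms=9_995, ttl_ms=10_000)  # TTL nearly gone
```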

Common Mistakes

Mistake                        Problem                                      Fix
SETNX then EXPIRE separately   Not atomic β€” crash between them = deadlock   Use SET NX EX (single command)
DEL without token check        Can release another worker's lock            Lua check-and-delete
TTL too short                  Lock expires during long operation           Set TTL > max expected work time; use watchdog
No retry backoff               Lock contention β†’ thundering herd            Exponential backoff + jitter on retry
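The last fix in the table, exponential backoff with jitter, is a one-liner worth being able to write on a whiteboard. A sketch using "full jitter" (sleep a random slice of the exponentially growing window; parameter names are illustrative):

```python
import random

def backoff_delays(base_ms=10, cap_ms=1000, attempts=6):
    """Exponential backoff with full jitter for lock-acquisition retries."""
    delays = []
    for attempt in range(attempts):
        window = min(cap_ms, base_ms * (2 ** attempt))   # 10, 20, 40, ... capped
        delays.append(random.uniform(0, window))         # jitter de-syncs retriers
    return delays

delays = backoff_delays()
assert len(delays) == 6
assert all(0 <= d <= 1000 for d in delays)
```

Without the random component, every contender retries at the same instants and the thundering herd simply repeats on a schedule.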
πŸ“ˆ Redis Streams Streaming β–Ύ
Think of it like this: A Redis Stream is like a FedEx conveyor belt at a sorting facility. Packages (messages) arrive with timestamps and IDs. Multiple teams (consumer groups) each grab their own packages from the belt β€” nobody takes the same package twice. Dropped packages are tracked and can be redelivered. Unlike Pub/Sub, packages don't disappear if you're not watching.
Stream "events:orders"

       ID      β”‚ Fields
───────────────┼────────────────────────────────
  1700000001-0 β”‚ action=created, order_id=101
  1700000002-0 β”‚ action=payment, order_id=101
  1700000003-0 β”‚ action=shipped, order_id=101
  1700000010-0 β”‚ action=created, order_id=102
       ↑
  Millisecond timestamp + sequence number (auto-generated)

Producing Messages β€” XADD

# Add a message to the stream
XADD events:orders * action created order_id 101 user_id 42
# * = auto-generate ID (timestamp-sequence)
# Returns: "1700000001-0" (the message ID)

# Add with explicit ID (advanced β€” must be monotonically increasing)
XADD events:orders 1700000001-0 action created order_id 101

# Capped stream (keep only latest 1000 messages)
XADD events:orders MAXLEN 1000 * action shipped order_id 101
XADD events:orders MAXLEN ~ 1000 * action ...   # ~ = approximate cap (faster)
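The "monotonically increasing" rule for IDs follows a simple scheme you can sketch yourself: use the current millisecond with sequence 0, but if the clock has not advanced (or went backwards), keep the last timestamp and bump the sequence. A simplified model of what XADD * does:

```python
def next_stream_id(last_id: str, now_ms: int) -> str:
    """Generate the next auto ID for XADD '*' (simplified): ms timestamp + sequence."""
    last_ms, last_seq = (int(part) for part in last_id.split("-"))
    if now_ms > last_ms:
        return f"{now_ms}-0"
    # Same millisecond, or clock went backwards: IDs must never decrease
    return f"{last_ms}-{last_seq + 1}"

assert next_stream_id("1700000001-0", 1700000002) == "1700000002-0"  # new ms
assert next_stream_id("1700000001-0", 1700000001) == "1700000001-1"  # same ms
assert next_stream_id("1700000005-3", 1700000001) == "1700000005-4"  # clock skew
```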

Reading Messages

# Read by range (XRANGE)
XRANGE events:orders - +            # All messages (- = min, + = max)
XRANGE events:orders 1700000001-0 + # From specific ID to end
XRANGE events:orders - + COUNT 10   # First 10 messages

# Reverse range
XREVRANGE events:orders + - COUNT 5 # Latest 5 messages

# Read new messages from a position
XREAD COUNT 10 STREAMS events:orders 0   # 10 msgs from beginning
XREAD COUNT 10 STREAMS events:orders $   # Only NEW msgs arriving after this command

# Blocking read β€” wait for new messages (like BLPOP for lists)
XREAD BLOCK 0 COUNT 10 STREAMS events:orders $  # Block forever until new msg

# Stream info
XLEN events:orders                  # Total message count
XINFO STREAM events:orders          # Full stream metadata

Consumer Groups β€” Distributed Processing

# Create a consumer group (start from beginning with 0, or $ for new msgs only)
XGROUP CREATE events:orders shipping-group 0
XGROUP CREATE events:orders billing-group  $

# Read as a consumer (each message delivered to only ONE consumer in the group)
XREADGROUP GROUP shipping-group worker-1 COUNT 5 STREAMS events:orders >
# ">" = undelivered messages only

# Acknowledge processing (removes from PEL - Pending Entry List)
XACK events:orders shipping-group 1700000001-0

# Check pending (unacknowledged) messages
XPENDING events:orders shipping-group - + 10

# Reclaim stuck messages (idle > 60s β€” worker crashed)
XAUTOCLAIM events:orders shipping-group worker-2 60000 0

Streams vs Pub/Sub vs Lists

                   Pub/Sub               Lists (BLPOP)        Streams
Persistence        No                    Yes (until popped)   Yes (configurable)
Consumer groups    No                    No                   Yes
Replay old msgs    No                    No                   Yes (by ID/range)
Acknowledgement    No                    No (pop = done)      Yes (XACK)
Best for           Ephemeral broadcast   Simple task queue    Event sourcing, audit logs

What is Redis Interview Prep?

This is a structured, interactive reference covering the 16 Redis topics that appear most frequently in backend, infrastructure, and systems engineering interviews β€” from mid-level to staff engineer. Each topic pairs plain-English explanation with real Redis syntax and real-world scenarios, so you understand why each command exists, not just what it does.

Redis powers the infrastructure of Airbnb, Twitter, Stack Overflow, GitHub, and thousands more. Interviewers at these companies don't just ask "what is Redis?" β€” they probe your mental model: why is Redis single-threaded? When would you pick AOF over RDB? How do you prevent cache stampedes? This guide prepares you for those conversations.

How This Guide Works

Each of the 16 topics appears in an expandable card, all open by default. Use the search bar to jump to any concept instantly β€” type "TTL", "consumer group", "Redlock", or any command like "ZADD". The progress bar fills as you collapse topics you've reviewed. Every code block has a copy button for quick transfer to your Redis CLI or notes.

Each topic follows the same pattern: a real-world analogy to anchor the concept, a visual diagram where helpful, complete command syntax, a comparison table for related options, and a "use this when" summary so you know when to reach for each tool.

Topics Covered in This Guide

  • Fundamentals: Core Architecture & Why Redis is Fast, Keys & Naming Conventions
  • Data Types: Strings (counters, sessions, locks), Lists (queues, stacks, feeds), Hashes (objects, profiles), Sets (unique tracking, social graphs), Sorted Sets (leaderboards, priority queues, rate limiting)
  • Memory Management: TTL, EXPIRE, and all 8 Eviction Policies (LRU, LFU, volatile vs allkeys)
  • Messaging: Pub/Sub for real-time broadcast, Redis Streams for durable event logs with consumer groups
  • Atomicity: MULTI/EXEC Transactions, WATCH for optimistic locking, Pipelines for batching
  • Durability: RDB snapshots vs AOF append-only log vs Hybrid persistence
  • High Availability: Master-Replica Replication, Redis Sentinel for automatic failover
  • Scalability: Redis Cluster with 16,384 hash slots, sharding, MOVED/ASK redirects
  • Architecture: Cache-Aside, Write-Through, Write-Behind patterns, Cache Stampede prevention
  • Concurrency: Distributed Locking with SET NX EX, Lua-based atomic release, Redlock algorithm

Who Should Use This Guide?

  • Backend engineers preparing for system design rounds where Redis is a building block (caching layers, rate limiting, pub/sub)
  • Full-stack developers who use Redis via libraries (Sidekiq, ActionCable, Bull) but need to explain the internals
  • DevOps and SREs responsible for Redis availability who need to articulate persistence, replication, and eviction tradeoffs
  • Senior/staff candidates expected to reason about distributed locking, cluster sharding, and cache consistency at depth

Benefits of Using This Tool

  • Analogy-first: Every topic opens with a real-world analogy β€” understand the concept before the syntax
  • Complete syntax: Every command variant documented with all options, not just the happy path
  • Visual diagrams: ASCII illustrations of data structures, replication topology, cluster sharding, and more
  • Searchable: Find any command or concept in one keystroke β€” no page-hunting
  • Interview-calibrated: Content focuses on what interviewers actually probe: trade-offs, failure modes, and "why" explanations

How to Prepare for a Redis Interview

The biggest mistake candidates make is memorizing commands without understanding trade-offs. Interviewers don't want to hear "use Redis for caching" β€” they want to hear you reason through it: "I'd use cache-aside with a 5-minute TTL here, but to prevent stampede on a high-traffic endpoint I'd add a mutex lock with exponential backoff on cache miss."

Practice with a live Redis CLI. Run redis-cli, create a leaderboard with ZADD, watch keys expire with redis-cli --stat, trigger a MULTI/EXEC and see what happens when WATCH detects a conflict. Hands-on experience makes every answer more concrete and confident in an interview setting.

Finally, understand when not to use Redis: it is not a general-purpose primary database (capacity is bounded by RAM, transactions have no rollback, and there are no complex joins). Knowing its limitations is as impressive as knowing its strengths.
