Redis Caching Interview Questions

February 26, 2026 · By Surya Singh · Redis • Caching • Backend • Interview

Redis caching interview questions — cache strategies, invalidation, distributed cache patterns.


Key Takeaways

  1. Redis is an in-memory key-value store used for caching, sessions, rate limiting, and pub/sub.
  2. Cache strategies: Cache-Aside (app manages), Write-Through, Write-Behind; each has trade-offs.
  3. Eviction policies: LRU, LFU, TTL; configurable in Redis.
  4. Cache invalidation is hard: use TTL, event-driven invalidation, or versioned keys.

The questions below are commonly asked in technical interviews. Each answer is written to help you understand the concept clearly and explain it confidently. Focus on understanding the "why" behind each answer—that is what interviewers care about.

Interview Questions & Answers

What is Redis and when should I use it for caching?

Redis is an in-memory data store that supports strings, hashes, lists, sets, and sorted sets. It is extremely fast (sub-millisecond) and is commonly used for caching (store DB results in memory to avoid repeated DB hits), session storage, rate limiting counters, and leaderboards. Use Redis when you have read-heavy data that can be recomputed or fetched from the source, and when the cost of a cache miss (a DB or API call) is acceptable. It reduces load on the primary database.
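As a small illustration of the rate-limiting use case, here is a fixed-window counter sketch. It assumes StackExchange.Redis with an `_redis` IDatabase field (the same convention as the cache-aside code later in this article); `AllowRequestAsync` and the `rate:` key prefix are hypothetical names for illustration.

// Fixed-window rate limiter sketch: one counter per client per window
async Task<bool> AllowRequestAsync(string clientId, int limit) {
    var key = $"rate:{clientId}";
    var count = await _redis.StringIncrementAsync(key);  // atomic INCR
    if (count == 1)  // first hit in this window: start the 1-minute clock
        await _redis.KeyExpireAsync(key, TimeSpan.FromMinutes(1));
    return count <= limit;
}

Because INCR is atomic, concurrent requests never lose counts; the TTL makes the window reset itself with no cleanup job.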

What is the cache-aside pattern?

In cache-aside, the application manages the cache. On a read: check the cache first; if hit, return. If miss, fetch from the database, store in the cache, return. On a write: update the database, then either invalidate the cache (delete the key) or update it. The application is responsible for consistency. Pros: cache only contains what is requested, simple. Cons: cache misses cause DB load (thundering herd if many requests miss at once—mitigate with a lock or probabilistic early expiration).

// Cache-aside with Redis + DB (StackExchange.Redis + EF Core)
async Task<User?> GetUserAsync(string id) {
    var cacheKey = $"user:{id}";
    var cached = await _redis.StringGetAsync(cacheKey);
    if (cached.HasValue)                        // cache hit: skip the DB
        return JsonSerializer.Deserialize<User>((string)cached!);
    var user = await _db.Users.FindAsync(id);   // cache miss: go to the DB
    if (user != null)
        await _redis.StringSetAsync(cacheKey,
            JsonSerializer.Serialize(user), TimeSpan.FromMinutes(15)); // 15-min TTL
    return user;
}
async Task UpdateUserAsync(User user) {
    _db.Users.Update(user);                     // EF Core: mark modified...
    await _db.SaveChangesAsync();               // ...then persist
    await _redis.KeyDeleteAsync($"user:{user.Id}");  // invalidate after commit
}

What is cache invalidation and why is it hard?

When the underlying data changes, the cache may hold stale data. Invalidation is deciding when to remove or update cached entries; as Phil Karlton's famous quip goes, "There are only two hard things in Computer Science: cache invalidation and naming things." Options: TTL (time-to-live), where entries expire after a fixed time; event-driven invalidation, where every DB update also invalidates or updates the corresponding cache entry; and versioned keys, where the key includes a version number that is bumped on change. Each has trade-offs: TTL can serve stale data until expiry; event-driven requires extra wiring and can miss writes that bypass the application; versioned keys leave orphaned entries that consume memory until evicted.
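The versioned-key option can be sketched as follows. This reuses the `_redis` IDatabase convention from the cache-aside example; the helper names and the `ver:` key prefix are hypothetical.

// Versioned-key invalidation sketch: readers build keys from the
// current version; a bump makes all old keys unreachable at once.
async Task<string> GetVersionedKeyAsync(string entity, string id) {
    var version = await _redis.StringGetAsync($"ver:{entity}");
    var v = version.HasValue ? (string)version! : "0";
    return $"{entity}:v{v}:{id}";               // e.g. "user:v3:42"
}
async Task InvalidateAllAsync(string entity) {
    // One INCR invalidates every cached entry for this entity type;
    // orphaned old-version keys age out via their TTLs.
    await _redis.StringIncrementAsync($"ver:{entity}");
}

Note the memory trade-off mentioned above: nothing is deleted eagerly, so stale versions linger until their TTLs expire or eviction reclaims them.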

What is the thundering herd problem and how do I prevent it?

When a popular cache key expires, many requests simultaneously get a miss and all hit the database at once—a thundering herd. Fixes: (1) Lock: only one request fetches; the others wait and then read from the cache. (2) Probabilistic early expiration: as the TTL nears, each request has a small, increasing probability of refreshing the entry early, spreading refreshes over time. (3) Stale-while-revalidate: serve the stale value immediately and refresh in the background. (4) Distributed lock (Redis SET NX): ensures only one process fetches across all servers.
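The distributed-lock fix can be sketched like this, again assuming the `_redis` IDatabase from the earlier examples. `GetWithLockAsync` and `LoadFromDbAsync` are hypothetical names; the lock TTL and retry delay are illustrative values you would tune.

// Thundering-herd guard: SET NX ensures only one caller rebuilds the entry
async Task<string?> GetWithLockAsync(string key) {
    var cached = await _redis.StringGetAsync(key);
    if (cached.HasValue) return cached;          // hit: no lock needed
    var lockKey = $"lock:{key}";
    // SET lock:{key} 1 NX EX 10 — exactly one caller acquires the lock;
    // the 10s expiry prevents a crashed holder from blocking forever.
    var acquired = await _redis.StringSetAsync(lockKey, "1",
        TimeSpan.FromSeconds(10), When.NotExists);
    if (!acquired) {
        await Task.Delay(100);                   // losers wait briefly...
        return await _redis.StringGetAsync(key); // ...then retry the cache
    }
    try {
        var value = await LoadFromDbAsync(key);  // winner rebuilds the entry
        await _redis.StringSetAsync(key, value, TimeSpan.FromMinutes(15));
        return value;
    } finally {
        await _redis.KeyDeleteAsync(lockKey);    // release the lock
    }
}

A production version would also verify lock ownership before deleting (e.g., store a unique token and release via a Lua script), so one process cannot release another's lock.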

When should I use Write-Through vs Write-Behind caching?

Write-Through: every write goes to the cache and the DB. Reads always hit cache. Consistency is strong; write latency is higher. Write-Behind (Write-Back): write goes to the cache immediately; the DB is updated asynchronously. Lower write latency, but risk of data loss if the cache fails before the write is persisted. Use Write-Through when consistency matters (e.g., user profile). Use Write-Behind for high write throughput when eventual consistency is acceptable (e.g., analytics, activity logs).
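A minimal Write-Through sketch, under the same `_redis`/`_db` assumptions as the cache-aside example (`SaveUserAsync` is a hypothetical name):

// Write-through: every write updates the DB and the cache together
async Task SaveUserAsync(User user) {
    _db.Users.Update(user);
    await _db.SaveChangesAsync();               // persist to the DB first
    await _redis.StringSetAsync($"user:{user.Id}",
        JsonSerializer.Serialize(user),
        TimeSpan.FromMinutes(15));              // then refresh the cache
}

Contrast this with the cache-aside `UpdateUserAsync` above, which deletes the key instead: write-through keeps the cache warm at the cost of serializing on every write, while invalidation defers that work to the next read.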


Surya Singh


Azure Solutions Architect & AI Engineer

Microsoft-certified Azure Solutions Architect with 8+ years in enterprise software, cloud architecture, and AI/ML deployment. I build production AI systems and write about what actually works—based on shipping code, not theory.

  • Microsoft Certified: Azure Solutions Architect Expert
  • Built 20+ production AI/ML pipelines on Azure
  • 8+ years in .NET, C#, and cloud-native architecture