
Building my first Redis-powered feature


When I started doing backend development, I was building the authentication module for a project. The flow was straightforward: the frontend sent a JWT, the backend verified and decoded it, extracted the user id, and then queried the database to confirm the user existed.

I ended up building three auth APIs: one for signup, one for login, and one for Google OAuth (login/signup). This post is about the login side—and more specifically, what I learned after it “worked.”

A couple of days in, I noticed a pattern: every time I hit a protected API, the backend repeated the same routine. JWT comes in → verify/decode → extract user id → query the database → respond. And it wasn’t a one-time thing. It happened on every request that required authentication.

That’s when I paused and asked myself: can this be faster? Instead of going to the database every time and searching through thousands of users, can I keep the frequently needed data somewhere quicker?

Coming from a frontend background, my first instinct was almost funny in hindsight: “What if I store the user in localStorage?” Then it clicked—there’s no localStorage on a server. That’s a browser feature, not something your backend can rely on.

While exploring better options, I came across Redis and the idea of in-memory caching. The promise was simple: keep hot data in RAM, fetch it in microseconds, and avoid unnecessary database queries. I dug into the docs and implemented it step by step.

Here’s what I changed: on login (and on subsequent authenticated requests), after verifying and decoding the JWT, I’d take the user id and check Redis first. If Redis had the user (or the minimal data I needed), I could skip the database call. If Redis didn’t have it, only then would I query the main database, and immediately store the result in Redis with a TTL so future requests could be served faster.
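This pattern is usually called cache-aside. Here is a self-contained sketch of it: a `Map` stands in for Redis so the snippet runs anywhere, and the key format, TTL, and `queryUserFromDb` are illustrative assumptions. With a real client such as `node-redis`, the two cache operations would be `redis.get(key)` and `redis.set(key, value, { EX: ttlSeconds })`.

```javascript
const cache = new Map(); // stand-in for Redis: key -> { value, expiresAt }
const TTL_MS = 60_000;   // assumption: cache entries live for 60 seconds

let dbHits = 0; // counter just to show how many requests reach the database

// Placeholder for the real users-table query.
async function queryUserFromDb(userId) {
  dbHits++;
  return { id: userId, name: 'alice' };
}

async function getUser(userId) {
  const key = `user:${userId}`;

  // 1. Check the cache first.
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value; // cache hit: skip the database entirely
  }

  // 2. Cache miss: fall back to the database...
  const user = await queryUserFromDb(userId);

  // 3. ...and store the result with a TTL so the next request is served from memory.
  cache.set(key, { value: user, expiresAt: Date.now() + TTL_MS });
  return user;
}
```

The TTL matters: without it, a user record updated in the database could be served stale from the cache indefinitely, so expiry (or explicit invalidation on update) bounds how out-of-date a cached entry can be.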

After doing this, I could clearly see the difference. The system stopped hitting the database for the same repeated lookups, and overall latency dropped noticeably (in my case, roughly half for those paths).

The bigger lesson for me wasn’t just “Redis is fast”—it was understanding why the bottleneck existed and how caching fits into backend thinking: reduce repeated expensive operations, and use the database for what it should do best, not for the same lookups again and again.
