Proxy vs Reverse Proxy
Let’s talk about something that’s hiding in plain sight in almost every production setup — proxies.
We throw around terms like “reverse proxy,” “load balancer,” “gateway,” and “forward proxy,” often interchangeably, and before you know it, someone on the team sets up an NGINX config they found on Stack Overflow that works… until it doesn’t. So let’s slow down and get clear on what these things actually are, what problems they solve, and why they matter.
What’s a Proxy?
At the simplest level, a proxy is a middleman. It sits between your client (like your browser) and the internet.
Think of it like this:
You → Proxy → Internet
You send a request to the proxy. The proxy forwards that request to the destination server. The server responds, and the proxy sends the response back to you.
Why would you do this?
- To hide your IP address (common in corporate or school networks)
- To filter traffic (e.g. block certain websites or content)
- To cache responses and improve speed
- To log or monitor outgoing traffic
This is called a forward proxy (or just "proxy"). It sits on the client's side and acts on the client's behalf.
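On the client side, using a forward proxy is usually just a matter of pointing your HTTP library at it. Here's a minimal Python sketch — the proxy address is a made-up placeholder, not a real host:

```python
import urllib.request

# Hypothetical forward proxy on your network -- replace with a real one.
PROXY = "http://proxy.internal.example:8080"

# Route this client's HTTP and HTTPS traffic through the proxy.
handler = urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
opener = urllib.request.build_opener(handler)

# opener.open("https://example.com") would now reach the site via the
# proxy, which sees (and can log, cache, or block) the request.
```

Most tools respect the same idea via the `http_proxy` / `https_proxy` environment variables, so you often don't even need code changes.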
What’s a Reverse Proxy?
Now flip that flow.
A reverse proxy sits in front of your server and handles incoming requests on its behalf.
So instead of the client talking directly to your backend, the reverse proxy handles it:
Client → Reverse Proxy → Backend
This is where things get really interesting in system design.
Use cases?
- Load balancing — You’ve got 5 backend instances? The reverse proxy can distribute traffic across them.
- SSL termination — Let NGINX handle HTTPS, and keep your backend logic clean.
- Caching — Reverse proxies like Varnish can cache responses to save your app from doing repeated work.
- Security — Want to throttle requests or block certain IPs? Do it here.
- URL rewriting / routing — Your client hits `/api`, and your reverse proxy knows to route that to the actual service running on port 3001 or whatever.
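Several of these use cases can live in one short NGINX config. This is a sketch, not a production setup — the backend IPs, port, and rate limit are all placeholders:

```nginx
# Minimal sketch -- hostnames, ports, and limits are placeholders.
http {
    # Load balancing: distribute /api traffic across backend instances.
    upstream api_backends {
        server 10.0.0.1:3001;
        server 10.0.0.2:3001;
        server 10.0.0.3:3001;
    }

    # Security: allow each client IP at most 10 requests per second.
    limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

    server {
        listen 80;

        # Routing: requests to /api go to the pool above.
        location /api/ {
            limit_req zone=per_ip burst=20;
            proxy_pass http://api_backends;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```

By default NGINX round-robins across the `upstream` servers, so adding a backend instance is a one-line change here instead of a client-side change.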
Why Not Just Let Clients Talk Directly to the Backend?
Because that’s how you end up with:
- Tight coupling between clients and backend services
- Painful migrations when endpoints or ports change
- Security headaches because now everything’s exposed
- SSL overhead on every backend node
Basically, you want a single entry point. That’s your reverse proxy. It gives you control, observability, and flexibility.
Still Confused?
Let’s anchor it with a classic example.
You're building a web app. Here’s what a typical flow might look like without a reverse proxy:
Client → https://app.example.com (Node.js server)
Now you want to add:
- An authentication service
- A separate API backend
- A static assets server
- TLS support
Suddenly you’re running a handful of services, and you don't want your users to worry about which one they’re hitting.
Here’s where a reverse proxy shines:
Client → NGINX (reverse proxy)
├── /api → API service
├── /auth → Auth service
└── /static → Static assets
Your backend services stay clean, and your client just talks to one entry point.
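That routing tree translates almost line-for-line into an NGINX server block. Again, a sketch under stated assumptions — the ports and certificate paths below are invented for illustration:

```nginx
# Sketch only -- ports and certificate paths are placeholders.
server {
    listen 443 ssl;
    server_name app.example.com;

    # TLS terminates here; backends speak plain HTTP internally.
    ssl_certificate     /etc/nginx/certs/app.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/app.example.com.key;

    # /api -> API service
    location /api/ {
        proxy_pass http://127.0.0.1:3001/;
    }

    # /auth -> Auth service
    location /auth/ {
        proxy_pass http://127.0.0.1:3002/;
    }

    # /static -> serve assets straight from disk, no backend hop.
    location /static/ {
        root /var/www/app;
    }
}
```

Note that only NGINX needs the certificate — none of the backend services have to know TLS exists.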
In Production, You’re Already Using One
If you’re deploying anything behind:
- NGINX
- HAProxy
- Envoy
- Apache HTTP Server
- Cloudflare / AWS ALB / GCP Load Balancer
…you’re using a reverse proxy, whether you realize it or not.
They’re everywhere, and for good reason — they help you scale, secure, and simplify your app delivery.
Final Thoughts
If you’re just getting started with infra or deploying apps, it’s easy to treat proxies as black boxes. Copy the config, pray, ship.
But take the time to understand how proxies and reverse proxies work — they are critical components in real-world software architecture.
They’re not just routing requests; they’re shaping how your app scales, how secure it is, and how easy it is to operate.
And once you know where they fit, you'll start designing systems with much more confidence — not just throwing services into the cloud and hoping for the best.
Need help thinking through how your app’s traffic should flow? I offer architecture reviews and feedback sessions. No fluff, just real-world debugging and design thinking.