When I rented my first VPS, I started with what I considered the usual hardening steps:
- disable password-based SSH logins
- disable direct root login
- enable UFW with a default deny policy
- allow only the ports I thought I needed
- keep the machine patched and reasonably boring
- run Fail2Ban to slow down repeated failed logins
- and so on.
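For context, here is a minimal sketch of that baseline. It assumes a Debian/Ubuntu-style system; the SSH service name and config defaults may differ on your distro.

```shell
# /etc/ssh/sshd_config (excerpt): key-only auth, no direct root login
#   PasswordAuthentication no
#   PermitRootLogin no
# then reload SSH (the service may be called sshd on some distros):
sudo systemctl reload ssh

# UFW: default deny inbound, allow only what is actually needed
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw enable
```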
At the time, that seemed like a sensible baseline.
I still think it was.
But a baseline is not the same thing as a complete security model.
Also, to be honest: I am a software engineer, not a full-time Linux administrator. I am perfectly happy to use a tool like UFW if it gives me a clear and maintainable way to express intent. If I can avoid hand-managing raw iptables rules, I generally will.
What I learned later is that a server can look reasonably hardened and still expose services in ways that are easy to miss, especially once Docker starts publishing ports.
This post is about that realization, the layered setup I moved to afterwards, and why I came away believing that better infrastructure design was more valuable than writing yet another custom monitoring tool.
The Moment It Clicked
At the time, I was already doing what I considered responsible server hygiene. SSH was locked down, UFW was active, and I assumed that anything I had not explicitly allowed was hidden from the internet.
Then I scanned the server from the outside.
One of the published container ports was open.
That was the moment I realized I had confused "host firewall configured" with "service actually protected".
Up to that point, my mental model had been fairly simple: if UFW says a port is blocked, then in practical terms it should be blocked. In a purely host-centric setup, that assumption is often good enough. Once Docker starts managing networking, it becomes much less reliable.
That was the real lesson. The individual controls were not imaginary. UFW was enabled. SSH really was hardened. The mistake was assuming that those controls, by themselves, described the full runtime behavior of the system.
Where The Gap Came From
From my point of view, the issue was not that UFW was broken.
The issue was that Docker modifies iptables rules directly when ports are published. If you run a container like this:
```shell
-p 8080:8080
```
Docker can make that port reachable even if your UFW policy looks restrictive.
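You can see this directly on the host. A quick way to check, assuming Docker is running (`my-app` below is a placeholder image name):

```shell
# Docker installs NAT rules for published ports in its own chain:
sudo iptables -t nat -L DOCKER -n

# Published container traffic is evaluated in the FORWARD/DOCKER chains,
# not in the INPUT chain where UFW's deny rules live:
sudo iptables -L FORWARD -n --line-numbers

# One common mitigation: bind the published port to loopback only,
# so it is never reachable on a public interface:
docker run -d -p 127.0.0.1:8080:8080 my-app
```

The loopback binding is worth knowing even with an outer firewall in place: it removes the public exposure at the source instead of relying on a later layer to catch it.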
That is what makes this easy to miss: the system appears hardened until you test it from the outside.
In other words, the problem was not a missing firewall. The problem was an incomplete security model. I had protected the host, but I had not fully accounted for how container networking interacts with packet filtering on that host.
And that is exactly the kind of thing I try to avoid operationally: setups where the abstraction looks clean, but a lower layer quietly changes the outcome.
If you spend enough years around production systems, this pattern keeps coming back: the problem is often not a missing tool, but a wrong assumption about where the real boundary actually is.
What I Thought I Had
```mermaid
flowchart TD
    Internet[Internet] --> Host[My VPS]
    Host --> UFW[UFW]
    UFW --> SSH[SSH]
    UFW --> Hidden[Everything Else Hidden]
```
What Was Actually Happening
```mermaid
flowchart TD
    Internet[Internet] --> VPS[My VPS]
    VPS --> UFW[UFW Rules]
    VPS --> Docker[Docker iptables Rules]
    UFW --> SSH[SSH]
    Docker --> DockerPort[Published Docker Port]
    UFW -. expected to block .-> DockerPort
    classDef danger fill:#fee2e2,stroke:#dc2626,color:#991b1b,stroke-width:2px;
    class DockerPort danger;
```
That discovery changed how I thought about security on a small self-hosted server. I stopped asking, "Is my firewall enabled?" and started asking, "What can actually be reached from the outside, and what layers would still fail safely if I made a mistake?"
That shift in thinking led me to a defense-in-depth approach.
My Fix: Defense In Depth
After that, I stopped thinking in terms of a single firewall and started thinking in layers.
My goal was simple:
- block unwanted traffic before it reaches the server
- avoid exposing applications directly where possible
- require identity checks before sensitive tools are even reachable
That led me to a setup with three distinct layers.
I am not claiming this is the best architecture in general. I am only saying that, for my own environment, it made the system easier to reason about and reduced the number of ways I could accidentally expose something.
1. Cloud Firewall First
The first step was moving the perimeter outward.
Instead of relying only on the host firewall, I now use a cloud firewall in front of the VPS with a default DROP policy. In my case, that outer layer is the Contabo Cloud Firewall, but the broader idea is not provider-specific.
Only the traffic I explicitly need is allowed, for example:
- 22 for SSH
- anything else only if it truly needs direct public exposure
The practical benefit is straightforward: even if I accidentally publish a container port locally, the cloud firewall can still block it before it reaches the server.
To me, that is a better failure mode.
It also changes the operational mindset. If I make a mistake in Docker or on the host, the outer layer still has a chance to contain it. At this point in my career, I trust setups that fail safely a lot more than setups that assume I will never make a mistake.
2. Public Apps Through Cloudflare Tunnel
For public-facing apps, I stopped exposing ports directly and moved to Cloudflare Tunnel.
Instead of opening inbound ports, the server creates an outbound connection to Cloudflare through cloudflared.
That means services like these no longer need direct public exposure:
- app-a.example
- app-b.example
- admin.example
This does not eliminate risk, and I would not present it that way. But in my experience it reduces unnecessary public exposure significantly.
It also simplifies the mental model. Rather than asking which web ports need to stay open, I can treat most applications as non-public on the origin and let Cloudflare handle the externally reachable edge.
For a small self-hosted setup, I have found that kind of simplification more valuable than theoretical flexibility I rarely need.
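As a rough sketch of how that looks operationally, assuming `cloudflared` is installed and `my-tunnel` plus the hostnames are placeholders:

```shell
# Authenticate against your Cloudflare account and create a named tunnel
cloudflared tunnel login
cloudflared tunnel create my-tunnel

# Point a public hostname at the tunnel (DNS is managed by Cloudflare)
cloudflared tunnel route dns my-tunnel app-a.example

# The ingress mapping (hostname -> local service) lives in
# ~/.cloudflared/config.yml; then run the connector:
cloudflared tunnel run my-tunnel
```

The important property is that every connection is outbound from the server: no inbound port has to be open for the apps themselves.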
3. Sensitive Paths Behind Cloudflare Access
For sensitive paths and internal tools, I added Cloudflare Access on top.
That means I can protect things like:
- /admin
- internal dashboards
- maintenance tools
- management UIs
Before the application login page is even reachable, I have to pass an identity check first, for example via GitHub or Google.
That gives me an additional security boundary before the application itself is even in play.
I like this approach because it does not replace application security, it complements it. My apps can still have their own authentication, sessions, and authorization rules, but Access adds another gate in front of them. If I misconfigure an app, there is still another layer protecting the entry point.
To me, that kind of layering is far more useful than pretending any single control is definitive.
What mattered to me was not just the extra login screen. The more important shift was architectural: access was no longer decided only by network reachability, but also by identity. That is the part that made the setup feel much more deliberate.
The Layered Model
```mermaid
flowchart TD
    Internet[Internet] --> Web[Web Requests]
    Internet --> Direct[Direct Traffic]
    Web --> CF[Cloudflare Edge]
    CF --> Access[Cloudflare Access]
    Access -->|approved| Tunnel[Cloudflare Tunnel]
    Access -->|denied| Blocked[Blocked]
    Tunnel --> App[Apps / Admin Paths]
    Direct --> CFW[Cloud Firewall]
    CFW -->|allow 22| SSH[SSH]
    CFW -->|drop unexpected ports| Dropped[Dropped]
    SSH --> VPS[My VPS]
```
This is not a perfect architecture, and I would not claim it is universally optimal. But for my own environment, it gave me something I had not really had before: clearer boundaries, fewer exposed entry points, and a much smaller gap between "what I think is happening" and "what is actually reachable from the internet".
If I had to describe the biggest shift in one sentence, it would be this: I stopped thinking only in terms of open ports and started thinking in terms of trust boundaries, identity, and controlled exposure.
Why I Rethought My Tooling
Around the same time, I had started building a small Go tool for myself.
The idea was straightforward:
- scan my own server every 15 minutes
- compare exposed ports against an allowlist
- alert me via Telegram if something unexpected was reachable
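The core check is small. Here is a local approximation of the idea in shell (the real tool was in Go and scanned from the outside; the allowlist below and the Telegram delivery are placeholders):

```shell
# check_ports: read listening ports from stdin (one per line) and print
# any port that is not in the space-separated allowlist passed as $1
check_ports() {
  allowed=" $1 "
  while read -r port; do
    case "$allowed" in
      *" $port "*) ;;                   # expected, ignore
      *) echo "unexpected: $port" ;;    # this line would trigger the alert
    esac
  done
}

# Usage on the host (an external scan variant would feed in nmap output):
# ss -tlnH | awk '{print $4}' | awk -F: '{print $NF}' | sort -un | check_ports "22 80 443"
```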
As a technical exercise, I still think that was a reasonable idea. It would have worked, and it likely would have caught mistakes.
But the more I improved the infrastructure itself, the more I felt I was building a tool to monitor a weakness that I should first try to reduce architecturally.
I did not really want a better alarm for accidental exposure. I wanted less accidental exposure in the first place.
Once the cloud firewall, tunnel, and access policies were in place, that custom tool became much less important:
- if the perimeter rules are correct, accidental exposure becomes less likely
- if apps sit behind a tunnel, fewer ports are exposed publicly
- if admin paths require Access, there is another barrier before the app itself
That is also where the Zero Trust angle became real to me. It stopped being a buzzword and started being a practical design choice: do not expose services broadly and then hope the application login is enough; reduce reachability first, and make identity part of the access path.
For basic uptime checks, Uptime Kuma was already enough.
That was a useful reminder of something I keep relearning in software engineering:
In many cases, better architecture is more valuable than clever code.
Or put differently: the better solution is often not writing smarter monitoring, but removing the conditions that made the monitoring necessary in the first place.
My Current Security Stack
| Purpose | Tool |
| --- | --- |
| Hard perimeter filtering before traffic reaches the VPS | Cloud Firewall |
| Local fallback and host-level filtering | UFW / iptables |
| Reduced public exposure for web applications | Cloudflare Tunnel |
| Identity-based protection for admin paths and internal services | Cloudflare Access |
| Basic uptime and heartbeat monitoring | Uptime Kuma |
None of these layers is magical on its own. In my experience, the value comes from how they complement each other. A local firewall is still useful. A tunnel still needs sane configuration. Access is still only one layer. But together they create a setup that, in my opinion, is more resilient than relying on a single control and assuming it covers everything.
Closing Thoughts
In my opinion, for a small self-hosted setup (or even a mid-complex setup), this kind of layered architecture is a strong baseline.
It is not expensive. It is not overly complicated. And to me it feels much more robust than relying on a single host firewall while publishing Docker ports and hoping the abstractions line up the way I think they do.
If you are running Docker on a VPS and your security story is still "UFW is enabled", I would suggest doing one thing today:
Run an external port scan against your own server.
It is a simple test, but in my experience it tells the truth much faster than assumptions do.
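A minimal version of that test, run from a machine outside your own network (`your-server-ip` is a placeholder):

```shell
# Full TCP port scan; -Pn skips host discovery in case ICMP is filtered
nmap -Pn -p- your-server-ip

# Or spot-check a single port you believe is closed:
nc -vz your-server-ip 8080
```

Scanning from the server itself, or from inside the same network, can give misleading results; the point is to see exactly what the internet sees.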
If you happen to need a VPS for your own project, I can personally recommend Contabo (www.contabo.com).
In my experience, it offers a very good price-to-performance ratio, and I have had a reliable experience with it so far.