I’ve been running my server without a firewall for quite some time now; I have a Piped instance and Snikket running on it. I’ve been meaning to set up UFW but I’ve been too lazy to do so. Is it something I really need, or is going without it a huge security vulnerability? I can only SSH into my server from my local network and have to use a VPN if I wanna SSH in from outside, so I’d say my server’s pretty secure, but not as locked down as it could be. Opinions please?
IMHO, security measures are necessary. I have a tendency to go a bit heavy on security because I really hate having to mop up after a breach. So the more layers I have, the better I feel. Most of the breaches I’ve experienced were not some dude in a smoky, dimly lit room, wearing a hoody, clacking away at a keyboard while confidently announcing ‘I’m in!’ or ‘Enhance!’. Most are bots, by the thousands. The bots are pretty sophisticated nowadays: they can scan for vulnerabilities, probe attack surfaces, and so on. They have an affinity for xmrig too, though those are easy to spot when your server pegs all its resources.
So the couple of days it takes to implement a good, layered security defense, plus the time it takes to monitor those defenses, is worth it to me and lets me sleep better. To each their own. Not only are breaches a pain in the ass, they have serious ramifications and can carry legal consequences, such as when your server becomes a hapless zombie that gets orchestrated into attacking other servers. So even on the selfhosted side of things, security measures are required, I would think.
It takes about 5 minutes to set up UFW, which I’d consider the absolute minimum.
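Something like this is all it takes for a minimal setup (the SSH-only policy is just an assumption matching OP’s situation):

```
# default-deny inbound, allow outbound, open only SSH, then enable
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH        # or: sudo ufw allow 22/tcp
sudo ufw enable
sudo ufw status verbose       # confirm the rules took effect
```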
If it’s just you on your server and the only access from outside your network is SSHing in through the VPN? You’re good. Especially if it’s just you on your network/VPN.
If there are services that others utilize, you need a firewall. Can’t trust other people’s devices to not drag in malware.
I only bind applications that need to be reachable from outside to ports on the internet-facing network interface, and all other ports are closed because nothing is listening on them. A firewall in this case would bring me no further protection from external threats, because all the ports that are in use would have to be open in the firewall too.
But Linux comes with a firewall built in, so I use it even if it isn’t strictly needed with my strict port management regime for my services. And a firewall has the added benefit of limiting outgoing network traffic to only allowed ports/applications.
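Roughly, this is how I audit that nothing unnecessary is listening and then restrict outbound traffic; the allowed outbound ports here are just examples, not a recommendation for every box:

```
# show everything listening for TCP, with the owning process and bind address
sudo ss -tlnp

# optional: default-deny outbound and allow only what the box actually needs
sudo ufw default deny outgoing
sudo ufw allow out 53          # DNS
sudo ufw allow out 80/tcp      # HTTP (package mirrors, ACME, etc.)
sudo ufw allow out 443/tcp     # HTTPS
```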
That depends. If you have exposed services, you could use some firewall features to geoIP-restrict incoming requests, e.g. to cut down on spam from China and Russia and whatnot.
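As a rough sketch of what that can look like with ipset and iptables (assuming you’ve already downloaded a per-country CIDR list; blocked-country-cidrs.txt is a made-up filename):

```
# build a set of networks to block, then drop inbound traffic from them
sudo ipset create geoblock hash:net
while read -r cidr; do
  sudo ipset add geoblock "$cidr"
done < blocked-country-cidrs.txt
sudo iptables -I INPUT -m set --match-set geoblock src -j DROP
```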
If you don’t have any services running on a publicly accessible port, then what would the firewall protect?
One thing that hasn’t been said in this thread is the following: do you trust your router? Do you have an ISP that can probe your router remotely and access it? In those cases, you absolutely need a firewall.
Absolutely. Even if your ISP is firewalling, never trust that they will maintain it, and some of the cheapshit routers they hand out are awful. Use your own router and put it in the ISP router’s DMZ.
You should, yes. I run a firewall (I usually use ufw) on all of my Internet-connected devices, since all of my devices run Linux. There’s not really any good reason not to in 2025.
But is there a good reason to run one on a server? Any port that’s not in use won’t allow traffic in. Any port that’s in use would be added to the firewall exception anyway.
The only reasons I can think of to use a firewall are:
- some services aren’t intended to be accessible - with containers, this is really easy to prevent (see the sketch after this list)
- your firewall also does other stuff, like blocking connections based on source IP (e.g. block Russia and China to reduce automated cyber attacks if you don’t have users in Russia or China)
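To illustrate the container point: publishing a port on the loopback address keeps a service reachable only from the host itself (e.g. behind a local reverse proxy), while the default publish binds to all interfaces. nginx and port 8080 are just stand-in examples:

```
# reachable from anywhere the host is reachable
docker run -d -p 8080:80 nginx

# reachable only from the host itself (127.0.0.1)
docker run -d -p 127.0.0.1:8080:80 nginx
```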
Be intentional about everything you run, because each additional service is a potential liability.
Because it’s easy to accidentally run services, or to set up services temporarily and forget that you left them running. With UPnP able to automatically/dynamically open ports, a firewall is just another layer of protection. You can also configure firewalls to silently ignore packets or to log what gets dropped, and if an application update starts listening on new ports, you’d have to explicitly allow them. Maybe you want one part of an application accessible through the firewall but not another part.
Plus, like you said, country blocking is another feature which I personally think is nice to have, and there are other features too, like being able to throttle connections, especially combined with things like fail2ban.
It’s just another layer of protection, and it ensures that everything you run is deliberate.
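For the throttling and logging side, ufw has both built in; a small sketch (22/tcp is just the obvious example):

```
# rate-limit SSH: ufw blocks an address that opens too many connections in a short window
sudo ufw limit 22/tcp

# log dropped packets so you can see what's knocking
sudo ufw logging on
```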
It honestly depends on how you run things.
If everything is in containers, chances are you’re already getting the benefits of a firewall. For example, with podman or docker, you already explicitly expose ports, which is already a form of firewall. If you’re running things outside of containers, then yeah, I agree with you, there’s too much risk of something opening up a port you didn’t expect.
Everything I run is with podman, which exposes stuff with iptables rules. That’s the same thing a basic firewall does, so adding a firewall is superfluous unless you’re using it to do something else, like geoip filtering.
When in doubt, use a firewall. But depending on the setup, it could be unnecessary.
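A quick way to sanity-check that only the ports you deliberately published are exposed (the image name and port are just example values):

```
# publish only the port you intend to expose
podman run -d --name web -p 8080:80 docker.io/library/nginx

# confirm exactly which host ports ended up published
podman port web
podman ps --format "{{.Names}}\t{{.Ports}}"
```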
I use OpenWRT on my network and each server I have is on its own VLAN, so in my case my router is the firewall for my servers. But getting the local firewalls working is on my todo list as well. As others have said, security is about layers. You want an attacker to have to jump multiple hurdles.
My personal advice: lock it down to permit only what needs access, regardless of how much you trust the network.
Treat each device as if it’s already been compromised and the attacker on it is now trying to move laterally. Example scenario: had you blocked every device except your laptop or phone from reaching your server, your server couldn’t have been hacked by someone pivoting through a compromised cloud-connected HVAC panel.
I lock down everything and grant access only to devices that should have access. Then on top of that, I enable passwords and 2FA on everything as if it were public… Nothing I self host is public. It’s all behind my network firewall and router firewall, and can only be accessed externally by a VPN.
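With ufw that per-device/per-subnet restriction is straightforward; the subnets and ports below are made-up examples, not my actual layout:

```
# with a default-deny inbound policy, allow only the sources that should reach each service
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp    # LAN -> SSH
sudo ufw allow from 10.8.0.0/24 to any port 443 proto tcp      # VPN clients -> web UI
```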
I can recommend Cockpit for managing the firewall.
No
If it’s just one server, you’re probably already running a firewall on the server anyway.
You have a firewall. It’s in your router, and it is what makes it so that you have to VPN into the server. Otherwise the server would be accessible. NAT is, effectively, a firewall.
Should you add another layer, perhaps an IPS or deny-listing? Maybe it’s a good idea.
OP means, as they said, a firewall on the server itself.
> NAT is, effectively, a firewall.
No it isn’t. NAT is address translation; any protection it provides is a side effect of stateful connection tracking, not a filtering policy you control. Stop giving advice on edge security.
Just make sure you’re using public key authentication and you’re good.
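Concretely, something like this (a minimal sketch; the drop-in filename is made up, and the service unit is `ssh` on Debian/Ubuntu, `sshd` on most others - test a second session before closing your current one):

```
# write a drop-in config enforcing key-only auth
sudo tee /etc/ssh/sshd_config.d/50-pubkey-only.conf <<'EOF'
PubkeyAuthentication yes
PasswordAuthentication no
KbdInteractiveAuthentication no
PermitRootLogin prohibit-password
EOF
sudo systemctl reload ssh
```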
You do not even strictly need a port-based firewall when the server is exposed to the internet.
If you configure your software so that nothing has unnecessary open ports on the internet-facing interface, then a port-based firewall provides zero additional security.
A port-based firewall has the benefit that you can lock everything down to the few ports you actually need, and you don’t have to worry about misconfigured software.
For example, something like Docker circumvents UFW anyway. And I know people who had open ports even though they had UFW running.
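That’s because Docker writes its own iptables rules in the FORWARD/DOCKER chains, so UFW’s INPUT rules never see traffic to published container ports. The place Docker supports filtering that traffic is the DOCKER-USER chain; a sketch, where the interface name and subnet are assumptions you’d adapt:

```
# drop traffic to published container ports unless it comes from the local subnet
sudo iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP
```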