this post was submitted on 08 Jul 2023
144 points (100.0% liked)

Selfhosted


I put up a VPS with nginx, and within minutes the logs were showing dodgy requests. How do you guys deal with these?

Edit: Thanks for the tips everyone!

[–] teapot@programming.dev 46 points 1 year ago (1 children)

Anything exposed to the internet will get probed by malicious traffic looking for vulnerabilities. Best thing you can do is to lock down your server.

Here's what I usually do:

  • Install and configure fail2ban
  • Configure SSH to only allow SSH keys
  • Configure a firewall to only allow access to public services, if a service only needs to be accessible by you then whitelist your own IP. Alternatively install a VPN
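On a Debian/Ubuntu box those steps look roughly like this (a sketch, not a drop-in script: ufw is assumed, and 203.0.113.5 is a placeholder for your own IP):

```shell
# fail2ban: sensible defaults out of the box, jails SSH after a few retries
sudo apt install fail2ban

# SSH: keys only, no root login, then restart the daemon
sudo sed -i -e 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' \
            -e 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl restart ssh

# Firewall: deny inbound by default, open only the public services
sudo ufw default deny incoming
sudo ufw allow 22/tcp
sudo ufw allow 80,443/tcp
# Private service: whitelist only your own IP (placeholder address)
sudo ufw allow from 203.0.113.5 to any port 8080 proto tcp
sudo ufw enable
```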
[–] AES@lemmy.ronsmans.eu 14 points 1 year ago (1 children)

I would suggest crowdsec and not fail2ban

[–] ItsGhost@sh.itjust.works 11 points 1 year ago (1 children)

Seconded. Not only is CrowdSec a hell of a lot more resource-efficient (Go vs Python, IIRC), having it download a list of known bad actors for you in advance really cuts down what it needs to process in the first place. I’ve had servers DDoSed just by fail2ban trying to process the requests.

[–] Alfi@lemmy.alfi.casa 3 points 1 year ago* (last edited 1 year ago) (1 children)

Hi,

Reading the thread I decided to give it a go, so I went ahead and configured crowdsec. I have a few questions, if I may. Here's the setup:

  • I have set up the basic collections/parsers (mainly nginx/linux/sshd/base-http-scenarios/http-cve)
  • I only have two services open on the firewall, https and ssh (no root login, ssh key only)
  • I have set up the firewall bouncer.

If I understand correctly, any detected attack will result in the IP being banned via an iptables rule (for a configured duration, 4 hours by default).

  • Is there any added value in running the nginx bouncer on top of that, or any other?
  • cscli hub update/upgrade will fetch new definitions for collections, if I understand correctly. Is there any need to run this regularly (scheduled with, say, a cron job), or does crowdsec do that automatically in the background?
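For anyone following along: `cscli hub update` refreshes the local index of available collections/parsers, and `cscli hub upgrade` pulls new versions of what you have installed. Whether that's already scheduled depends on how crowdsec was packaged, so check `/etc/cron.d/` before adding something like this (the file name here is made up):

```shell
# /etc/cron.d/crowdsec-hub -- hypothetical; skip if your package ships its own
0 4 * * * root cscli hub update && cscli hub upgrade
```

`cscli metrics` and `cscli decisions list` are handy for checking what the engine is actually parsing and banning.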
[–] h3x@kbin.social 43 points 1 year ago* (last edited 1 year ago)

A pentester here. Those bad-looking requests are mostly random fuzzing by bots, and sometimes come from benign vulnerability scanners like Censys. If you keep your applications up to date and your credentials strong, there shouldn’t be much to worry about. Of course, you should review the risks and possible vulns of every web application and other service well before putting them up in public. Search for general server-hardening tips online if you’re unsure about your configuration hygiene.

Another question is: do you need to expose your services to the public at all? If they are purely private or for a small group of people, I’d recommend putting them behind a VPN. WireGuard is probably the easiest one to set up, and so transparent you likely wouldn’t even notice it’s there while using it.
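To illustrate how small a WireGuard setup is, a sketch of the server side (`wg0.conf`; keys and addresses are placeholders you'd generate with `wg genkey`):

```ini
# /etc/wireguard/wg0.conf (server) -- placeholder keys and addresses
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one block per client device
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

Bring it up with `wg-quick up wg0` and it just sits there; clients get a mirror-image config pointing at your server's public IP.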

But if you really want to get rid of just those annoying requests, there’s really good tips already posted here.

Edit. Typos

[–] AngryHippy@lemmy.world 23 points 1 year ago (2 children)

Fail2ban and Nginx Proxy Manager. Here's a tutorial on getting started with Fail2ban:

https://github.com/yes-youcan/bitwarden-fail2ban-libressl

[–] Pete90@feddit.de 3 points 1 year ago

I really wanted to use this and set it up a while ago. It works great, but in the end I had to deactivate it, because my Nextcloud instance would cause too many false positives (404s and such) and I would ban my own IP way too often.
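For what it's worth, fail2ban can be tuned around that in a jail.local fragment (paths assume the Debian package; the addresses in `ignoreip` are placeholders for your own home/VPN ranges):

```ini
# /etc/fail2ban/jail.local
[DEFAULT]
# never ban your own addresses
ignoreip = 127.0.0.1/8 ::1 203.0.113.5

[nginx-http-auth]
enabled  = true
# be more forgiving before banning
maxretry = 10
findtime = 10m
```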

[–] wgs@lemmy.sdf.org 17 points 1 year ago

I mean, it's not a big deal to have crawlers and bots poking at your webserver if all you do is serve static pages (which is common for a blog).

Now if you run code server-side (e.g. using PHP or Python), you'll want to retrieve multiple known lists of bad actors to block them by default, and set up fail2ban to block those that get through. The most important thing, however, is to keep your server up to date at all times.
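A sketch of the blocklist idea (the feed URL is a placeholder -- substitute a real list such as one of the FireHOL aggregates; requires root and the ipset package):

```shell
# Load a published list of bad actors into an ipset,
# then drop everything on it with a single iptables rule
curl -fsSL https://blocklist.example/ips.txt \
  | grep -E '^[0-9.]+(/[0-9]+)?$' > /tmp/badips.txt

ipset create badips hash:net -exist
while read -r net; do ipset add badips "$net" -exist; done < /tmp/badips.txt
iptables -I INPUT -m set --match-set badips src -j DROP
```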

[–] gobbling871@lemmy.world 16 points 1 year ago (1 children)

Nothing too fancy other than following the recommended security practices. And to be aware of and regularly monitor the potential security holes of the servers/services I have open.

Even though it's semi-related, and commonly frowned upon by admins, I have unattended upgrades on my servers and most of my services are auto-updated. If an update breaks a service, I guess it's an opportunity to earn some more stripes.

[–] scrchngwsl@feddit.uk 4 points 1 year ago (1 children)

Why are unattended upgrades frowned upon? Seems like a good idea all round to me.

[–] gobbling871@lemmy.world 4 points 1 year ago (1 children)

Mostly because stability is usually prioritized above all else on servers. There's also a multitude of other legit reasons.

[–] exu@feditown.com 10 points 1 year ago (2 children)

All the legit reasons mentioned in the blog post seem to apply to badly behaved client software. Using a good and stable server OS avoids most of the negatives.

Unattended Upgrades on Debian for example will by default only apply security updates. I see no reason why this would harm stability more than running a potentially unpatched system.
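That default lives in the package's config (the exact origin pattern varies by release, so treat this excerpt as illustrative):

```shell
// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt, Debian)
// Only the security origin is enabled out of the box
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
};
```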

[–] gobbling871@lemmy.world 3 points 1 year ago (1 children)

Even though it's minimal, the risk of security patches introducing new changes to your software is still there, as we all have different ideas of what correct software updates should look like.

[–] exu@feditown.com 3 points 1 year ago

Fair, I'd just rather have a broken system than a compromised one.

[–] orangeboats@lemmy.world 16 points 1 year ago* (last edited 1 year ago) (3 children)

I only expose services on IPv6, for now that seems to work pretty well - very few scanners (I encounter only 1 or 2 per week, and they seem to connect to port 80/443 only).

[–] beppi@sh.itjust.works 5 points 1 year ago (2 children)

Must be nice living in a post-1995 country... there are only 1 or 2 ISPs in Australia that support IPv6...

[–] orangeboats@lemmy.world 3 points 1 year ago (1 children)

Lol, I have heard some ISP horror stories from the Down Under.

I am fortunate enough that my country's government has been forcing ISPs to implement IPv6 in their backbone infrastructure, so nowadays all I have to really do is to flick a switch on the router (unfortunately many routers still turn off IPv6 by default) to get an IPv6 connection.

[–] beppi@sh.itjust.works 2 points 1 year ago (1 children)

Yeah, the internet services here are really stuck in the past. Hard to tell if they're taking advantage of the scarcity of IPv4 addresses to make more money somehow, or if they're just too fuckn lazy.

[–] gardner@lemmy.nz 3 points 1 year ago (1 children)

I’m guessing they’re on CG-NAT and someone upstairs thinks staying on IPv4 reduces customer support costs.

[–] OuiOuiOui@lemmy.world 11 points 1 year ago

I've been using crowdsec with swag for quite some time. I set it up with a discord notifier. It's very interesting to see the types of exploits that are probed and from each country. Crowdsec blocks just like fail2ban and seems to do so in a more elegant fashion.

[–] nik282000@lemmy.ml 9 points 1 year ago

I map them every day.

[–] lemmy@lemmy.nsw2.xyz 8 points 1 year ago
  • Turn off password login for SSH and only allow SSH keys
  • Cloudflare tunnel
  • Configure nginx to resolve the real IPs since it will now show a bunch of Cloudflare IPs. See discussion.
  • Use Fail2ban or Crowdsec for additional security for anything that gets past Cloudflare and also monitor SSH logs.
  • Only incoming port that needs to be open now is SSH. If your provider has a web UI console for your VPS you can also close the SSH port, but that's a bit overkill.
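The real-IP step from the list above, sketched for nginx behind cloudflared (directives are from the standard ngx_http_realip_module; with a tunnel, requests arrive from the local cloudflared process):

```nginx
# nginx: recover the visitor's address from Cloudflare's header
set_real_ip_from 127.0.0.1;
real_ip_header   CF-Connecting-IP;
```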
[–] takeda@kbin.social 8 points 1 year ago (1 children)

I use fail2ban and keep adding detections (for example, I noticed that after I implemented it for SSH, they started using SMTP for brute force, so I had to add that one as well).

I also have another rule that watches the fail2ban log and adds repeat offenders to a long-term blacklist.
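fail2ban actually ships a filter for exactly this: the `recidive` jail watches fail2ban's own log and hands out much longer bans to repeat offenders. A jail.local fragment along those lines:

```ini
# /etc/fail2ban/jail.local
[recidive]
enabled  = true
logpath  = /var/log/fail2ban.log
findtime = 1d
maxretry = 5
bantime  = 1w
```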

[–] Alfi@lemmy.alfi.casa 8 points 1 year ago

sometimes I grab popcorn and "tail -f /var/log/secure"
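And if the popcorn calls for a scoreboard, a tiny helper to rank the noisiest clients in an access log (assumes the client IP is the first field, as in nginx's default combined format):

```shell
# topips: count requests per client IP, busiest first
topips() {
    awk '{print $1}' "$1" | sort | uniq -c | sort -rn | head
}
# usage: topips /var/log/nginx/access.log
```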

[–] swifteh@lemmy.ml 7 points 1 year ago (1 children)

Any service I have that is public facing is proxied through Cloudflare. I run a firewall on the host that only allows traffic from Cloudflare IPs. Those IPs are updated via a cron job that calls this script: https://github.com/Paul-Reed/cloudflare-ufw I also have a rule set up in Cloudflare that blocks traffic from other countries.

For a WAF, I use ModSecurity with nginx. It can be a little time-consuming to set up and weed out false positives, but it works really well once you get it configured properly.
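For anyone heading down the same road: once the ModSecurity v3 nginx connector module is built and loaded, enabling it is just two directives (the rules path here is illustrative):

```nginx
# in the main context: load_module modules/ngx_http_modsecurity_module.so;
modsecurity on;
modsecurity_rules_file /etc/nginx/modsec/main.conf;
```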

Some of my applications are set up with Cloudflare Access. I use this with Azure AD free tier and SAML, but it could be set up with self hosted solutions like authentik.

[–] SeeJayEmm@lemmy.procrastinati.org 5 points 1 year ago (2 children)

Is everyone using Cloudflare?

[–] PhilBro@lemmy.world 2 points 1 year ago

Pretty much, which is strange to see happen in the self-hosted community.

[–] Still@programming.dev 2 points 1 year ago

Cloudflare is sweet. I just switched to them from Google Domains and it feels like a billion options have opened up.

Also the HTTPS security radio buttons, which I always forget to change on new sites.

[–] ichbinjasokreativ@lemmy.world 7 points 1 year ago

Ignore them, as long as your firewall is set up properly.

[–] dinosaurdynasty@lemmy.world 6 points 1 year ago

I use Caddy as a reverse proxy, but most of this should carry over to nginx. I used to use basic_auth at the proxy level, which worked fine(-ish) though it broke Kavita (because websockets don't work with basic auth, go figure). I've since migrated to putting everything behind forward_auth/Authelia which is even more secure in some ways (2FA!) and even more painless, especially on my phone/tablet.

Sadly reverse proxy authentication doesn't work with most apps (though it works with PWAs, even if they're awkward about it sometimes), so I have an exception that allows Jellyfin through if it's on a VPN/local network (I don't have it installed on my phone anyway):

@notapp {
  not {
    header User-Agent *Jellyfin*
    remote_ip 192.168.0.0/24 192.168.1.0/24
  }
}
forward_auth @notapp authelia:9091 {
  uri /api/verify?rd=https://authelia.example
}

It's nice being able to access everything from everywhere without needing to deal with VPNs on Android^ and not having to worry too much about security patching everything timely (just have to worry about Caddy + Authelia basically). Single sign on for those apps that support it is also a really nice touch.

^You can't run multiple VPN tunnels at once without jailbreaking/rooting Android

[–] Dr_Toofing@programming.dev 5 points 1 year ago (2 children)

These requests are probably made by search/indexing bots. My personal server gets quite a lot of these, but they rarely use any bandwidth.
The easiest choice (probably disliked by more savvy users) is to just enable Cloudflare on your server. It won't block the requests, but it will stop anything malicious.
With how advanced modern scraping techniques are, there is only so much you can do. I am not an expert, so take what I say with a grain of salt.

[–] rusty@lemmy.world 2 points 1 year ago (1 children)

Fail2Ban is great and all, but Cloudflare provides such an amazing layer of protection with so little effort that it's probably the best choice for most people.

You press a few buttons and have a CDN, bot attack protection, DDOS protection, captcha for weird connections, email forwarding, static website hosting... It's suspicious just how much stuff you get for free tbh.

[–] AES@lemmy.ronsmans.eu 8 points 1 year ago (3 children)

And you only need to give them your unencrypted data...

[–] waspentalive@lemmy.one 2 points 1 year ago

Legitimate web spiders (for example the crawler Google uses to map the web for search) should respect robots.txt. I think, though, that this only applies to web-based services.
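For completeness, a minimal robots.txt along those lines; only well-behaved crawlers honor it, so it does nothing against the malicious traffic discussed above (the path is a placeholder):

```text
User-agent: *
Disallow: /private/
```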

[–] InEnduringGrowStrong@lemm.ee 5 points 1 year ago (2 children)

I do client SSL verification.
Nobody but me or my household is supposed to access those anyway.
Any failure is a ban (I don't remember for how long).
I also ban every IP not from my country, adjusting that sometimes when I travel internationally.
It's much easier when you host stuff only for your own devices (my case) and not for the larger public (like this Lemmy instance).
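Since this thread is nginx-centric, a sketch of how the client-certificate check looks there (standard ngx_http_ssl_module directives; the CA file is one you'd generate yourself, e.g. with openssl, and the paths are placeholders):

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/certs/server.pem;
    ssl_certificate_key /etc/nginx/certs/server.key;

    # only clients presenting a cert signed by your own CA get through
    ssl_client_certificate /etc/nginx/certs/client-ca.pem;
    ssl_verify_client on;
}
```

Put this on the reverse proxy and it covers every service behind it with a single verification step.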

[–] karlthemailman@sh.itjust.works 4 points 1 year ago (3 children)

How do you have this set up? Is it possible to have a single verification process in front of several exposed services? Like as part of a reverse proxy?

[–] ComptitiveSubset@lemmy.world 2 points 1 year ago (1 children)

That sounds like an excellent solution for web based apps, but what about services like Plex or Nextcloud that use their own client side apps?

[–] BlackEco@lemmy.blackeco.com 5 points 1 year ago

I'm using BunkerWeb which is an Nginx reverse-proxy with hardening, ModSecurity WAF, rate-limiting and auto-banning out of the box.

[–] apigban@lemmy.dbzer0.com 4 points 1 year ago (1 children)

Depends on what kind of service the malicious requests are hitting.

Fail2ban can be used for a wide range of services.

I don't have a public-facing service (except for a honeypot), but I've used fail2ban before on public ssh/webauth/openvpn endpoints.

For a blog, you might be well served by a WAF, I've used modsec before, not sure if there's anything that's newer.

[–] archy@lemmy.world 4 points 1 year ago (1 children)

I use an ACL where I add my home/work IPs, as well as a few commonly used VPN IPs. Cloudflare blocks known bots for me. I don't see anything in the server logs, but I do see attempts on the CF side.

[–] Illecors@lemmy.cafe 4 points 1 year ago

I've implemented bot blocker and some iptables rate limiting.
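The iptables rate-limiting part can be as small as this (a sketch using the stock `recent` match, here for SSH; the thresholds are arbitrary):

```shell
# Track new SSH connections; drop any source opening more than 3 in 60s
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
  -m recent --name ssh --set
iptables -A INPUT -p tcp --dport 22 -m conntrack --ctstate NEW \
  -m recent --name ssh --update --seconds 60 --hitcount 4 -j DROP
```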

[–] DigitalPortkey@lemmy.world 3 points 1 year ago

I stopped messing with port forwarding and reverse proxies and fail2ban and all the other stuff a long time ago.

Everything is accessible for login only locally, and then I add Tailscale (alternative would be ZeroTier) on top of it. Boom, done. Everything is seamless, I don't have any random connection attempts clogging up my logging, and I've massively reduced my risk surface. Sure I'm not immune; if the app communicates on the internet, it must be regularly patched, and that I do my best to keep up with.

[–] SHITPOSTING_ACCOUNT@feddit.de 2 points 1 year ago

Don't have vulnerable shit and ignore them.

Those are just weather.

[–] alibloke@feddit.uk 2 points 1 year ago

Cloudflare tunnel
