this post was submitted on 19 Jul 2023
41 points (97.7% liked)


I use some batch scripts in my Proxmox installation. They run from cron.hourly and cron.daily, checking for viruses and the RAM/CPU load of my LXC containers. An email is sent when a condition is met.
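A minimal sketch of what such a cron script could look like (the load threshold, recipient address, and reliance on a configured mail command are assumptions for illustration, not the exact setup described above):

#!/bin/bash
# Alert if the 5-minute load average exceeds a percentage of available cores.
THRESHOLD=90                      # illustrative threshold, percent
RECIPIENT="admin@example.com"     # illustrative recipient

cores=$(nproc)
load=$(awk '{ print $2 }' /proc/loadavg)
load_pct=$(awk -v l="$load" -v c="$cores" 'BEGIN { printf "%d", (l / c) * 100 }')

if [ "$load_pct" -ge "$THRESHOLD" ]; then
    echo "Load at ${load_pct}% of ${cores} cores on $(hostname)" \
        | mail -s "Proxmox load alert" "$RECIPIENT"
fi

Dropped into /etc/cron.hourly/, this wakes up once an hour and exits immediately when nothing is wrong, so the disk/CPU overhead stays negligible.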

What are your tips or solutions that avoid unnecessary load on disk I/O or CPU time? Let's keep it simple.

Edit: a lot of great input about possible solutions. In addition, TIL that "keep it simple" means a lot of different things to people. 😉

[–] easeKItMAn@lemmy.world 3 points 1 year ago* (last edited 1 year ago) (2 children)

I set up custom bash scripts that collect information (df, docker JSON, smartctl, etc.). They either parse existing JSON output or assemble JSON strings and push them to the Home Assistant REST API via cron (see the sketch after the list below). In Home Assistant the data is turned into sensors and displayed, and HA sends a notification when a sensor reports a failure.
Info served in HA:

  • HDD/SSD (size, smartCTL errors, spin up/down, temperature etc)
  • Availability/health of docker services
  • CPU usage/RAM/temperature
  • Network interface/throughput/speed/connections
  • fail2ban jails
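A minimal sketch of the "push to Home Assistant" step, assuming the standard HA REST API endpoint /api/states/<entity_id>; the URL, token, and sensor name are placeholders, not the commenter's actual configuration:

#!/bin/bash
# Push root filesystem usage to Home Assistant as a sensor.
HA_URL="http://homeassistant.local:8123"        # placeholder
HA_TOKEN="your-long-lived-access-token"         # placeholder

usage=$(df -P / | awk 'NR==2 { gsub("%", ""); print $5 }')

curl -s -X POST "$HA_URL/api/states/sensor.root_disk_usage" \
    -H "Authorization: Bearer $HA_TOKEN" \
    -H "Content-Type: application/json" \
    -d "{\"state\": \"$usage\", \"attributes\": {\"unit_of_measurement\": \"%\", \"friendly_name\": \"Root disk usage\"}}"

On the HA side the value then shows up as sensor.root_disk_usage and can be used in dashboards and alert automations like any other sensor.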

Trying to keep my servers as bare-bones as possible. Additional services/apps put strain on CPU/RAM etc. Found out most of the data needed for monitoring is either already available (docker JSON, smartctl JSON) or can easily be captured, e.g.

df -Pht ext4 | tail -n +2 | awk '{ print $1 }'
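In the same spirit, the "already available" JSON sources could be tapped like this (assuming smartmontools 7+ for the -j flag and a container that defines a healthcheck; the device and container names are placeholders):

# Current drive temperature from SMART data as JSON (smartmontools 7+)
smartctl -j -a /dev/sda | jq '.temperature.current'

# Health status of a container that defines a healthcheck
docker inspect --format '{{ json .State.Health.Status }}' my_container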

It was fun learning and deciding what needs to be monitored and what doesn't, and building a custom interface in HA.

[–] Fermiverse@kbin.social 3 points 1 year ago* (last edited 1 year ago) (1 children)

That's basically the way I do it.

pvesh get /cluster/resources --output-format json-pretty | jq --arg k "lxc/$container_id" -r 'map(select(.id == $k))[].name, map(select(.id == $k))[].mem, map(select(.id == $k))[].maxmem, map(select(.id == $k))[].cpu'

Example using pvesh in Proxmox. The data is already available; you just have to use it. I also prefer the bare-bones approach.
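Taking that one step further, a sketch of how the same pvesh data could feed a simple per-container memory alert (the threshold, recipient, and mail dependency are illustrative assumptions, not part of the setup above):

#!/bin/bash
# Alert when any running LXC container crosses a memory threshold.
THRESHOLD=90                      # illustrative threshold, percent
RECIPIENT="admin@example.com"     # illustrative recipient

pvesh get /cluster/resources --output-format json \
    | jq -r '.[] | select(.type == "lxc" and .maxmem > 0) | [.name, .mem, .maxmem] | @tsv' \
    | while IFS=$'\t' read -r name mem maxmem; do
        pct=$(( mem * 100 / maxmem ))
        if [ "$pct" -ge "$THRESHOLD" ]; then
            echo "$name is using ${pct}% of its memory limit" \
                | mail -s "LXC memory alert" "$RECIPIENT"
        fi
    done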

[–] easeKItMAn@lemmy.world 2 points 1 year ago

At least we keep it simple ;)