this post was submitted on 12 Oct 2024
30 points (100.0% liked)

technology


Hello again! Ya'll are my last hope for help! I've posted to both the proxmox forums and r/proxmox and nobody's responding.

Here's the deal: I built this home media server about a year ago. It took some time to work out the bugs, but I got TrueNAS Scale and Jellyfin running on it and started filling it up. A few weeks ago TrueNAS started freezing up; it would work for a little while after a restart, but then it stopped working altogether. I poked around and found that some sort of new EFI boot entry needed to be set up; I followed the fix and it worked. A few days later Jellyfin froze and I couldn't access the pve GUI or anything, so I did a hard reset. Now Proxmox can't launch pve at all, let alone the GUI.

So I've been poking around and found that the drives are at 100% usage, and inodes are at 100% usage (see pic; disk usage is the same % as the inode usage). Digging deeper, I tried to find the offending folder in /rpool/ROOT/pve-1, but there are no deeper directories listed. So I drilled down into the other big one, /subvol-100-disk-0; that led me to a Jellyfin metadata library folder with a bunch of small files using up <250 inodes each. I've searched all over and haven't been able to figure out what I could delete to at least get pve up and running, and then work towards... idk, migrating to a new larger drive? Or setting up something to automatically clear old files?

At any rate, I'm running 2 old 512 GB laptop drives for all the OSes on the server, set up as a ZFS mirror.

PS: Come to think of it, I've had to expand the size of the virtual drive for my jellyfin LXC multiple times now to get the container to actually launch. Seems I know just enough to get myself into trouble.

Someone, please help me right my pirate ship! pirate-jammin

[–] Edie@hexbear.net 9 points 1 month ago* (last edited 1 month ago) (1 children)

/rpool/ROOT/pve-1 is in fact not the offending dir; it's your drive itself, or rather the ZFS pool.

To find the offending dir run du, e.g. du -x -d1 -h /
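For example (a sketch; the sort -h on the end is just an optional extra to order the output by size):

du -x -d1 -h / | sort -h    # -x: stay on this filesystem, -d1: one level deep, -h: human-readable sizes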

[–] tactical_trans_karen@hexbear.net 4 points 1 month ago* (last edited 1 month ago) (1 children)

Thanks for chiming in! I ran the command, but added "*" after the slash and "| less" to get a more readable printout. It looks like /var/lib (FUCKING LIBS!!!) is the culprit, taking up 415 GB. What do?
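Roughly what that looks like (a sketch of the variant described):

du -x -d1 -h /* | less    # the * expands to each top-level directory; less pages the output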

[–] Edie@hexbear.net 6 points 1 month ago (1 children)

Run du in /var/lib to find the offending dir there

[–] tactical_trans_karen@hexbear.net 4 points 1 month ago (1 children)

And I assume follow that rabbit hole to the bottom? Brb.

[–] Edie@hexbear.net 4 points 1 month ago* (last edited 1 month ago) (1 children)

Yup. It should at least help you figure out what is taking up the space. What to do after that is another question.

[–] tactical_trans_karen@hexbear.net 4 points 1 month ago (1 children)

Okay, found the problem files. They're in /var/lib/vz/dump/. I don't know what vzdump files are, but they have qemu and lxc names in the file names, with dates and times: a whole bunch of .log, .tar, .zst, and .notes files, and combinations of those. A lot of them are taking up multiple GB each, and it's a long list.

[–] Edie@hexbear.net 4 points 1 month ago* (last edited 1 month ago) (1 children)

Searching for the path usually leads to some good answers.

That's the proxmox backup dir.

[–] tactical_trans_karen@hexbear.net 4 points 1 month ago (1 children)
[–] Edie@hexbear.net 6 points 1 month ago* (last edited 1 month ago) (4 children)

Run ls -l and see how old the oldest ones are, then start deleting the oldest to free up some space (roughly as sketched below). That's what I'd do, at least. From there, I'd adjust Proxmox's backup settings.

Edit: also, the .log files might be less important, assuming they really are just log files?
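A minimal sketch of that, assuming the dumps are in /var/lib/vz/dump (the filename in the rm line is a made-up example; substitute a real one from the listing):

ls -ltrh /var/lib/vz/dump/    # oldest files first, with dates and human-readable sizes
rm /var/lib/vz/dump/vzdump-lxc-100-2023_10_01-03_00_00.tar.zst    # example name only: delete a real old dump
# Longer term, the backup job's retention settings (keep-last etc.) can prune old dumps automatically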

[–] tactical_trans_karen@hexbear.net 5 points 1 month ago (1 children)

I got pve up and running!!! THANK YOU!!! stalin-heart

[–] Edie@hexbear.net 4 points 1 month ago (1 children)
[–] tactical_trans_karen@hexbear.net 4 points 1 month ago (1 children)

Watching my stories with my partner now thanks to you! Did a good pruning of my backups and freed up over half the drive space. Where'd you learn how to nerd?

[–] Edie@hexbear.net 4 points 1 month ago (1 children)

Mostly myself. My father helped me make my first website when I was... 10? So I've been doing computer touching for a good decade at least.

Cool, thanks again!

[–] RedWizard@hexbear.net 4 points 1 month ago (1 children)

Yeah, if these are backups I'd start deleting the oldest files. At some point you'll maybe want to find a way to move the oldest backups off-site before you delete them. Maybe rclone to a Google Drive or something similar, just so you have an off-site copy. But that might be getting ahead of ourselves here. This is the kind of task ready and waiting to be automated, though.
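A minimal sketch of that, assuming an rclone remote named gdrive has already been set up with rclone config (the remote and folder names are placeholders):

rclone copy /var/lib/vz/dump gdrive:proxmox-backups    # copies the dumps to the remote without deleting the local files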

I'm hoping so! Once I get pve actually running I want to offload most of them before starting up the VMs again.

[–] tactical_trans_karen@hexbear.net 4 points 1 month ago (1 children)

Update! I mixed up the _ and - in the file names! I'm deleting a few now!

[–] Edie@hexbear.net 4 points 1 month ago* (last edited 1 month ago) (1 children)

It sounds like you are typing out the path by hand. You can auto complete paths using the tab key.

Also, you can go back and forward through your command history using the up and down arrow keys.
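Globbing is another way to avoid typing long names exactly (the prefix below is just an example; match it to your own files):

ls /var/lib/vz/dump/vzdump-lxc-*    # check the exact spelling before passing a name to rm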

I knew the second one, but not the first, thanks for the tip!!

Okay, I'm trying to, but it's telling me 'no such file or directory'. I tried rm from inside the directory itself, and I tried giving it the full path. I double-checked and everything looks correct; is there something else that could return that error?

[–] heatenconsumerist@hexbear.net 5 points 1 month ago (1 children)

As shitty as it is, I would ask the folks over on the shitty-lib-lemmy, dbzer0

https://lemmy.dbzer0.com/

Already solved tho! Honestly, hexbear has been a solid tech help community. People on official help channels couldn't be bothered. A comrade in this thread got me fixed up in under an hour.

[–] Dragonish@lemmy.dbzer0.com 5 points 1 month ago (2 children)

Are you using ZFS snapshots at all? I have seen similar symptoms with automatic snapshotting that fills a disk, which then becomes read-only. This command will show all snapshots: zfs list -r -t snapshot
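If snapshots do show up, pruning one looks roughly like this (the dataset and snapshot names are made-up examples):

zfs list -r -t snapshot -o name,used    # snapshots plus the space each one is holding
zfs destroy rpool/data/subvol-100-disk-0@autosnap_2024-01-01    # example name only: remove a specific old snapshot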

Just ran that command, it said "no datasets available".

I... think? I'm not entirely clear on what snapshots are, but I think I turned them on by following the tutorials.