Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
It's understandable that you want to take your virtualization capabilities to the next level, but like many others here, I don't see the appeal of virtualizing Unraid. I started using Unraid last autumn, and to me it really is about being able to mix drive sizes. It's a backup target for my main server's ZFS pool, so (fingers crossed) I don't even really worry about drive failures on Unraid. (I have double parity on ZFS and single parity on Unraid.)
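For reference, this is roughly what the parity split looks like on the ZFS side; pool and device names are placeholders, and Unraid has no equivalent command since its single parity disk is assigned in the web UI:

```
# Two parity drives (raidz2): any two disks can fail without data loss.
# Placeholder names; in practice you'd use /dev/disk/by-id paths.
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
```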
Anyway, my point is: I started out with 8 SATA slots plus an old USB enclosure that I set to JBOD mode, and that was a pretty stupid idea. Unraid couldn't read SMART data from those USB drives. Every once in a while one of the drives would suddenly show up as having an unsupported partition layout. A couple of weeks ago all 5 drives in the enclosure started showing up as unusable. So as you can imagine, I dropped the enclosure and am now working solely off the 8 internal slots. I'd imagine that virtualizing Unraid's disk access might yield similar issues. At least the comments here remind me of my own janky setup.
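If anyone else is stuck with a USB enclosure, the usual workaround for missing SMART data is telling smartctl which passthrough protocol the bridge speaks; a sketch, with /dev/sdX as a placeholder:

```
# Most USB-SATA bridges answer to the SAT protocol.
smartctl -d sat -a /dev/sdX
# If that fails, chipset-specific device types exist, e.g.:
#   smartctl -d usbjmicron -a /dev/sdX
#   smartctl -d usbcypress -a /dev/sdX
```

No guarantee your enclosure's chipset supports any of them, which is part of why I dropped mine.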
You do make a great point. I'm feeling more inclined to spin up a new rig for Proxmox and leave my Unraid box to do what it's good at, on bare metal, as it is today.
This self-hosting rabbit hole runs scarily deep.
Once you face the (seemingly) inevitable necessity of further hardware purchases, it does become sort of tedious, I must say. I treated my RAID parity as a "backup" for way longer than I'd like to admit, because I didn't want my costs to double. With Unraid I at least don't have the management workload I have on my main box, which runs rolling-release Arch with manually installed ZFS, where the module build always has to line up with the kernel version and all that jazz. Unraid is my deploy-and-forget box. rsync every 24 hours. God bless.
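The deploy-and-forget part is just a cron entry on the main box, something like this (paths and hostname are hypothetical):

```
# /etc/cron.d/unraid-backup on the ZFS box: push the pool to the
# Unraid share every night at 03:00.
0 3 * * * root rsync -aH --delete -e ssh /tank/data/ backup@unraid:/mnt/user/backup/
```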
Proxmox was recommended to me before I switched my main server to Arch, but once I realised it has no native Docker support (it manages VMs and LXC containers, not Docker), I figured I'd rather just do things myself. It really is a matter of preference. It's kind of hard to believe that all the functionality in Proxmox is available for absolutely free.
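For what it's worth, the usual workaround is a VM, or an LXC container with nesting enabled and Docker installed inside it. A rough sketch, with a hypothetical VMID and whatever template is current on your node:

```
# nesting=1 is what lets Docker run inside an unprivileged container.
pct create 200 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname docker-host --memory 4096 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --features nesting=1,keyctl=1 --unprivileged 1
pct start 200
pct exec 200 -- bash -c 'apt-get update && apt-get install -y docker.io'
```

Still an extra layer of indirection compared to running Docker straight on Arch, so I get the preference.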
That's why I built two of my boxes and have them rsync to each other from 2,500 miles apart. My brother was nice enough to let me put the backup box in his garage. I too was mistakenly under the impression that parity was enough to keep my data safe. Once I read some horror stories in the forums, I duplicated my purchase, built an exact replica of my box, and set it up at my brother's house.
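The offsite push is nothing fancy; a single rsync over ssh with a bandwidth cap so the nightly sync doesn't choke my brother's connection (hosts and paths are placeholders):

```
# --partial resumes interrupted transfers; --bwlimit is in KB/s here.
rsync -aH --delete --partial --bwlimit=5000 \
  -e ssh /mnt/user/data/ backup@remote-garage.example.net:/mnt/user/data/
```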
2500 miles sheesh. That shit's nuclear war proof then.