You shouldn't have abysmal performance with ZFS. Something must be up.
What's up is ZFS. It is solid, but the architecture is very dated at this point.
There are about a hundred different settings I could try changing, but at some point it's easier to go to btrfs, which works out of the box.
One day I had a power outage and I wasn't able to mount the btrfs system disk anymore. I could mount it from another Linux system, but I couldn't boot from it anymore. I was very pissed; I lost a whole day of work.
Don't use btrfs if you need RAID 5 or 6.
The RAID56 feature provides striping and parity over several devices, same as the traditional RAID5/6. There are some implementation and design deficiencies that make it unreliable for some corner cases and the feature should not be used in production, only for evaluation or testing. The power failure safety for metadata with RAID56 is not 100%.
https://btrfs.readthedocs.io/en/latest/btrfs-man5.html#raid56-status-and-recommended-practices
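For anyone evaluating it anyway, those same docs recommend keeping metadata on a more robust profile than the data. A rough sketch (device names are placeholders):

```bash
# Evaluation/testing only, per the warning above.
# Data striped with parity (raid5), but metadata mirrored (raid1):
mkfs.btrfs -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd
```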
I've been using single-disk btrfs for my rootfs on every system for almost a decade. Great for snapshots while still being an in-tree driver. I also like being able to use subvolumes to treat / and /home (maybe others) similar to separate filesystems without actually being different partitions.
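For anyone curious, here's a minimal sketch of one common layout (subvolume names like @ and @home are just a convention, not a requirement):

```bash
# Create subvolumes on the top-level filesystem:
mount /dev/sda2 /mnt
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home

# /etc/fstab then mounts each subvolume at its own mount point,
# all from the same partition:
#   /dev/sda2  /      btrfs  subvol=@      0 0
#   /dev/sda2  /home  btrfs  subvol=@home  0 0
```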
I had used it for my NAS array too, with btrfs raid1 (on top of luks), but migrated that over to ZFS a couple years ago because I wanted to get more usable storage space for the same money. btrfs raid5 is widely reported to be flawed and seemed to be in purgatory of never being fixed, so I moved to raidz1 instead.
One thing I miss is heterogeneous arrays: with btrfs I can gradually upgrade my storage one disk at a time (without rewriting the filesystem) and it uses all of my space. For example, two 12TB drives, two 8TB drives, and one 4TB drive add up to 44TB, and raid1 cuts that in half to 22TB of effective space. ZFS doesn't do that. Before I could migrate to ZFS I had to commit to buying a bunch of new drives (5x12TB, not counting the backup array) so that every drive was the same size, and I had to feel confident it would be enough space to last me a long time, since growing it after the fact is a burden.
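For reference, the btrfs one-disk-at-a-time upgrade is just two commands (mount point and device are placeholders):

```bash
# Add the new, larger disk to the existing array:
btrfs device add /dev/sdf /mnt/array
# Rebalance so existing data is spread across all disks:
btrfs balance start --full-balance /mnt/array
```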
With version 2.3 (currently in RC), ZFS will at least support RAIDZ expansion. That should already help a lot for a NAS use case.
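If I'm reading the 2.3 release notes right, expansion is a single attach onto the existing raidz vdev; a hedged sketch (pool, vdev, and device names are placeholders):

```bash
# Grow an existing raidz1 vdev by one disk (OpenZFS 2.3+):
zpool attach tank raidz1-0 /dev/sdf
# Watch the expansion progress:
zpool status tank
```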
Btrfs RAID 10 is reportedly stable.
One time I had a power outage and one of the btrfs HDDs (not in a RAID) couldn't be read anymore after reboot. Even with help from the (official) btrfs mailing list, it was impossible to repair the file system. After a lot of low-level tinkering I was able to retrieve the files, but the file system itself was absolutely broken; no repair process was possible. I've since switched to ZFS; the emergency options are much more capable.
Was that less than 2 years ago? Were you using kernel 5.15 or newer?
A bit off topic; am I the only one that pronounces it "butterface"?
Not anymore.
You son of a bitch, I'm in.
Ah feck. Not any more.
Not proxmox-specific, but I've been using btrfs on my servers and laptops for the past 6 years with zero issues. The only times it's bugged out were due to bad hardware, and having the filesystem shout at me to make me aware of that was fantastic.
The only place I don't use btrfs is for my NAS data drives (since I want raidz2, and btrfs raid5 is hella shady), but the NAS rootfs is btrfs.
No reason not to. Old reputations die hard, but it's been many many years since I've had an issue.
I also like that btrfs is a lot more flexible than ZFS, which is pretty strict about the size and number of disks, whereas you can upgrade a btrfs array ad hoc.
I'll add that you should avoid RAID5/6, as it's still not considered safe, but you mentioned RAID1, which has no issues.
I've been vaguely planning on using btrfs in raid5 for my next storage upgrade. Is it really so bad?
Check the status here. It looks like it may be a little better than in the past, but I'm not sure I'd trust it.
An alternative approach I use is mergerfs + snapraid + snapraid-btrfs. This isn't the best idea for a system drive, but for something like a NAS it works well, and snapraid-btrfs doesn't have the write-hole issues that normal snapraid does, since it operates on read-only snapshots instead of raw data.
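For the curious, a minimal sketch of what that setup looks like; all paths and drive names here are hypothetical:

```bash
# /etc/snapraid.conf: one parity disk protecting two data disks.
#   parity  /mnt/parity1/snapraid.parity
#   content /var/snapraid/snapraid.content
#   data d1 /mnt/disk1
#   data d2 /mnt/disk2

# /etc/fstab: pool the data disks into one mount with mergerfs.
#   /mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  defaults,category.create=mfs  0 0
```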
If it didn't give you problems, go for it. I've run it for years and never had issues either.
Meh. I run Proxmox and other boot drives on ext4, and data drives on XFS. I don't have any need for btrfs's additional features. Shrinking would be nice, so maybe someday I'll use ext4 for data too.
I started with ZFS instead of RAID, but I found I spent way too much time trying to manage RAM and tune it, whereas I could just configure RAID 10 once and be done with it. The performance differences are insignificant, since most of the work it does happens in the background.
You can benchmark them if you care about performance. You can find plenty of discussion by googling "ext vs xfs vs btrfs" or whichever ones you're considering. They haven't changed that much in the past few years.
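If you do want numbers, fio is the usual tool. A rough sketch (file path and sizes are placeholders; tune to your workload):

```bash
# Sequential 1M reads, bypassing the page cache:
fio --name=seqread --filename=/mnt/test/fio.tmp --size=4G \
    --rw=read --bs=1M --direct=1 --ioengine=libaio
```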
Proxmox only supports btrfs or ZFS
Or at least that's what I thought
but I found I spent way too much time trying to manage RAM and tuning it,
I spent none, and it works fine. What was your issue?
I have four 6TB data drives and 32GB of RAM. When I set them up with ZFS, it claimed quite a few GB of RAM for its cache. I tried allocating some of the other NVMe drive as cache and tried to reduce RAM usage to reasonable levels, but like I said, I found I was spending a lot of time fiddling instead of just configuring RAID and having it run just fine in much less time.
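For what it's worth, capping the ARC is usually the one tunable people need; a sketch assuming an 8 GiB cap:

```bash
# Persistent: set the module option (value is in bytes; 8 GiB here).
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
# Immediate, without a reboot:
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
```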
I run it now because I wanted to try it. I haven't had any issues. A friend recommended it as a stable option.
Using it here. Love the flexibility and features.
Btrfs only has issues with RAID 5. It works well for RAID 1 and 0. No reason to change if it works for you.
It is stable with RAID 0, 1, and 10.
RAID 5 and 6 are dangerous.
Do you rely on snapshotting and journaling? If so, back up your snapshots.
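A minimal way to do that with btrfs send/receive, assuming the backup target is also btrfs (paths are placeholders):

```bash
# Snapshots must be read-only (-r) to be sent:
btrfs subvolume snapshot -r /home /home/.snapshots/home-backup
btrfs send /home/.snapshots/home-backup | btrfs receive /mnt/backup
```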
Why?
I already take backups but I'm curious if you have had any serious issues
The question is: how do you get bad performance with ZFS?
I just tried to read a large file and it gave me uncached 280 MB/s from two mirrored HDDs.
The fourth run (obviously cached) gave me over 3.8 GB/s.
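For anyone who wants to reproduce a test like this, one rough approach (pool and file names are placeholders):

```bash
# Note: echo 3 > /proc/sys/vm/drop_caches does NOT empty the ZFS ARC;
# export/import the pool (or reboot) to get a genuinely uncached first run.
zpool export tank && zpool import tank
dd if=/tank/bigfile of=/dev/null bs=1M status=progress   # uncached
dd if=/tank/bigfile of=/dev/null bs=1M status=progress   # cached in ARC
```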
I have never heard of anyone getting those speeds without dedicated high-end hardware.
Also, writes will always be your bottleneck.
I get similar speeds on a TrueNAS box that I installed on a simple i3-8100.