
I have quite an extensive collection of media that my server makes available through different means (Jellyfin and NFS, mostly). One of my hard drives has some concerning SMART values, so I want to replace it. What are good hard drives to buy today? Are there any important tech specs to look out for? In the past I didn't give this much attention and it hasn't bitten me, yet. But if I'm going to buy a new drive now, I might as well...

I'm looking for something from 4TB upwards. I seem to remember that drives with very high capacity are more likely to fail sooner - is that correct? How about different brands - do any have a particularly good or bad reputation?

Thanks for any hints!

[–] avidamoeba@lemmy.ca 43 points 1 month ago* (last edited 1 month ago) (6 children)

Buy recertified enterprise-grade disks from https://serverpartdeals.com. Prices were around $160 per 16TB drive the last time I checked. Mix brands and models to reduce the risk of simultaneous failures. Use more than one disk of redundancy. If you can't buy from SPD, either find an alternative or buy external drives and shuck them. Use ZFS so you know whether your data is correct. I've been dealing with flaky AMD USB controllers recently, and the amount of silent data corruption I'd have eaten if not for ZFS is ridiculous.
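As a minimal sketch of what that looks like in practice (the pool name `tank` is a placeholder): scrub periodically, then check the pool's health.

```python
import subprocess

POOL = "tank"  # placeholder pool name

# Kick off a scrub: ZFS re-reads every block, verifies its checksum,
# and repairs bad copies from redundancy. Runs in the background.
subprocess.run(["zpool", "scrub", POOL], check=True)

# Later: '-x' prints a one-line "healthy" message unless the pool
# has errors, in which case it shows the full status.
print(subprocess.run(["zpool", "status", "-x", POOL],
                     capture_output=True, text=True, check=True).stdout)
```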

[–] Loulou@lemmy.mindoki.com 17 points 1 month ago (1 children)

This is incredible!

American sites like this one so rarely ship to France, or it costs a literal fortune just in shipping. Here it's €130 for a 12TB, shipping included!

Wow.

I Do Not Need A 12TB Hard drive.

I Do Not Need a 12 TB Hard drive!

I mean or do I?

Thanks 💖

[–] avidamoeba@lemmy.ca 8 points 1 month ago

Get more drives, run higher redundancy 💪

[–] femtech@midwest.social 7 points 1 month ago (1 children)

Yep, I have six 14TB drives from them in RAID 10.

[–] avidamoeba@lemmy.ca 2 points 1 month ago (2 children)
[–] femtech@midwest.social 6 points 1 month ago

I just keep adding two more drives as it fills up. Not sure if that's the best approach.

[–] TheHolm@aussie.zone 4 points 1 month ago (2 children)

I would not trust this kind of drive in a mirror. IMHO RAID6 is the only way.

[–] avidamoeba@lemmy.ca 2 points 1 month ago (2 children)

Due to the risk of failure, or the risk of data corruption because the mirror can't tell which drive is right when there's a difference?

[–] TheHolm@aussie.zone 2 points 1 month ago

A ZFS or Btrfs mirror will know which side is at fault thanks to checksums. I'm more concerned about simultaneous failures of two disks. Rebuilding a RAID puts a lot of pressure on the remaining disks, so the probability that one of them dies too is much higher. With RAID6, three disks need to die to lose data, which is less likely, but not impossible.
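As a toy illustration of the rebuild risk (the numbers are invented for the example, and real-world failures are correlated, so treat it as an illustration, not a prediction):

```python
from math import comb

# Toy model: p is the chance that a given surviving disk dies during
# the rebuild window. Invented number, purely for illustration.
p = 0.05

# 2-way mirror, one disk already dead: data is lost if the single
# survivor fails before the rebuild completes.
mirror_loss = p

# 6-disk RAID6, one disk already dead: 5 survivors with one parity
# still left, so data is lost only if 2 or more of the 5 fail.
raid6_loss = sum(comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(2, 6))

print(f"mirror: {mirror_loss:.3f}  raid6: {raid6_loss:.3f}")
# mirror: 0.050  raid6: 0.023
```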

[–] turmacar@lemmy.world 2 points 1 month ago* (last edited 1 month ago) (1 children)

The second one.

Mirroring is good for speed, but a storage mechanism with parity checks will always be more recoverable. And you will have far more storage available.

[–] avidamoeba@lemmy.ca 1 points 1 month ago* (last edited 1 month ago)

I think data checksums allow ZFS to tell which disk has the correct data when there's a mismatch in a mirror, eliminating the need for a 3-way mirror to deal with bit flips and such. A traditional mirror like mdraid would need 3 disks to do this.

[–] peregus@lemmy.world 1 points 1 month ago

> IMHO RAID6 is the only way.

Or SnapRAID

[–] actual_pillow@programming.dev 6 points 1 month ago

Damn, I just put 32 more TB in my homelab and wish I had known about this site.

[–] pedroapero@lemmy.ml 6 points 1 month ago

I use Btrfs for the same reason. Being able to check for and repair silent corruption is a must (and this without needing to read the whole drive, only the actual files). I've had a lot of it over the years, including (but not only) because of a cheap USB controller.

[–] mumblerfish@lemmy.world 4 points 1 month ago

Oh, wow. I just ordered a new computer. I guess it'll have to include some more disks!

[–] Pacmanlives@lemmy.world 4 points 1 month ago (1 children)

Holy cow, these are way cheaper than anything I have seen before. I am in a RAID 5 setup, so if a disk dies I am okay.

[–] avidamoeba@lemmy.ca 1 points 1 month ago* (last edited 1 month ago) (1 children)

If you can, move to a RAID-equivalent setup with ZFS (preferred, in my opinion) so that you also know about, and can fix, silent data corruption. RAIDz1 and RAIDz2 are the equivalents of RAID5 and RAID6. That should eliminate one more variable with cheap drives.

[–] Pacmanlives@lemmy.world 6 points 1 month ago (1 children)

ZFS is a no-go for me because I can't add larger disks and then expand my pool size on the fly. mdadm and LVM+XFS have treated me well the past few years. I started with a 12TB pool and am now over 50TB.

[–] avidamoeba@lemmy.ca 2 points 1 month ago* (last edited 1 month ago) (1 children)

Not that I want to push ZFS or anything - mdraid/LVM/XFS is a fine setup - but for informational purposes: ZFS can absolutely expand onto larger disks. I wasn't aware of this until recently. If all the disks of an existing pool get replaced with larger disks, the pool can expand onto the newly available space. E.g. a RAIDz1 with 4x 4T disks has 12T of usable space. Replace all disks with 8T ones (one after another, so it can be done on the fly) and your pool will have 24T of space. Replace those with 16T and you get 48T, and so on. In addition, you can expand a pool by adding another redundant topology, just like you can with LVM and mdraid. E.g. 4x 4T RAIDz1 + 3x 8T RAIDz2 + 2x 16T mirror, for a total of 36T. Finally, expanding an existing RAIDz with additional disks has recently landed too.
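For reference, the replace-and-grow sequence looks roughly like this (pool and device names are placeholders; each swap resilvers fully before the next one starts):

```python
import subprocess

POOL = "tank"              # placeholder pool name
SWAPS = [("sda", "sde"),   # placeholder pairs: each old 4T disk
         ("sdb", "sdf"),   # gets replaced by a new 8T disk
         ("sdc", "sdg"),
         ("sdd", "sdh")]

# Let the pool grow on its own once every member disk is larger.
subprocess.run(["zpool", "set", "autoexpand=on", POOL], check=True)

for old, new in SWAPS:
    # Resilver the new disk from redundancy; the pool stays online.
    subprocess.run(["zpool", "replace", POOL, old, new], check=True)
    # Wait for the resilver to finish before touching the next disk.
    subprocess.run(["zpool", "wait", "-t", "resilver", POOL], check=True)
```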

And now for pushing ZFS: I was doing file-based replication on a large dataset for many years. Just walking the hundreds of thousands of dirs and files took over an hour on my setup, and that was followed by the diff transfer. Think rsync or Syncthing. That's how I did it on my old mdraid/LVM/Ext4 setup, and that's how I continued doing it on my newer ZFS setup. Recently I tried ZFS send/receive, which operates within the filesystem. It completely eliminated the walk-and-stat phase, since the filesystem already knows all of the metadata. Replication was reduced to just the diff transfer time. What used to take over an hour got reduced to seconds or minutes, depending on the size of the changed data. I can now replicate multiple times per hour without significant load on the system. Previously it was only feasible overnight, because the system would be robbed of IOPS for over an hour.
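A minimal sketch of that incremental loop, with placeholder dataset and snapshot names, assuming the initial full send has already happened:

```python
import subprocess

SRC, DST = "tank/media", "backup/media"  # placeholder dataset names
prev, curr = "rep-1", "rep-2"            # last replicated + new snapshot

# Snapshots are atomic and nearly free.
subprocess.run(["zfs", "snapshot", f"{SRC}@{curr}"], check=True)

# Stream only the blocks that changed between the two snapshots;
# no directory walk, no stat storm - the filesystem already knows.
send = subprocess.Popen(["zfs", "send", "-i", f"@{prev}", f"{SRC}@{curr}"],
                        stdout=subprocess.PIPE)
subprocess.run(["zfs", "receive", DST], stdin=send.stdout, check=True)
send.stdout.close()
if send.wait() != 0:
    raise RuntimeError("zfs send failed")
```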

[–] Pacmanlives@lemmy.world 2 points 1 month ago (1 children)

I wonder if that's a new feature. IIRC the issue was with expanding vdevs in a ZFS pool. I am a FreeBSD user and do have some jails running. I like ZFS a lot; it's way more mature than Btrfs on Linux.

[–] avidamoeba@lemmy.ca 1 points 1 month ago* (last edited 1 month ago)

As far as I can tell it dates back to at least 2010: https://docs.oracle.com/cd/E19253-01/819-5461/githb/index.html (see the Solaris version). You can try it with small test files in place of disks and see if it works. I haven't done the expansion myself yet, but that's my plan for growing beyond the 48T of my current pool. I use ZFS on Linux, btw. Works perfectly fine.
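A quick way to try that without spare hardware, using sparse files as stand-in disks (paths and sizes here are arbitrary):

```python
import subprocess

POOL = "testpool"
FILES = [f"/tmp/zfs-test-{i}.img" for i in range(4)]

# Four sparse 256 MiB "disks" (64 MiB is ZFS's minimum vdev size).
for path in FILES:
    with open(path, "wb") as f:
        f.truncate(256 * 1024 * 1024)

# Throwaway RAIDz1 pool on the files (needs root).
subprocess.run(["zpool", "create", POOL, "raidz1", *FILES], check=True)
subprocess.run(["zpool", "list", POOL], check=True)

# ...experiment with 'zpool replace' using larger files here...

subprocess.run(["zpool", "destroy", POOL], check=True)
```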