this post was submitted on 09 Feb 2025

Piracy: ꜱᴀɪʟ ᴛʜᴇ ʜɪɢʜ ꜱᴇᴀꜱ


I have been lurking on this community for a while now and have really enjoyed the informational and instructional posts, but a topic I don't see come up very often is scaling and hoarding. Currently I have a 20TB server which I am rapidly filling, and most posts about expanding recommend simply buying larger drives and slotting them into a single machine. That is definitely the easiest way to expand, but it seems like it would only get you to about 100TB before you can't reasonably do it anymore. So how do you set up 100TB+ networks with multiple servers?

My main concern is that currently all my services are dockerized on a single machine running Ubuntu, which works extremely well: it is space-efficient thanks to hardlinking, and I can still seed everything back. From the posts I've read, it seems like as people scale they either give up on hardlinks and eat up a lot of their storage with copies, or they eventually delete their seeds and just keep the content. Do the Arr suite and qBittorrent allow dynamically selecting servers based on available space? Or are there other tools that solve these issues? How do you set up large systems, and what recommendations would you make? Any advice is appreciated, from hardware to software!
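As a quick aside for anyone following along: hardlinks only work within a single filesystem, which is exactly why spreading onto multiple mounts or boxes breaks the space-efficient setup. A throwaway sketch to sanity-check that hardlinking is actually happening (paths here are just a demo, not anyone's real layout):

```shell
# Hardlinks share one inode; if the "link" got a new inode, something
# silently copied instead. Demo in a scratch directory:
rm -rf /tmp/hl-demo && mkdir -p /tmp/hl-demo
echo "data" > /tmp/hl-demo/original
ln /tmp/hl-demo/original /tmp/hl-demo/linked
stat -c '%h %i' /tmp/hl-demo/original   # link count 2, then the inode number
stat -c '%h %i' /tmp/hl-demo/linked     # same link count, same inode
```

If the two `stat` lines differ, your download and library paths are on different filesystems and the Arrs are falling back to copies.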

Also, huge shout out to Saik0 from this thread: https://lemmy.dbzer0.com/post/24219297 I learned a ton from his post, but it seemed like the tip of the iceberg!

[–] tenchiken@lemmy.dbzer0.com 28 points 1 day ago (10 children)

I personally have dedicated machines per task.

8x SSD machine: runs the services for the Arr stack; temporary download and working destination.

4-5x misc 16-bay boxes: raw storage, NFS-shared, ZFS underneath. What's on them changes on a whim, but usually it's 1x for movies, 2x for TV, etc. Categories can be spread across multiple boxes.

2-3x 8-bay boxes: critical storage. Different drive geometry, higher resilience. These are the hypervisors; I run a mix of Xen and Proxmox depending on need.

All get 10Gb interconnect, with critical stuff (nothing Arr-related, for sure) like personal videos and photos pushed to small encrypted storage like Backblaze.
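As a rough sketch of how one of those raw-storage boxes might be provisioned (pool name, RAID level, network range, and device names below are my assumptions for illustration, not the commenter's actual config):

```shell
# Build a ZFS pool across the bay drives; raidz2 survives two simultaneous
# drive failures, a reasonable floor for 16-bay media storage.
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Per-category dataset, exported by ZFS's built-in NFS sharing,
# restricted to the local subnet.
zfs create tank/movies
zfs set sharenfs="rw=@10.0.0.0/24" tank/movies
zfs set compression=lz4 tank/movies   # cheap, effectively free on media
```

This can't run without real disks, so treat it purely as a provisioning outline.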

The NFS-shared stores, once you get everything mapped, allow some smooth automation to migrate things around for maintenance and such.
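On the consuming side, getting "everything mapped" might look like this (hostnames and paths are placeholders I've made up for the sketch):

```shell
# Mount each storage box over NFS on the Arr/download machine.
# 'hard' makes clients retry forever rather than erroring out mid-write
# if a storage box reboots during maintenance.
mount -t nfs -o rw,hard,vers=4.2 storage1:/tank/movies /mnt/movies
mount -t nfs -o rw,hard,vers=4.2 storage2:/tank/tv /mnt/tv
```

The equivalent entries would go in /etc/fstab to survive reboots.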

Mostly it's all gear that's 10 years old or older. Fiber 10Gb cards can be had off eBay for a few bucks; just watch out for compatibility and the cost of the transceivers.

8-port SAS controllers can be bought new the same way off eBay from a few vendors; just explicitly look for "IT mode" so you don't get a RAID controller by accident.

SuperMicro makes quality gear for this. Used units can be affordable, and I've had excellent luck with them. Most have a great IPMI controller for simple diagnostic needs too. Some of the best SAS backplanes are made by them.

Check Backblaze's disk stats from their blog for drive suggestions!

Heat becomes a huge factor, and the drives are particularly sensitive to it... Running hot shortens lifespan. Plan accordingly.

It's going to be noisy.

Filter your air in the room.

The rsync command is a good friend in a pinch for data evacuation.

Your servers are cattle, not pets. If one is ill, sometimes it's best to put it down (wipe and reload). If you suspect hardware, get it out of the mix quickly; test and/or replace it before risking your data again.

You are always closer to dataloss than you realize. Be paranoid.

Don't trust the SMART pass/fail summary. Learn how to read the full report. A Pending-Sector count above 0 is always failure... Remove that disk!
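Checking that specific attribute could look like this (the device name is a placeholder, and the sample line below just mimics smartctl's attribute-table format for the demo):

```shell
# Full report -- the one-line "PASSED" summary can lie; the attribute table doesn't:
#   smartctl -a /dev/sdX | grep Current_Pending_Sector
#
# The raw count is the last column. Parsing a sample attribute line:
line="197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 8"
pending=$(echo "$line" | awk '{print $NF}')
if [ "$pending" -gt 0 ]; then
  echo "REPLACE DISK: $pending pending sectors"
fi
```

Pending sectors are reads the drive could not complete and is waiting to remap, which is exactly the "closer to dataloss than you realize" scenario above.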

Keep 2 thumb drives with your installer handy.

Keep a repo somewhere with your basics of network configs... Ideally sorted by machine.

Leave yourself a back-door network... Most machines will have a 1Gb port; it might be handy when you least expect it. Setting up a LAGG with those 1Gb ports as fallback for the higher-speed fiber can save headaches later too...
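A minimal active-backup sketch of that fallback idea with iproute2, assuming interface names ens1 (10Gb fiber) and eno1 (onboard 1Gb copper) — placeholder names, not the commenter's actual setup:

```shell
# Active-backup bonding: traffic rides the fiber; if the link drops
# (checked every 100 ms), the onboard 1GbE takes over automatically.
# Requires root; persist it via your distro's network config, not ad hoc.
ip link add bond0 type bond mode active-backup miimon 100 primary ens1
ip link set ens1 down; ip link set ens1 master bond0
ip link set eno1 down; ip link set eno1 master bond0
ip link set bond0 up
```

Active-backup (as opposed to LACP aggregation) needs no switch support, which suits a mixed bag of old gear.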

[–] 0range@lemmy.dbzer0.com 6 points 1 day ago

This is what I expect to see in a piracy community on Lemmy
