this post was submitted on 20 Oct 2024
508 points (99.6% liked)

[–] zlatiah@lemmy.world 47 points 1 month ago (4 children)

This again??

This time, once archive.org is back online again... is it possible to get torrents of some of their most popular collections? For example, I wouldn't imagine their catalog of books with expired copyright is very big. Would love a community way to keep the data alive if something even worse happens in the future (and their track record isn't looking good right now).

[–] Exeous@lemmy.world 15 points 1 month ago

Like this idea

[–] njordomir@lemmy.world 13 points 1 month ago (2 children)

Yep, that seems like the ideal decentralized solution. If all the info can be distributed via torrent, anyone with spare disk space can help back up the data and anyone with spare bandwidth can help serve it.

[–] Shdwdrgn@mander.xyz 8 points 1 month ago (2 children)

Most of us can't afford the sort of disk capacity they use, but it would be really cool if there were a project to give volunteers pieces of the archive so the information was spread out. Volunteers could specify whether they want to contribute anywhere from a few gigabytes to multiple terabytes of drive space to the project, and the software could send out packets any time the content changes. Hmm, this description sounds familiar, but I can't think of what else might be doing something similar -- does anyone know of anything like that which could be applied to the archive?
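
To make the idea concrete, here's roughly what I'm picturing for handing out chunks in proportion to pledged space (purely a sketch -- no such project exists, and the names and numbers are made up):

```python
# Illustrative sketch only: weighted rendezvous hashing picks, for each chunk,
# the volunteers who should hold it, in proportion to the space they pledged.
# Adding or removing a volunteer only reshuffles that volunteer's share of chunks.
import hashlib
import math

def assign_chunk(chunk_id, pledges, replicas=3):
    """pledges maps volunteer id -> pledged gigabytes; returns the volunteers
    who should store this chunk."""
    def score(volunteer):
        digest = hashlib.sha256(f"{chunk_id}:{volunteer}".encode()).digest()
        r = (int.from_bytes(digest, "big") + 1) / (2**256 + 2)   # uniform in (0, 1)
        return -pledges[volunteer] / math.log(r)                 # weighted rendezvous score
    return sorted(pledges, key=score, reverse=True)[:replicas]

pledges = {"alice": 50, "bob": 2000, "carol": 200}               # pledged GB, hypothetical
print(assign_chunk("public-domain-books/chunk-000123", pledges))
```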

[–] njordomir@lemmy.world 5 points 1 month ago

Yeah, the projects I've heard about that have done something like this broke the data up into multiple smaller torrents.

For example, 1000 GB could be broken into forty 25 GB torrents, and within each one you can tell the client to download only some of the files.
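
With python-libtorrent, that selective-download part looks roughly like this (a sketch; it assumes the libtorrent python bindings are installed, and the torrent and file names are invented):

```python
# Sketch: join one of the hypothetical 25 GB torrents but only fetch the files
# you have room for, by zeroing the priority of everything else.
import libtorrent as lt

ses = lt.session()
info = lt.torrent_info("ia-books-part-07.torrent")          # made-up torrent file
handle = ses.add_torrent({"ti": info, "save_path": "./mirror"})

wanted = {"books/dracula.pdf", "books/frankenstein.pdf"}    # made-up paths
files = info.files()
for i in range(files.num_files()):
    # priority 0 = skip the file entirely, 4 = default download priority
    handle.file_priority(i, 4 if files.file_path(i) in wanted else 0)
```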

At scale, a webpage could show the seed/leech numbers and averages for each torrent over a time period, to give an idea of what is well mirrored and what people could shore up. You could also change which torrent is shown as the top download when people come to the contributor page saying they want to help host, ensuring a better distribution.
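
The "what to feature" part could be as simple as this (illustrative only; the numbers are invented, and actually scraping seeder/leecher counts from the tracker is left out):

```python
# Given per-torrent (seeders, leechers) counts, pick the torrent that most needs
# new mirrors, so the contributor page can feature it as the top download.
def least_seeded(stats):
    """stats maps torrent name -> (seeders, leechers)."""
    return min(stats, key=lambda name: (stats[name][0], -stats[name][1]))

stats = {"part-01": (120, 3), "part-02": (4, 19), "part-03": (35, 7)}   # hypothetical
print(least_seeded(stats))   # "part-02": fewest seeders, so feature it next
```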

[–] rottingleaf@lemmy.world 0 points 1 month ago

Since I'm spamming this same idea right now - the description is similar to Freenet (the old one, now called Hyphanet), but you'd need some way to choose which parts of which collections get stored in your contributed space, whereas with Freenet it's the whole network (unless you form a separate F2F net - there is such an option - but then there's no way to be sure that all peers, ahem, store only IA data and not their own porn collections, for example, eating precious storage). I've described one idea in my previous comment, but it's purely an idea; I'm nowhere close to having the knowledge to build such a thing.

[–] rottingleaf@lemmy.world 1 points 1 month ago (1 children)

There's an issue with torrents: only the most popular ones get replicated, and the process is manual/social.

Something like Freenet is needed, which automatically "spreads" data over the machines contributing storage, but Freenet is unreliable storage, basically a cache where older and unwanted stuff gets erased.

So it should be something like Freenet, but possibly with "clusters" or "communities", each with a central (cryptography-backed) authority able to determine the state of a collection of data as a whole and to pick priorities. My layman's understanding is that this would land somewhere between Freenet and Ceph, LOL. More like a cluster filesystem spread over many nodes, not like a cache.
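
The shape I have in mind is roughly this toy sketch: each cluster's authority publishes a signed manifest of its collection, and storage nodes only pin chunks from manifests that verify (it assumes the python cryptography package; the collection and chunk names are invented):

```python
# Toy version of the "community with a signing authority" idea: the authority
# pins the exact state of a collection (chunk hashes + priorities) and signs it;
# storage nodes only accept chunks from manifests whose signature verifies.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

authority_key = Ed25519PrivateKey.generate()        # held by the collection's maintainers

def publish_manifest(chunks):
    """chunks maps chunk name -> raw bytes."""
    manifest = {
        "collection": "ia-mirror/public-domain-books",                    # made-up name
        "chunks": {name: hashlib.sha256(data).hexdigest() for name, data in chunks.items()},
        "priority": {name: 1 for name in chunks},                         # 1 = keep first
    }
    blob = json.dumps(manifest, sort_keys=True).encode()
    return blob, authority_key.sign(blob)

def node_accepts(blob, signature, authority_public_key):
    authority_public_key.verify(signature, blob)    # raises InvalidSignature if tampered
    return json.loads(blob)

blob, sig = publish_manifest({"chunk-000123": b"...chunk bytes..."})
print(node_accepts(blob, sig, authority_key.public_key())["priority"])
```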

[–] njordomir@lemmy.world 1 points 1 month ago* (last edited 1 month ago) (1 children)

You have more knowledge on this than I do. I enjoyed reading about Freenet and Ceph. I have dealt with cloud stuff, but not as much at the technical-underpinnings level. My first Freenet impression from reading some articles gives me 90s internet vibes, based on the common use cases they listed.

I remember Ceph because I ended up building it from the AUR once on my weak little personal laptop, after it got dropped from some repository or whatever but was still flagged to stay installed. I could have saved myself an hours-long build if I had read the release notes.

[–] rottingleaf@lemmy.world 1 points 1 month ago

> My first Freenet impression from reading some articles gives me 90s internet vibes, based on the common use cases they listed.

That's correct, I meant the way it works.

[–] catloaf@lemm.ee 8 points 1 month ago

I'm pretty sure all their content is available by torrent, so you could mirror the data and provide the torrent files for direct download. It'll probably be here when it's back up: https://archive.org/details/public-domain-archive
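
Each item usually has an `<identifier>_archive.torrent` sitting next to its files, so grabbing one is about this simple (just a sketch; the identifier below is an example and may not exist):

```python
# Sketch: download the torrent file for a single archive.org item with only the
# standard library; assumes the usual "<identifier>_archive.torrent" naming.
import urllib.request

def fetch_item_torrent(identifier, dest=None):
    url = f"https://archive.org/download/{identifier}/{identifier}_archive.torrent"
    dest = dest or f"{identifier}.torrent"
    urllib.request.urlretrieve(url, dest)
    return dest

print(fetch_item_torrent("some-public-domain-item"))    # example identifier, may not exist
```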

[–] SilentStorms@lemmy.dbzer0.com 3 points 1 month ago

Anna’s Archive does this. I think it's a really good way to make it difficult to take them down.

Hopefully this hack starts some conversations on how they can ensure longevity for their project. Seems they’re being attacked on multiple fronts now.