26
submitted 2 months ago by Showroom7561@lemmy.ca to c/datahoarder@lemmy.ml

Hey guys, so it seems that Linkwarden isn't as good as I was hoping, since some websites will throw up a cookie popup or some other screen that basically prevents the capture.

Firefox Screenshot seems to work well, but it saves a PNG, which isn't really text searchable.

FF's "save page as..." feature seems to break pages when I view them back later.

Save to PDF is another option, and that seems to be decent.

I'm not looking to copy entire websites, but I like to save web pages for later reference (i.e. instructions/specs).

I use Synology Note Station, but they don't have a web clipper for Firefox...

I'm fine with using a folder structure to store files, despite not being totally ideal when compared to Linkwarden.

Does anyone have any other suggestions that perhaps I've missed? Nothing too complicated... ideally, as simple as a button click would be great.

27
28
submitted 3 months ago by Sprokes@jlai.lu to c/datahoarder@lemmy.ml

YouTube is cracking down on ad blockers, and they may stop working entirely in a year or so.

I don't watch YouTube that much, and most of the time I watch the same things. So I'm thinking of mirroring the videos I watch to other platforms, but I don't know which. I was thinking of ok.ru, but I don't know if they respond to DMCA requests.

Did anyone do something similar?

29

Running GParted gives me an error that says

fsyncing/closing /dev/sdb: input/output error

Using GNOME Disk Utility, the assessment section says

Disk is OK, one bad sector

Clicking to format it to EXT4 I'm getting a message that says

Error formatting volume

Error wiping device: Failed to probe the device '/dev/sdb' (udisks-error-quark, 0)

Running sudo smartctl -a /dev/sdb I get a few messages that all say

... SCSI error badly formed scsi parameters


On the physical side, I've swapped out the SATA data and power cables, with the same results.


Any suggestions?

Amazon has a decent return policy, so I'm not incredibly concerned, but if I can avoid that hassle it would be nice.

30

a few days ago i saw a post on the reddit datahoarder community asking how to backup keys and other small files for a long time.
it reminded me of a script i made some time ago to save my otp secrets in case of loss of device or a reenactment of the raivo otp incident,
so i decided to make it public on github, hope someone here finds it useful

github.com/Leviticoh/weedcup

the density is not great, about 1kB per A4 page, but it can recover from losing up to half of the printed surface, and, if stored properly, paper should last a very long time
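for context, the "recover from up to half the printed surface" property is what a rate-1/2 erasure code buys you. this is just a toy illustration of the idea (my own sketch over GF(257), not weedcup's actual encoding):

```python
# Toy rate-1/2 erasure code: data bytes define a polynomial over GF(257);
# we print 2k shares, and any k of them reconstruct the original k bytes.
P = 257  # smallest prime > 255, so every byte value is a field element

def _lagrange_at(points, x):
    """Evaluate the unique polynomial through `points` at `x`, mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat's little theorem)
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data: bytes):
    """Return n = 2*len(data) shares; any len(data) of them recover `data`."""
    k = len(data)
    pts = list(zip(range(1, k + 1), data))  # systematic: first k shares are the data
    return [(x, _lagrange_at(pts, x)) for x in range(1, 2 * k + 1)]

def decode(shares, k):
    """Rebuild the original bytes from any k surviving (x, y) shares."""
    pts = shares[:k]
    return bytes(_lagrange_at(pts, x) for x in range(1, k + 1))

secret = b"otp"
shares = encode(secret)          # 6 shares for 3 bytes
survivors = shares[3:]           # lose the entire first half of the "page"
assert decode(survivors, len(secret)) == secret
```

real paper-backup tools use Reed-Solomon over GF(256) with interleaving for burst damage, but the recovery guarantee is the same shape.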

31
submitted 3 months ago by Potatisen@lemmy.world to c/datahoarder@lemmy.ml

Basically title!

I want to run it through my NAS to free up some space.

Thanks in advance.

32
submitted 3 months ago* (last edited 3 months ago) by andioop@programming.dev to c/datahoarder@lemmy.ml

I read something about once-reliable sites that would tell you the best [tech thing] no longer giving legit reviews and being paid to say good things about certain companies, but I don't remember where I read that or which sites were named, so I figured I'd bypass the issue and ask people here. I'm pretty new to anything near the level of complexity and technical detail that I see on datahoarder communities. I know about the 3-2-1 backup rule and that's it. This is me trying to find something to hold copy 3 of my data.

33
submitted 3 months ago by evasync@lemmy.world to c/datahoarder@lemmy.ml

I want to buy a few hard drives for backups.

What is the most reliable option for longevity? I was looking at the WD AE, which they claim is fit for this purpose, but knowing nothing about hard drives, I wouldn't know if it's just a marketing claim.

34
submitted 3 months ago by lars@lemmy.sdf.org to c/datahoarder@lemmy.ml

cross-posted from: https://lemmy.world/post/17689141

I'll just save them in this folder so that I can totally come back later and read them.

35
36
submitted 4 months ago by Thavron@lemmy.ca to c/datahoarder@lemmy.ml
37

I was considering building a 30+ TB NAS to simplify and streamline my current setup, but because it's a relatively low priority for me, I'm wondering if it's worth holding off for a year or two.

I am unsure if prices have more or less plateaued and the difference won't be all that substantial. Maybe I should just wait for Black Friday.

For context, it seems like two 16TB HDDs would cost about $320 currently.
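As a sanity check, that quote works out to a round number per terabyte (simple arithmetic, nothing assumed beyond the quoted price):

```python
# Cost per TB for the quoted deal: two 16TB drives at $320 total
drives, size_tb, total_usd = 2, 16, 320
usd_per_tb = total_usd / (drives * size_tb)
print(usd_per_tb)  # → 10.0 dollars per TB
```

which makes it easy to compare against whatever Black Friday turns up.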


Here's some related links:

  • This article by Our World in Data contains a chart showing how the price per GB has decreased over time.

  • This article by Tom's Hardware talks about how SSD prices bottomed out in July 2023 before climbing back up, and predicted further increases in 2024.

38
Renewed drives (slrpnk.net)
submitted 4 months ago by greengnu@slrpnk.net to c/datahoarder@lemmy.ml

Are they worth considering or only worth it at certain price points?

39
submitted 4 months ago by xnx@slrpnk.net to c/datahoarder@lemmy.ml

cross-posted from: https://slrpnk.net/post/10273849

Vimm's Lair is getting removal notices from Nintendo and others. We need someone to help make a ROM pack archive. Can you help?

Vimm's Lair is starting to remove many ROMs that Nintendo and others have requested be taken down. Soon, many original ROMs, hacks, and translations will be lost forever. Can any of you help make archive torrents of ROMs from Vimm's Lair and CDRomance? They have hacks and translations that don't exist elsewhere and will probably be removed soon, with iOS emulation and retro handhelds bringing so much attention to ROMs and these sites.

40

I've been working on this subtitle archive project for some time. It is a Postgres database along with a CLI and API application allowing you to easily extract the subs you want. It is primarily intended for encoders or people with large libraries, but anyone can use it!

PGSub is composed of three dumps:

  • opensubtitles.org.Actually.Open.Edition.2022.07.25
  • Subscene V2 (prior to shutdown)
  • Gnome's Hut of Subs (as of 2024-04)

As such, it is a good resource for films and series up to around 2022.

Some stats (copied from README):

  • Out of 9,503,730 files originally obtained from dumps, 9,500,355 (99.96%) were inserted into the database.
  • Out of the 9,500,355 inserted, 8,389,369 (88.31%) are matched with a film or series.
  • There are 154,737 unique films or series represented, though note the lines get a bit hazy when considering TV movies, specials, and so forth. 133,780 are films, 20,957 are series.
  • 93 languages are represented, with a special '00' language indicating a .mks file with multiple languages present.
  • 55% of matched items have a FPS value present.

Once imported, the recommended way to access it is via the CLI application. The CLI and API can be compiled on Windows and Linux (and maybe Mac), and there are also pre-built binaries available.

The database dump is distributed via torrent (if it doesn't work for you, let me know), which you can find in the repo. It is ~243 GiB compressed, and uses a little under 300 GiB of table space once imported.

For a limited time I will devote some resources to bug-fixing the applications, or perhaps adding some small QoL improvements. But, of course, you can always fork them or make your own if they don't suit you.

41
submitted 5 months ago by ylai@lemmy.ml to c/datahoarder@lemmy.ml
42

I'm looking at my library and I'm wondering if I should process some of it to reduce the size of some files.

There are some movies in 720p that are 1.6~1.9GB each. And then there are some at the same resolution but are 2.5GB.
I even have some in 1080p which are just 2GB.
I only have two movies in 4k, one is 3.4GB and the other is 36.2GB (can't really tell the detail difference since I don't have 4k displays)

And then there's an anime I have twice at the same resolution: one set of files is around 669~671MB each, the other around 191MB each (here the quality difference is noticeable during playback, unlike with the other files, where I only compared extracted frames).

What would you do? what's your target size for movies and series? What bitrate do you go for in which codec?

Not sure if it's kind of blasphemy in here talking about trying to compromise quality for size, hehe, but I don't know where else to ask this. I was planning on using these settings in ffmpeg, what do you think?
I tried it on an anime at 1080p, going from 670MB to 570MB, and I wasn't able to tell the difference in quality when extracting a frame from the input and the output.
ffmpeg -y -threads 4 -init_hw_device cuda=cu:0 -filter_hw_device cu -hwaccel cuda -i './01.mp4' -c:v h264_nvenc -preset:v p7 -profile:v main -level:v 4.0 -vf "hwupload_cuda,scale_cuda=format=yuv420p" -rc:v vbr -cq:v 26 -rc-lookahead:v 32 -b:v 0 -c:a copy './01-out.mp4'
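For picking target sizes, the arithmetic is straightforward: the size budget fixes the average video bitrate. A quick sketch (the 128 kbps audio track is an assumption; plug in your own):

```python
# Average video bitrate implied by a target file size:
# total_bits = (video_kbps + audio_kbps) * 1000 * duration_seconds
def video_bitrate_kbps(target_size_mib, duration_s, audio_kbps=128):
    total_kbits = target_size_mib * 1024 * 1024 * 8 / 1000
    return total_kbits / duration_s - audio_kbps

# e.g. a 1.5 GiB (1536 MiB) target for a 2-hour movie:
print(round(video_bitrate_kbps(1536, 7200)))  # → 1662 kbps for the video track
```

With CRF/CQ-style encoding you don't set this directly, but it's useful for judging whether a file is unusually large or small for its runtime.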

43
submitted 5 months ago by ylai@lemmy.ml to c/datahoarder@lemmy.ml
44

I was so confident that WhatsApp had been backing itself up to Google ever since I got my new Pixel, but it just wasn't. Then yesterday I factory reset my phone to fix something else and I lost it all. Years' worth of chats from so many times in my past just aren't there: all my texts with my mom and my family, group chats with old friends... I can't even look at the app anymore, and I'll never use WhatsApp as much as I used to. I just don't feel right with this change. There's no way to get those chats back, and now it doesn't feel like there's any point backing up WhatsApp at all! I really wanna cry, this is so unfair!! And all I had to do was check WhatsApp before I did a factory reset.. the TINIEST THING I could have done to prevent this, and I didn't fucking do it!!!!!!!

How do I get past this?

45
submitted 5 months ago* (last edited 5 months ago) by ylai@lemmy.ml to c/datahoarder@lemmy.ml
46
submitted 5 months ago* (last edited 5 months ago) by cm0002@lemmy.world to c/datahoarder@lemmy.ml

With Google Workspace cracking down on storage (I've been using them for unlimited storage for years now), I was lucky to get a limit of 300TBs, but now I have to actually watch what gets stored lol

A good portion is, uh, "Linux ISOs", but the rest is very seldom accessed files (in many cases last accessed years ago) that I think would be perfect for tape archival. Things like byte-to-byte drive images and old backups. I figure these would be a good candidate for tape and estimate this portion at about 100TBs or more

But I've never done tape before, so I'm looking for some purchasing advice and such. I saw from some of my research that I should target picking up an LTO8 drive for now, since LTO9 drives can still read LTO8 tapes once they come down in price.

And then it spiraled from there with discussions on library tape drives that are cheaper but need modifications and all sorts of things

47
submitted 5 months ago* (last edited 5 months ago) by dullbananas@lemmy.ca to c/datahoarder@lemmy.ml

Run this javascript code with the document open in the browser: https://codeberg.org/dullbananas/google-docs-revisions-downloader/src/branch/main/googleDocsRevisionDownloader.js

Usually this is possible by pasting it into the Console tab in developer tools. If running javascript is not an option, then use this method: https://lemmy.ca/post/21276143

You might need to manually remove the characters before the first { in the downloaded file.

48
submitted 5 months ago* (last edited 3 months ago) by dullbananas@lemmy.ca to c/datahoarder@lemmy.ml
  1. Copy the document ID. For example, if the URL is https://docs.google.com/document/d/16Asz8elLzwppfEhuBWg6-Ckw-Xtf/edit, then the ID is 16Asz8elLzwppfEhuBWg6-Ckw-Xtf.
  2. Open this URL: https://docs.google.com/document/u/1/d/poop/revisions/load?id=poop&start=1&end=1 (replace poop with the ID from the previous step). You should see a json file.
  3. Add 0 to the end of the number after end= and refresh. Repeat until you see an error page instead of a json file.
  4. Find the highest number that makes a json file instead of an error page appear. This involves repeatedly trying a number between the highest number known to result in a json file and the lowest number known to result in an error page.
  5. Download the json file. You might need to remove the characters before the first {.
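Steps 3 and 4 amount to exponential probing followed by a binary search for the largest end= value that still returns a json file. A sketch with a stand-in predicate (the real is_json would fetch the revisions URL and check the response, which is omitted here):

```python
# Find the largest `end` for which is_json(end) is True, assuming
# is_json is monotone (True up to some revision count, False after).
def highest_valid_end(is_json, start=1):
    hi = start
    while is_json(hi * 10):      # step 3: append a 0 until you hit an error page
        hi *= 10
    lo, hi = hi, hi * 10         # lo is known-good, hi is known-bad
    while hi - lo > 1:           # step 4: bisect the remaining gap
        mid = (lo + hi) // 2
        if is_json(mid):
            lo = mid
        else:
            hi = mid
    return lo

# Pretend the document has 4217 revisions:
print(highest_valid_end(lambda end: end <= 4217))  # → 4217
```

Each probe is one page load, so this takes only a handful of refreshes even for documents with millions of revisions.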

I found the URL format for step 2 here:

https://features.jsomers.net/how-i-reverse-engineered-google-docs/

I am working on an easy way. Edit: here it is https://lemmy.ca/post/21281709

49
submitted 6 months ago by ylai@lemmy.ml to c/datahoarder@lemmy.ml
50
submitted 6 months ago by lars@lemmy.sdf.org to c/datahoarder@lemmy.ml

cross-posted from: https://programming.dev/post/13631943

Firefox Power User Keeps 7,400+ Browser Tabs Open for 2 Years


datahoarder

6699 readers

Who are we?

We are digital librarians. Among us are represented the various reasons to keep data -- legal requirements, competitive requirements, uncertainty of permanence of cloud services, distaste for transmitting your data externally (e.g. government or corporate espionage), cultural and familial archivists, internet collapse preppers, and people who do it themselves so they're sure it's done right. Everyone has their reasons for curating the data they have decided to keep (either forever or For A Damn Long Time). Along the way we have sought out like-minded individuals to exchange strategies, war stories, and cautionary tales of failures.

We are one. We are legion. And we're trying really hard not to forget.

-- 5-4-3-2-1-bang from this thread

founded 4 years ago