
This is probably not the right place, but I figured I'd give it a try. I've been trying to copy this website so I could have offline access when tooling around the Channel Islands. I could (and probably will) just copy/paste whatever info I need for my little trips, but the fact that I can't copy it all over in one go is annoying the hell out of me. I've tried variations of wget -r ... as well as httrack, with no success. Anyone have any idea how I can get this?
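
For reference, "variations of wget -r" and httrack attempts typically look something like the sketch below; the URL and output path are placeholders, and commands along these lines still won't pick up anything the site loads client-side with JavaScript:

```sh
# Sketch of a typical wget mirror attempt (placeholder URL; not guaranteed to
# work on sites that render their content with JavaScript or block crawlers).
wget --mirror \
     --convert-links \
     --adjust-extension \
     --page-requisites \
     --no-parent \
     --wait=1 \
     https://example.com/

# Roughly equivalent httrack attempt: mirror into ./mirror, staying on the site.
httrack "https://example.com/" -O "./mirror" "+*.example.com/*" -v
```

If invocations like these only come back with a skeleton page, the actual content is probably being fetched from a backend after the page loads, which is what the first reply below is getting at.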

top 10 comments
[-] thereddevil@sh.itjust.works 5 points 1 year ago

Do you want it for one trip or all?

For one trip, you can just select the page and save it as HTML from your browser.

For all trips, if the page is making requests to a backend to fetch its data, I'm not sure it's possible.

[-] sin_free_for_00_days@lemmy.one 1 point 1 year ago

I'm trying to get all of it. I mean, I've pretty much manually copied most of the info already; it's just that not being able to do it all at once is bugging me. I'll go out to one of the islands, and there are numerous anchorages. So what I've been doing is copying the one or two that I know I'll be going to, then picking a couple of alternatives so I have options. It's not the end of the world, because I'm not a complete idiot: I do have NOAA charts and can navigate with them. It's just that the website is great for specifics on the spots.

[-] redcalcium@c.calciumlabs.com 5 points 1 year ago* (last edited 1 year ago)

Another option is to use the SingleFile browser extension. You can install it from your browser's extension marketplace.

SingleFile can download an entire web page into a single HTML file while preserving the layout and images. It basically inlines everything into a single file, which is very handy. The drawback is you'll have to do it manually on every page you want to save, but it works really well. Even the Google Maps embed in the page you linked got preserved (it's not interactive anymore, though, because JavaScript isn't preserved).

[-] sin_free_for_00_days@lemmy.one 1 point 1 year ago

Oh, that is nice. Thanks for the link. That works great: basically the same effort as copy/pasting, but with far better results.

[-] ash@lemm.ee 5 points 1 year ago

I used to use httrack as well, but it's very outdated. Please let me know if anyone knows of something similar.

[-] byroncoughlin@lemmy.world 2 points 1 year ago

When we were cruising in 2018-2020, we used SiteSucker to download and update noonsite.com for offline reference. It worked very well for us.

https://ricks-apps.com/osx/sitesucker/index.html

[-] sin_free_for_00_days@lemmy.one 1 point 1 year ago

Thanks for the link; unfortunately, I haven't touched anything Apple in at least a decade.

[-] jablz@vlemmy.net 2 points 1 year ago

I've used ArchiveBox as a desktop app (on Linux Mint), which saves a snapshot of the URLs you feed it. It worked for the sites I needed when I was looking for an offline solution.

[-] sin_free_for_00_days@lemmy.one 2 points 1 year ago

I had never heard of that app! I tried it with no luck, but I'm going to keep that in my pocket for later use. Thanks.

[-] ttt3ts@lemmy.dbzer0.com 1 point 1 year ago* (last edited 1 year ago)

Seconding this. I run the ArchiveBox Docker flavor on my media server and have done so for years. Everything I bookmark is automatically saved offline.
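
For anyone curious what that setup looks like, here's a rough sketch based on ArchiveBox's documented Docker usage; the data directory and URL are placeholders, and the exact image name and commands should be checked against the ArchiveBox docs:

```sh
# Rough sketch, assuming ArchiveBox's documented Docker image and commands.
# The data directory and URL are placeholders.
mkdir -p ~/archivebox-data && cd ~/archivebox-data

# One-time initialization of the data directory
docker run -v "$PWD":/data -it archivebox/archivebox init

# Add a page to the archive for offline viewing
docker run -v "$PWD":/data -it archivebox/archivebox add 'https://example.com/some-page'
```

Browsing what's been archived is then done from the same data directory via ArchiveBox's web UI (again, see the docs for the exact server invocation).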
