You are removing the most computationally intensive part of the process in your example, which makes it sound easy, while adding it back shows that your process is not practical.
The first thing I said was, "the more you compress something, the more processing power you're going to need [to decompress it]."
I'm not removing the most computationally expensive part by any means, and you are misunderstanding the process if you think that.
That's why I specified:
And again
Those 30-60+ second estimates are based on someone using an RTX 4090, the top-end consumer-grade GPU of today. They could speed up the process with multiple GPUs or even enterprise-grade equipment, but that's why I mentioned that this depends on hardware.
So, yes, this very specific example is not practical for Neuralink (I even said as much in my original example), but it still works very well for explaining a method that can achieve a compression ratio of over 20,000x.
Yes, you need power, energy, and time to generate the original image, and yes, you need power, energy, and time to regenerate it on a different computer. But to transmit the information needed to regenerate that image, you only need to convey a tiny message.
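
To make that concrete, here's a rough Python sketch of the idea (not anything Neuralink actually does): the prompt, the seed, the image size, and the generate_image() function are all placeholders I'm inventing for illustration. The point is just that the transmitted payload is a few dozen bytes while the image it stands for is megabytes, and all the heavy compute happens at the two ends rather than in the transmission.

```python
import json

def generate_image(prompt: str, seed: int) -> bytes:
    """Hypothetical stand-in for a deterministic, seeded image generator
    (e.g. a diffusion model): same (prompt, seed) in, same pixels out.
    In practice this is the slow, GPU-heavy step."""
    raise NotImplementedError("placeholder for a seeded image generator")

# --- sender side: the "compressed" form is just this tiny recipe ---
recipe = {"prompt": "a red fox sitting in fresh snow at sunrise", "seed": 1234567}
payload = json.dumps(recipe).encode("utf-8")  # this is all that gets transmitted

# --- receiver side: spend GPU time to rebuild the image from the recipe ---
# image = generate_image(recipe["prompt"], recipe["seed"])  # seconds to minutes on a GPU

# --- why the ratio is so large ---
raw_image_size = 1024 * 1024 * 3  # a 1024x1024 RGB image, ~3 MB uncompressed
ratio = raw_image_size / len(payload)
print(f"payload: {len(payload)} bytes, raw image: {raw_image_size} bytes, ratio: ~{ratio:,.0f}x")
```

With those made-up numbers the ratio works out to well over 20,000x, which is exactly why the decompression step eats so much GPU time: almost none of the information travels over the wire, so the receiver has to do the work of rebuilding it.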