[–] redcalcium@lemmy.institute 45 points 1 year ago* (last edited 1 year ago) (15 children)

Say, if you compress some data using these LLMs, how hard is it to decompress the data again without access to the LLM used to perform the compression? Will the compression "algorithm" used by the LLM be the same for all runs (which means you could probably reverse engineer it to create a decompressor program), or will it be different every time it compresses new data?

I mean, having to download a huge LLM to decompress some data, which probably also requires a GPU with a lot of VRAM, seems a bit much.

[–] YellowBendyBoy@lemmy.world 20 points 1 year ago (1 children)

It's probably more that the LLM is able to "pack the truck" much more efficiently, and decompression should work the same way.

But I agree that the likely use case of uploading all your files to the cloud, having it compress them, and downloading a result that is a few kB smaller isn't really practical, time-efficient, or even needed at all.

[–] DarkenLM@kbin.social 11 points 1 year ago (2 children)

Correct me if I'm wrong, but don't algorithms like Huffman coding, or even Shannon-Fano coding with blocks, already pack files about as efficiently as possible? It's impossible to compress a file beyond its entropy, and those algorithms get pretty damn close to it.
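
A minimal sketch of the entropy claim above, for reference (not from the thread; the message and helper names are illustrative): it builds Huffman code lengths over byte frequencies and compares the average code length with the Shannon entropy, the per-symbol lower bound being referred to.

```python
# Illustrative only: Huffman code lengths via a heap of merged subtrees,
# compared against the Shannon entropy of the same byte string.
import heapq
from collections import Counter
from math import log2

def huffman_code_lengths(data: bytes) -> dict:
    """Return the Huffman code length (in bits) for each byte value."""
    freq = Counter(data)
    if len(freq) == 1:                      # degenerate case: one distinct symbol
        return {sym: 1 for sym in freq}
    # Heap entries: (weight, tiebreaker, {symbol: depth so far})
    heap = [(w, i, {sym: 0}) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)
        w2, _, b = heapq.heappop(heap)
        merged = {sym: depth + 1 for sym, depth in {**a, **b}.items()}
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

data = b"an example message with plenty of repeated letters eeeeeee"
freq, n = Counter(data), len(data)
entropy = -sum((c / n) * log2(c / n) for c in freq.values())
lengths = huffman_code_lengths(data)
avg_bits = sum(freq[s] * lengths[s] for s in freq) / n
print(f"entropy : {entropy:.3f} bits/symbol")
print(f"Huffman : {avg_bits:.3f} bits/symbol (within 1 bit of the entropy)")
```

Huffman is only guaranteed to land within one bit per symbol of the entropy, which is part of why block codes and arithmetic coding can do better.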

[–] Hexagon@feddit.it 8 points 1 year ago* (last edited 1 year ago) (1 children)

We're likely talking about lossy compression here

[–] Enkers@sh.itjust.works 15 points 1 year ago (1 children)

That was my first thought as well, but it doesn't seem to be the case:

In their study, the Google DeepMind researchers repurposed open-source LLMs to perform arithmetic coding, a type of lossless compression algorithm.
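
To make the "LLM as compressor" idea concrete: in arithmetic coding, a predictive model narrows an interval for each symbol, and the decoder must run the exact same model to undo it, which is why you'd need the LLM on both ends. Below is a toy sketch of that principle, with a fixed three-symbol probability table standing in for an LLM's next-token distribution; it illustrates model-based arithmetic coding in general, not the paper's implementation.

```python
# Toy model-based arithmetic coding with exact fractions (no renormalization).
# The key point: decode() only works because it uses the same MODEL as encode().
from fractions import Fraction

MODEL = {"a": Fraction(5, 10), "b": Fraction(3, 10), "c": Fraction(2, 10)}

def cumulative(model):
    lo, table = Fraction(0), {}
    for sym, p in model.items():
        table[sym] = (lo, lo + p)   # each symbol owns a slice of [0, 1)
        lo += p
    return table

def encode(msg):
    table = cumulative(MODEL)
    low, width = Fraction(0), Fraction(1)
    for sym in msg:
        c_lo, c_hi = table[sym]
        low, width = low + width * c_lo, width * (c_hi - c_lo)
    return low  # any number in [low, low + width) identifies the message

def decode(code, length):
    table = cumulative(MODEL)
    out = []
    for _ in range(length):
        for sym, (c_lo, c_hi) in table.items():
            if c_lo <= code < c_hi:
                out.append(sym)
                code = (code - c_lo) / (c_hi - c_lo)  # rescale and continue
                break
    return "".join(out)

msg = "abacab"
code = encode(msg)
assert decode(code, len(msg)) == msg
# The better the model predicts the data, the wider the final interval,
# and the fewer bits (~ -log2(width)) are needed to pin down a point in it.
```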

[–] Hexagon@feddit.it 5 points 1 year ago

... and this is why I should actually read the articles before commenting lol

[–] zero_iq@lemm.ee 5 points 1 year ago* (last edited 1 year ago) (1 children)

Correct me if I’m wrong

Well actually, yes, I'm sorry to have to tell you that you are wrong. Shannon-Fano coding is suboptimal for prefix codes, and Huffman coding, while optimal among prefix codes, is not necessarily the most efficient compression method for any given data (and often isn't).

Huffman can be optimal given certain strict constraints, but those constraints don't always occur in natural/real-world data.

The best compression method (whether lossless or lossy) depends greatly on the nature of the data to be compressed. Patterns and biases can make certain methods much more efficient (or more practical) in some cases, when they might be useless elsewhere or in general. This is why data is often transformed before compression, using a reversible transformation that "encourages" certain desirable statistical characteristics in the data, so the compression method can better exploit them.

For example, compression software may apply other encodings before Huffman coding to get a better compression ratio: gzip runs LZ77 matching first, and bzip2 performs a Burrows-Wheeler transform and move-to-front encoding. If Huffman coding were an optimal compression method for all possible data, this would be redundant! Often, e.g. in medical imaging or audio/video data, the data is best analysed in a different domain (the frequency domain instead of the time/spatial domain, say) to better reveal the underlying patterns and redundancies so they can be easily exploited by compression.
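
As a concrete illustration of such a reversible transformation (a sketch, not taken from any particular tool): a naive Burrows-Wheeler transform groups similar characters into runs, which a later stage like move-to-front plus Huffman coding can then exploit. The input string here is made up.

```python
# Naive Burrows-Wheeler transform, O(n^2 log n) -- fine for a demo.
# The transform saves nothing by itself; it just rearranges the data so that
# similar characters end up adjacent, making it easier to compress afterwards.
def bwt(s: str, eos: str = "\0") -> str:
    s += eos                                   # unique end-of-string marker
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(last_col: str, eos: str = "\0") -> str:
    table = [""] * len(last_col)
    for _ in range(len(last_col)):             # repeatedly prepend and sort
        table = sorted(last_col[i] + table[i] for i in range(len(last_col)))
    row = next(r for r in table if r.endswith(eos))
    return row.rstrip(eos)

text = "banana bandana"
transformed = bwt(text)
print(transformed)                             # similar characters cluster together
assert inverse_bwt(transformed) == text        # fully reversible
```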

[–] DarkenLM@kbin.social 3 points 1 year ago

No need to be sorry, I am well aware I can be wrong, and I'd rather learn something new than be bashed for being wrong.

Maybe I phrased it differently from how I thought about it. I didn't mean to claim that Shannon-Fano or Huffman coding are THE most efficient ways of doing it, but rather that, compared to the massive overhead of running an LLM to compress a file, the current methods are far more resource-efficient, even one as obsolete as Shannon-Fano coding.

I should probably have mentioned an algorithm like LZMA, or gzip, like you did.
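
For what it's worth, a quick way to see how cheap those classical methods are, using only Python's standard library (the sample payload is made up):

```python
# Compare classic compressors on a highly repetitive made-up payload.
import lzma
import zlib

data = b"an example payload with plenty of repetition " * 2000
print("original :", len(data))
print("zlib/gzip:", len(zlib.compress(data, level=9)))   # DEFLATE, as used by gzip
print("lzma     :", len(lzma.compress(data, preset=9)))  # LZMA, as used by xz / 7-Zip
```

Both should finish in well under a second on an ordinary CPU, with no multi-gigabyte model to download first.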
