[–] yogthos@lemmygrad.ml 22 points 1 week ago (2 children)

that's a deeply reactionary take

[–] peppersky@hexbear.net 11 points 1 week ago (2 children)

LLMs are literally reactionary by design but go off

[–] yogthos@lemmygrad.ml 26 points 1 week ago (2 children)
[–] xiaohongshu@hexbear.net 11 points 1 week ago* (last edited 1 week ago) (1 children)

They’re not just automation, though.

Industrial automation means purpose-built equipment and software designed by experts, with very specific boundaries set to ensure that tightly regulated specifications are met - i.e., if you are designing and building a car, you’d better make sure the automation doesn’t do things it’s not supposed to do.

LLMs are general-purpose language models that can be called up to spew out anything, without proper reference to their reasoning. You can technically use them to “automate” certain tasks, but they are not subject to the same kind of rules and regulations employed in industrial settings, where tiny miscalculations can lead to serious consequences.

This is not to say that they are useless and cannot aid the workflow, but their real use cases have to be manually curated and extensively tested by experts in the field, with all the caveats of potential hallucinations that can cause severe consequences if not caught in time.

What you’re looking for is AGI, and the current iterations of AI are the furthest you can get from an AGI that can actually reason and think.

[–] yogthos@lemmygrad.ml 3 points 1 week ago (1 children)

That's not the case with stuff like neurosymbolic models and what DeepSeek R1 is doing. These types of models do actual reasoning and can explain the steps they take to arrive at a solution. If you're interested, this is a good read on the neurosymbolic approach: https://arxiv.org/abs/2305.00813
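Roughly, the neurosymbolic pattern looks like this: a neural model proposes a formal expression, and a symbolic engine does the exact solving and checking. Here's a minimal sketch; `propose_equation` is a hypothetical stand-in for an LLM call, and using sympy is just an illustrative choice, not what the paper itself does:

```python
import sympy as sp

def propose_equation(problem: str) -> str:
    # Hypothetical stand-in for a neural model translating a word
    # problem into algebra; here the output is simply hard-coded.
    return "2*x + 6 - 20"

def solve_symbolically(expr_text: str):
    # The symbolic side: exact, deterministic, and checkable,
    # unlike sampling more tokens out of the model.
    x = sp.Symbol("x")
    expr = sp.sympify(expr_text)
    return sp.solve(expr, x)

problem = "Twice a number plus six is twenty. What is the number?"
print(solve_symbolically(propose_equation(problem)))  # [7]
```

The division of labor is the point: the network handles the fuzzy language, the symbolic engine handles the part where being wrong isn't an option.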

However, automation doesn't just apply to stuff like factory work. If you read the articles I linked above, you'll see that they're specifically talking about automating aspects of producing media such as visual content.

[–] xiaohongshu@hexbear.net 10 points 1 week ago (7 children)

The “chain of thought” output simply gives you the “progress” and the specific path/approach by which the model arrived at a particular answer - which is useful for tweaking and troubleshooting a model’s parameters to improve accuracy and reduce hallucinations, but it is not the same reasoning that a human mind produces.

The transformer architecture is really just a statistical model built to have very strong memory retention when it comes to making associations (in the case of LLMs, between words). It fundamentally cannot think or reason. It takes a specific “statistical” path and arrives at an answer based on the associations it has been trained on, but you cannot make it think and reason the way we do, nor can it evaluate or verify the validity of a piece of information through cognitive reasoning.
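To make the “statistical path” concrete, here is a toy bigram model: it “completes” text purely from co-occurrence counts. A transformer is vastly more sophisticated, but the failure mode is the same in kind: it follows the likeliest continuation, not a reasoned one. The corpus is an invented one-liner, purely for illustration:

```python
from collections import Counter, defaultdict

# Train on raw co-occurrence: which word tends to follow which.
corpus = "the market is free the market is efficient the market decides".split()
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(word: str, steps: int = 4) -> str:
    # Greedily follow the statistically likeliest path. No model of
    # truth, logic, or intent is involved anywhere, only frequency.
    out = [word]
    for _ in range(steps):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # "the market is free the" - association, not analysis
```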

[–] ThermonuclearEgg@hexbear.net 7 points 1 week ago (1 children)

> They're just automation

The fact that there is nuance does not preclude artifacts from being political, whether intentionally or not.

While I don't know whether this applies to DeepSeek R1, the Internet perpetuates many human biases, and machine learning will approximate and pick up on those biases regardless of which country does the training. Sure, you can try to tell LLMs trained on the Internet not to do that (we've at least gotten better at it than Tay in 2016), but at best that still goes about as well as telling a human not to be biased.

I personally don't buy the argument that you should hate the designer instead of the technology, in the same way that we shouldn't excuse a member of Congress's actions because of the military-industrial complex, or capitalism, or systemic racism, or whatever else ensured they're in such a position.

[–] yogthos@lemmygrad.ml 6 points 1 week ago (3 children)

I don't see these tools replacing humans in the decision-making process; rather, they're going to be used to automate a lot of tedious work, with a human making the high-level decisions.

[–] ThermonuclearEgg@hexbear.net 7 points 1 week ago (1 children)

That's fair, but human oversight doesn't mean the humans will necessarily catch biases in the output.

[–] yogthos@lemmygrad.ml 3 points 6 days ago

We already have that problem with humans as well though.

[–] Outdoor_Catgirl@hexbear.net 14 points 1 week ago (2 children)
[–] shath@hexbear.net 18 points 1 week ago (2 children)

they "react" to your input and every letter after i guess?? lmao

[–] Hermes@hexbear.net 37 points 1 week ago (2 children)

Hard disk drives are literally revolutionary by design because they spin around. Embrace the fastest spinning and most revolutionary storage media gustavo-brick-really-rollin

[–] comrade_pibb@hexbear.net 13 points 1 week ago (1 children)

sorry sweaty, ssds are problematic

[–] Hermes@hexbear.net 17 points 1 week ago

Scratch an SSD and an NVMe bleeds.

[–] culpritus@hexbear.net 10 points 1 week ago

Sufi whirling is the greatest expression of revolutionary spirit in all of time.

[–] bobs_guns@lemmygrad.ml 12 points 1 week ago (1 children)

Pushing glasses up nose further than you ever thought imaginable *every token after

[–] shath@hexbear.net 10 points 1 week ago

hey man come here i have something to show you

[–] plinky@hexbear.net 9 points 1 week ago (2 children)

It's a model with a heavy cold-war-liberalism bias (due to the information fed to it); unless you prompt it otherwise, you'll get freedom/markets/entrepreneurs out of it for any problem. And people are treating them as the gospel of an impartial observer - shrug-outta-hecks

[–] xiaohongshu@hexbear.net 13 points 1 week ago* (last edited 1 week ago) (1 children)

The fate of the world will ultimately be decided on garbage answers spewed out by an LLM trained on Reddit posts. That’s just what the future leaders of the world will base their decisions on.

[–] plinky@hexbear.net 6 points 1 week ago

Future senator getting "show hog" as the 0.000001-probability answer to some question: well, if the god-machine says so

[–] iByteABit@hexbear.net 10 points 1 week ago (2 children)

That's not the technology's fault, though; it's just that the technology is produced by an imperialist capitalist society that treats cold-war propaganda as indisputable fact.

Feed different data to the machine and you will get different results. For example, if you train a model solely on declassified CIA documents, it will be able to answer questions about the CIA's real historical role. Add a subjective point of view on those events, and it can answer either with right-wing bullshit, if that's what you gave it, or with a Marxist analysis of the CIA as the imperialist weapon that it is.
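As a toy illustration, take the same kind of bigram counter sketched earlier in the thread and train it on two different corpora (invented one-liners, purely for illustration). Same prompt in, different "facts" out:

```python
from collections import Counter, defaultdict

def train(corpus: str):
    # Count which word follows which in the training text.
    words = corpus.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def complete(model, word: str, steps: int = 3) -> str:
    # Greedily follow the likeliest continuation learned from the data.
    out = [word]
    for _ in range(steps):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus_a = "the cia protects national security interests abroad"
corpus_b = "the cia overthrows elected governments for corporate interests"

print(complete(train(corpus_a), "cia"))  # cia protects national security
print(complete(train(corpus_b), "cia"))  # cia overthrows elected governments
```

The machinery is identical in both cases; only the data differs.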

As with technology in general, its effect on society lies in the hands that wield it.

[–] plinky@hexbear.net 4 points 1 week ago* (last edited 1 week ago)

Put it that way: even if one feeds it CIA files to one's heart's content, the word weights needed to construct sentences are still sitting in there somewhere. (Also, answering about the real role of the CIA implies the LLM has any idea about reality; it will just bias the answer in another direction. Same with a Marxist analysis: it will just reproduce the likeliest answer resembling the Marxist literature you fed it, not "have an analysis".)

A benign application of LLMs is natural-language processing into fixed functions on the back end (e.g. turn off the lights when it starts raining, or whatever: something where millions of phrasings can be collapsed into the same set of instructions; here its fuzziness is great). A rough sketch of that pattern is below.
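Something like this, where `llm_parse_intent` is a hypothetical stand-in for the model call, and the fuzzy front end can only ever trigger vetted, fixed functions:

```python
import json

# The fixed back end: a closed whitelist of actions the system can take.
ALLOWED_ACTIONS = {
    "lights_off": lambda: print("lights off"),
    "lights_on": lambda: print("lights on"),
}

def llm_parse_intent(utterance: str) -> str:
    # Hypothetical stand-in: a real system would prompt an LLM to emit
    # JSON like {"action": "lights_off"} for any of millions of phrasings.
    return json.dumps({"action": "lights_off"})

def handle(utterance: str) -> None:
    request = json.loads(llm_parse_intent(utterance))
    action = ALLOWED_ACTIONS.get(request.get("action"))
    if action is None:
        # The model's fuzziness is contained: anything outside the
        # whitelist is rejected rather than executed.
        raise ValueError("model proposed an action outside the whitelist")
    action()

handle("it's starting to rain, kill the lights please")  # prints "lights off"
```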

[–] peppersky@hexbear.net 4 points 1 week ago (3 children)

"let's just use autocorrect to create the future this is definitely cool and not regressive and reactionary and a complete recipe for disaster"

[–] crime@hexbear.net 26 points 1 week ago* (last edited 1 week ago) (2 children)

It's technology with many valid use-cases. The misapplication of the technology by capital doesn't make the tech itself inherently reactionary.

[–] Dessa@hexbear.net 8 points 1 week ago (4 children)

It's incredibly power hungry.

[–] marxisthayaca@hexbear.net 2 points 4 days ago

except this one doesn't require as much power or training cost, which is where the resource-intensity problem resides.

[–] yogthos@lemmygrad.ml 20 points 1 week ago

The context of the discussion is that it's already 50x less power-hungry than it was just a little while ago.

[–] crime@hexbear.net 14 points 1 week ago* (last edited 1 week ago) (1 children)

For now. We've been seeing great strides in reducing that power hunger recently, including by the LLM that's the subject of this post.

That also doesn't make it inherently reactionary.

[–] enkifish@hexbear.net 11 points 1 week ago (2 children)

> We've been seeing great strides in reducing that power hunger recently, including by the LLM that's the subject of this post.

Due to the market economy in both the United States and China, further development of LLM efficiency is probably the worst thing that could possibly happen. Even if China did not want to subject LLMs to market forces, it is going to need to compete with the US. This is going to further accelerate the climate disaster.

[–] crime@hexbear.net 15 points 1 week ago (1 children)

Again, an issue with capitalism and not the technology itself.

[–] enkifish@hexbear.net 8 points 1 week ago (1 children)

Well I agree with you there. Too bad there's all this capitalism.

[–] crime@hexbear.net 8 points 1 week ago (2 children)

For now. Are we supposed to just halt all technological progress because capitalism is inevitably going to misuse it? Should we stop trying to develop new medical treatments and drugs because capitalism is going to prevent all but the wealthiest from accessing them in our lifetime?

Regardless, my point was that the tech itself isn't inherently reactionary. Not that it won't be misused under capitalism.

[–] enkifish@hexbear.net 8 points 1 week ago

A hundred years ago I'd agree with you that technological progress is more important. Now, I don't know. We need to be triaging the climate crisis instead of wasting time making shit exponentially worse. I half-jokingly believe that western knowledge workers should go full Luddite and smash the data centers and their backups. Joking because western knowledge workers would never do that in a million years.

[–] Dessa@hexbear.net 5 points 1 week ago* (last edited 1 week ago) (1 children)

Medical technology doesn't carry the same negatives. I don't agree with Other Person that it's inherently reactionary, but the theoretical value of its benevolent application doesn't mean much when, for all intents and purposes, it serves reactionary goals right now, in the material world.

[–] crime@hexbear.net 6 points 1 week ago

One of the use-cases of this technology is assisting in drug discovery and medical research, which is why I gave it as an example.

[–] Cimbazarov@hexbear.net 5 points 1 week ago

Kind of wondering why China needs to compete in this realm? Unless there is something about LLMs that improves a country's productive forces, I don't see any other reason.

At least the space race had something to do with a strategic military advantage

[–] GaryLeChat@lemmygrad.ml 5 points 1 week ago

Vacuum tubes were too

[–] tripartitegraph@hexbear.net 14 points 1 week ago* (last edited 1 week ago) (3 children)

This is a stupid take. I like the autocorrect analogy generally, but this veers into Luddism.
Let me add: the way we're pushed to use LLMs is pretty dumb and a waste of time and resources, but the technology has pretty fascinating use-cases in materials and drug discovery.

[–] piggy@hexbear.net 7 points 1 week ago* (last edited 1 week ago) (4 children)

> drug discovery

This is mainly hype. The process of creating AI has been useful for drug discovery; LLMs as people practically know them (e.g. ChatGPT) have not been, beyond the same kind of sloppy, corner-cutting labor cost reduction bullshit.

If you read a lot of the practical applications in the papers, it's mostly publish-or-perish crap gushing about how drug trials should be like going to cvs.com, where you get a robot you can ask to explain something to you, and it spits out the same thing reworded 4-5 times.

They're simply pushing consent protocols onto robots rather than nurses, which TBH should be an ethical violation.

[–] Cimbazarov@hexbear.net 7 points 1 week ago

Just like every technological advancement. The problem isn't the technology but how capitalism puts it to use.
