this post was submitted on 24 Jan 2025
93 points (100.0% liked)
you are viewing a single comment's thread
that's a deeply reactionary take
LLMs are literally reactionary by design but go off
They're just automation
https://redsails.org/artisanal-intelligence/
https://www.artnews.com/art-in-america/features/you-dont-hate-ai-you-hate-capitalism-1234717804/
They’re not just automation, though.
Industrial automation is purpose-built equipment and software designed by experts, with very specific boundaries set to ensure that tightly regulated specifications can be met - i.e., if you are designing and building a car, you’d better make sure the automation doesn’t do things it’s not supposed to do.
LLMs are general-purpose language models that can be called up to spew out anything, without proper reference to their reasoning. You can technically use them to “automate” certain tasks, but they are not subject to the same kinds of rules and regulations employed in industrial settings, where tiny miscalculations can lead to serious consequences.
This is not to say that they are useless and cannot aid in a workflow, but their real use cases have to be manually curated and extensively tested by experts in the field, with all the caveats of potential hallucinations that can cause severe consequences if not caught in time.
What you’re looking for is AGI, and the current iterations of AI are the furthest you can get from an AGI that can actually reason and think.
That's not the case with stuff like neurosymbolic models and what DeepSeek R1 is doing. These types of models do actual reasoning and can explain the steps they use to arrive at a solution. If you're interested, this is a good read on the neurosymbolic approach https://arxiv.org/abs/2305.00813
However, automation doesn't just apply to stuff like factory work. If you read the articles I linked above, you'll see that they're specifically talking about automating aspects of producing media such as visual content.
The “chain of thought” output simply shows the “progress” and the specific path/approach the model took to arrive at a particular answer. That is useful for tweaking and troubleshooting a model’s parameters to improve accuracy and reduce hallucinations, but it is not the same reasoning that would come from a human mind.
The transformer architecture is really just a statistical model built to have very strong memory retention when it comes to making associations (in the case of LLMs, between words). It fundamentally cannot think or reason. It takes a specific “statistical” path and arrives at an answer based on the associations it has been trained on, but you cannot make it think and reason the way we do, nor can it evaluate or verify the validity of a piece of information through cognitive reasoning.
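To make that “statistical path” concrete, here’s a minimal sketch of how a next token gets picked: raw association scores are turned into probabilities and the strongest association wins. The candidate words and scores are made up for illustration; a real transformer does this over tens of thousands of tokens with learned weights.

```python
import math

def softmax(logits):
    # Convert raw association scores into a probability distribution.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate next tokens and association scores -- made-up
# numbers for illustration, not real model output.
candidates = ["claim", "hypothesis", "vibes"]
logits = [2.0, 1.5, -3.0]

probs = softmax(logits)
# The model "decides" by following the strongest learned association;
# no verification of validity happens anywhere along this path.
best = candidates[probs.index(max(probs))]
print(best)  # "claim"
```

Nothing in that path checks whether the chosen continuation is true; it only checks which continuation is statistically likeliest.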
The fact that there is nuance does not preclude artifacts from being political, whether intentionally or not.
While I don't know whether this applies to DeepSeek R1, the Internet perpetuates many human biases, and machine learning will approximate and pick up on those biases regardless of which country is doing the training. Sure, you can try to tell LLMs trained on the Internet not to do that (we've at least gotten better at it than Tay in 2016), but that still goes about as well as telling a human not to be biased, at best.
I personally don't buy the argument that you should hate the designer instead of the technology, in the same way we shouldn't excuse a member of Congress's actions because of the military-industrial complex, capitalism, systemic racism, or whatever else ensured they're in such a position.
I don't see these tools replacing humans in the decision making process, rather they're going to be used to automate a lot of tedious work with the human making high level decisions.
That's fair, but human oversight doesn't mean they'll necessarily catch the biases in its output.
We already have that problem with humans as well though.
What does that even mean
they "react" to your input and every letter after i guess?? lmao
Hard disk drives are literally revolutionary by design because they spin around. Embrace the fastest spinning and most revolutionary storage media
sorry sweaty, ssds are problematic
Scratch an SSD and an NVMe bleeds.
Sufi whirling is the greatest expression of revolutionary spirit in all of time.
Pushing glasses up nose further than you ever thought imaginable *every token after
hey man come here i have something to show you
It's a model with a heavy Cold War liberalism bias (due to the information fed to it); unless you prompt it otherwise, you'll get freedom/markets/entrepreneurs out of it for any problem. And people are treating these models as the gospel of an impartial observer.
The fate of the world will ultimately be decided by garbage answers spewed out by an LLM trained on Reddit posts. That’s just what the future leaders of the world will base their decisions on.
Future senator getting "show hog" as the answer to some question with 0.000001 probability: well, if the god-machine says so
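The joke contains real arithmetic: sampling-based decoding draws from the full output distribution, so any token with nonzero probability can eventually come out. A toy calculation (both numbers are hypothetical):

```python
# Per-answer probability of the absurd token, and a hypothetical number
# of answers drawn across all users/queries (both numbers made up).
p = 0.000001
queries = 2_000_000

# Sampling decoders draw from the full distribution, so "practically
# never" per answer still means "happens eventually" at scale:
p_never = (1 - p) ** queries
print(f"chance it appears at least once: {1 - p_never:.0%}")  # 86%
```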
That's not the technology's fault though, it's just that the technology is produced by an imperialist capitalist society that treats cold war propaganda as indisputable fact.
Feed different data to the machine and you will get different results. For example if you just train a model on CIA declassified documents it will be able to answer questions about the real role of the CIA historically. Add a subjective point of view on these events and it can either answer you with right wing bullshit if that's what you gave it, or a marxist analysis of the CIA as an imperialist weapon that it is.
As with technology in general, its effect on society lies in the hands that wield it.
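As a toy illustration of "feed different data, get different results": the same trivial bigram "model" trained on two made-up corpora produces opposite completions purely because of its training text. This is a sketch of the principle, not how real LLM training works at scale.

```python
from collections import Counter, defaultdict

def train(text):
    # The same trivial bigram "architecture" regardless of the data.
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def complete(model, word):
    # Reproduce the likeliest association seen in training -- no analysis.
    return model[word].most_common(1)[0][0]

# Two hypothetical corpora with opposite framings of the same subject.
corpus_a = "the cia defends freedom the cia defends markets"
corpus_b = "the cia overthrows governments the cia overthrows unions"

print(complete(train(corpus_a), "cia"))  # defends
print(complete(train(corpus_b), "cia"))  # overthrows
```

Which also illustrates the reply below this: either way the model is only reproducing the likeliest association it was fed, not "having an analysis".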
Even put that way, if one feeds it CIA files to one's heart's content, the weights of the words needed to construct sentences are still sitting somewhere in there. (Also, answering about the real role of the CIA implies the LLM has any idea about reality; it will just bias the answer in another direction. Same with the Marxist analysis: it will just reproduce the likeliest answer resembling the Marxist literature you fed it, not "have an analysis".)
A benign application of LLMs is natural language processing into fixed functions on the back end (e.g. turn off the lights when it starts raining, or whatever - something that can be distilled from millions of phrasings into the same set of instructions; here its fuzziness is great).
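That benign pattern can be sketched as a fuzzy language front end constrained to a fixed whitelist of backend functions. The intent labels and the keyword matcher below are hypothetical stand-ins for whatever model does the classifying; the point is that the backend can only ever run functions from the fixed table.

```python
def lights_off():
    return "lights off"

def lights_on():
    return "lights on"

# The backend only ever executes functions from this fixed table, so the
# language model's fuzziness can't make it act outside the specification.
INTENTS = {
    "lights_off": lights_off,
    "lights_on": lights_on,
}

def classify(utterance):
    # Toy keyword matcher standing in for the language model: millions of
    # phrasings collapse onto the same small set of intents.
    text = utterance.lower()
    if "off" in text or "rain" in text or "dark" in text:
        return "lights_off"
    return "lights_on"

def handle(utterance):
    return INTENTS[classify(utterance)]()

print(handle("it's starting to rain, kill the lights"))  # lights off
print(handle("brighten things up in here"))              # lights on
```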
"let's just use autocorrect to create the future this is definitely cool and not regressive and reactionary and a complete recipe for disaster"
It's technology with many valid use-cases. The misapplication of the technology by capital doesn't make the tech itself inherently reactionary.
It's incredibly power hungry.
except this one doesn't require as much power and training costs, which is where the resource intensive problem resides.
The context of the discussion is that it's already 50x less power hungry than just a little while ago.
For now. We've been seeing great strides in reducing that power hunger recently, including by the LLM that's the subject of this post.
That also doesn't make it inherently reactionary.
Due to the market economy in both the United States and China, further development of LLM efficiency is probably the worst thing that could possibly happen. Even if China did not want to subject LLMs to market forces, it is going to need to compete with the US. This is going to further accelerate the climate disaster.
Again, an issue with capitalism and not the technology itself.
Well I agree with you there. Too bad there's all this capitalism.
For now. Are we supposed to just halt all technological progress because capitalism is inevitably going to misuse it? Should we stop trying to develop new medical treatments and drugs because capitalism is going to prevent all but the wealthiest from accessing them in our lifetime?
Regardless, my point was that the tech itself isn't inherently reactionary. Not that it won't be misused under capitalism.
A hundred years ago I'd agree with you that technological progress is more important. Now, I don't know. We need to be triaging the climate crisis instead of wasting time making shit exponentially worse. I half-jokingly believe that western knowledge workers should go full Luddite and smash the data centers and backups. Joking because western knowledge workers would never do that in a million years.
Medical technology doesn't carry the same negatives. I don't agree with the other person that it's inherently reactionary, but the theoretical value of its benevolent application doesn't mean much when, for all intents and purposes, it serves reactionary goals right now, in the material world.
One of the use-cases of this technology is assisting in drug discovery and medical research, which is why I gave it as an example.
Kind of wondering why China needs to compete in this realm? Unless there is something from LLMs that improves the productive forces in a country, I don't see any other reason.
At least the space race had something to do with a strategic military advantage
Vacuum tubes were too
This is a stupid take. I like the autocorrect analogy generally, but this veers into Luddism.
Let me add, the way we're pushed to use LLMs is pretty dumb and a waste of time and resources, but the technology has pretty fascinating use-cases in material and drug discovery.
This is mainly hype. The process of creating AI has been useful for drug discovery, but LLMs as people practically know them (e.g. ChatGPT) have not been, other than for the same kind of sloppy, corner-cutting, cost-cutting labor bullshit.
If you read a lot of the practical applications in the papers, it's mostly publish-or-perish crap where they're gushing about how drug trials should be like going to cvs.com, where you get a robot you can ask to explain something to you and it spits out the same thing reworded 4-5 times.
They're simply pushing consent protocols onto robots rather than nurses, which TBH should be an ethical violation.
Just like every technological advancement. The problem isn't the technology but how capitalism puts it to use
🙄