this post was submitted on 12 Jun 2025

Late Stage Capitalism


generative AI vs just AI...is this the difference?

[–] sevenapples@lemmygrad.ml 9 points 22 hours ago

There's no inherent difference. It's people mistakenly generalizing after hating on current image and text generation AI models. And there's obviously a discussion to be made about how these models were trained unethically by stealing data, but a) that is irrelevant to their usefulness as a technology b) it was recently shown that you can train them with purely public domain data with good results (yogthos posted an article from Stanford the other day).

GenAI is not (inherently) a grift, unless you believe the following applications are nothing but grifts:

  • machine translation
  • image upscaling/restoration/coloring
  • support chatbots
  • etc
[–] amemorablename@lemmygrad.ml 3 points 18 hours ago

Generative AI is the one people tend to hate, but it's still not a grift; more like grifters are ancillary to it. Grifters try to jump on it like they jump on anything they can hype for a quick buck, but the funny thing is that if anyone is actually making easy money that way, it's the ones popping up as quick businesses providing a frontend for an already-existing model (the cheapest way to attempt profiting). Even some of those crash and burn: they offer a free tier to attract users, attempt the "grow until you're big, then majorly monetize" play, but can't sustain it against the high running costs of generative AI and bleed investor money. And the major corps doing generative AI? They're just burning money like it's going out of style, training massive models on massive datasets.

So (in the western AI context, can't speak for China) it's a capitalist style race, but it's more of a race to see who can position themselves as market leader than it is some kind of easy cash infusion.

It's not like crypto or NFTs where it can be called an actual scam. Generative AI does do something that can be functionally useful and/or entertaining. It's just complicated because, beyond a baseline of it not being total crap, there are all kinds of questions about where it is actually useful and for what, and the ethics of pushing it in this or that area by whom.

[–] CriticalResist8@lemmygrad.ml 10 points 1 day ago (1 children)

I mean, AI as a word has been used in tons of different ways. We still say "NPC AI" in video games and that's just a whole bunch of if statements, no LLM involved. And on the other end of the spectrum we still talk about AI in movies like I, Robot, with fully sentient machines. My line is that when I say "AI" with no qualifier, I mean neural networks, the parameter-laden models we hear so much about.
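
The "whole bunch of if statements" kind of NPC AI can be sketched in a few lines; the function name, states, and thresholds here are invented for illustration, not from any real game:

```javascript
// Classic rule-based NPC "AI": plain conditionals, no machine learning at all.
function guardState(distanceToPlayer, guardHealth) {
  if (guardHealth < 20) return "flee";        // self-preservation rule
  if (distanceToPlayer < 5) return "attack";  // engage when close
  if (distanceToPlayer < 15) return "chase";  // pursue when in sight
  return "patrol";                            // default behaviour
}

console.log(guardState(3, 100));  // "attack"
console.log(guardState(50, 100)); // "patrol"
```

Despite the name, nothing here is learned or trained; the "intelligence" is entirely hand-authored rules.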

And I don't think GenAI is "the" grift like the tweet implies, because the AI they describe (machine learning, ML) is the exact same thing - neural networks with trained models. They talk about the Kinect using ML - you can do machine learning without a neural network - but was the Kinect not "wasteful", "unethical", and "useless", to use their words? It was an expensive system that worked okay (in some tech demos) but barely had 3 games. The EyeToy for the PS2 was more fun.

A lot of the conversation around generative AI centers on image generation, and I feel that's harmfully reductive. It revolves around artists and the purity of their art (as if they're the only people impacted by AI) when there's so much more to talk about. GenAI can do code - there was a whole discussion around it here the other day - and maybe it's not super great code, but it can do code nonetheless, and for people who don't code and need something, it gets the job done.

Yes, there is also a whole lot of stuff you can do with AI without LLMs. In fact, I'm not sure how LLMs specifically became so ubiquitous because you can do neural network AI stuff without ever needing an LLM. I remember back when AI became big (2022 or so), China announced they'd used an AI to map the wiring on a new ship model. What took an engineer one year to do was done by the AI in 24 hours.

GenAI, including LLMs, has hard limitations that I think are, conversely, overlooked. LLM AI will not do everything, but it can get you part of the way there. The grift is more that tech companies try to pretend their toy is a panacea. When asked in an interview about AI making stuff up, OpenAI's CTO said: "well you know, it's very human in that regard [emphasis mine], because when we don't know something, what do we do? We make stuff up". They admit it themselves: they have a bullshit generator.

But when it works, it works - you can use GPT 4o or o4 or whatever the new model is called as a tutor, for example, for photoshop, guitar, or whatever other hobby you have. It works great! You can ask it any question you have, like a tutor, instead of being limited to what the page cares to tell you about. And yes, I could ask someone, but: a) people are not necessarily available the moment I have a question, and b) google is crap now, and if you ask on most forums they will tell you to google it. So chatGPT it is. We just have to take into account that it might be making stuff up to the point that you need to double-check, and that OpenAI clearly has no plans to fix that (not that they even could).

For coding, my choice nowadays is to start with chatGPT and then pass it over to deepseek once I have the prototype; it works great.

[–] ksynwa@lemmygrad.ml 3 points 21 hours ago (1 children)

For coding I like to "talk to" Deepseek or Gemini for advice on approaches, because most of the time when I need their help it's for a problem I have never worked on before. I don't like prompting them for whole solutions because it robs me of a learning opportunity, and the code is often weirdly overengineered. Like recently I asked for thoughts on a python script I was working on and they added so much noise over useless shit like parametrising hardcoded values.
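
The "parametrising hardcoded values" noise might look something like this (an invented illustration, not the actual script in question):

```javascript
// What was a one-line constant...
const TIMEOUT_MS = 5000;

// ...comes back from the model wrapped in layers of "configurability":
const DEFAULT_CONFIG = Object.freeze({ timeoutMs: 5000 });

function getTimeout(config = {}) {
  // Merge caller overrides over the defaults.
  const merged = { ...DEFAULT_CONFIG, ...config };
  return merged.timeoutMs;
}

console.log(getTimeout() === TIMEOUT_MS); // true: same value, four times the code
```

Flexibility nobody asked for, at the cost of readability.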

[–] CriticalResist8@lemmygrad.ml 2 points 18 hours ago* (last edited 18 hours ago)

the code is often weirdly overengineered

The GPT + deepseek combo fixes that problem (to my layman's eyes): deepseek tends to overengineer, especially if you don't scope it correctly, but if it starts from an initial script it will mostly stay within it. It's also able to simplify GPT code.

For the life of me I have never been able to learn javascript, so when I need to code something for ProleWiki I just throw the problem at them. You can look at our script file at https://en.prolewiki.org/wiki/MediaWiki:Common.js for what they came up with - the color scheme picker, the typeface picker, and the testimonials code. The color picker is a bit overengineered because we have a theme that should only be available one month out of the year, and it was simpler to hardcode it in.

I still have to think about the problem; if I don't scope it well enough, they will come up with whatever they feel like. For the color schemes, once I had the first custom theme working I told them to refactor the code so that we could add new themes in an array, and they came up with everything. So now I can add unlimited themes by just adding a line in the array.

One potential simplification they don't think of (and hence why it's good to know how the generated code works) is that there's no need for className and LabelId in the array; both could be generated from the value. But eh, it works.
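
A rough sketch of what that themes-in-an-array pattern and the derivation could look like; the names here are hypothetical and the real Common.js almost certainly differs:

```javascript
// Adding a theme is one line in this array.
const themes = [
  { value: "default", label: "Default" },
  { value: "dark",    label: "Dark" },
  { value: "sepia",   label: "Sepia" },
];

// className and labelId don't need to be stored per theme;
// they can be derived from the value on the fly.
function themeEntry(theme) {
  return {
    ...theme,
    className: "theme-" + theme.value,
    labelId: theme.value + "-label",
  };
}

console.log(themeEntry(themes[1]).className); // "theme-dark"
```

Deriving the fields keeps each array line minimal and makes it impossible for a theme's className to drift out of sync with its value.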

edit - using it as a "copilot" is also a good use, though I find that sometimes it just utterly fails. It's still an RNG machine at heart. But if you get it to work, it can really help unlock your other skills and not get stuck on one part of the process.

[–] Xiisadaddy@lemmygrad.ml 13 points 1 day ago

Even generative AI as a technology isn't evil. It's just that tech bros were like "let's do this the most immoral way possible and be really obnoxious about it at the same time." A good analogy would be planes. Planes can be very useful in many circumstances, but because the people who run the world are stupid, evil, and boring, they overuse them where trains could do a better job on many routes. They use them to bomb innocent people for no reason, and even on the ones that are technically being used where and how they should be, you get squeezed in like sardines and charged an arm and a leg for it.

[–] robotElder2@hexbear.net 11 points 1 day ago (1 children)

None of those older technologies were ever referred to as AI until AI became the new venture capital buzzword du jour.

[–] sevenapples@lemmygrad.ml 7 points 22 hours ago

Just because AI became a buzzword doesn't mean that these technologies were not under the AI umbrella. I have an AI book from 2003 that mentions solutions to problems such as object detection, obstacle avoidance, machine translation, language generation and so on.

[–] blame@hexbear.net 9 points 1 day ago* (last edited 1 day ago)

There's AI The Brand, the next thing Silicon Valley corps and investors are hoping will be a repeat of the smartphone, and there's the much more ordinary machine learning and neural networks that have been used in a wide variety of industrial and consumer applications for decades. Those things don't tell you how right and how smart you are, though.

Let's not gloss over AI in games which may not actually involve machine learning at all.

[–] Carl@hexbear.net 4 points 1 day ago

Machine learning is also used in medical research and other fields; if we did a modern version of Cybersyn, it would almost certainly use machine learning at some level. It's a powerful technology.

I would suggest targeting your hate specifically at language and image generation slop.