[–] hoshikarakitaridia@lemmy.world 17 points 1 month ago (3 children)

Because in a lot of applications you can bypass hallucinations.

  • getting sources for something
  • as a jumping-off point for a topic
  • to get a second opinion
  • to help argue for or against your position on a topic
  • to get information in a specific format

In all these applications you can bypass hallucinations because either the task is non-factual, or it's verifiable while prompting, or you'll be able to verify it in one of the subsequent tasks.

Just because it makes shit up sometimes doesn't mean it's useless. Like an idiot friend, you can still ask it for opinions or something and it will definitely start you off somewhere helpful.

[–] ms_lane@lemmy.world 25 points 1 month ago (1 children)

Also just searching the web in general.

Google is useless for searching the web today.

[–] fibojoly@sh.itjust.works 1 points 1 month ago

Not if you want that thing that everyone is on about. Don't you want to be in with the crowd?! /s

[–] WalnutLum@lemmy.ml 22 points 1 month ago (1 children)

All LLMs are text completion engines, no matter what fancy bells and whistles they tack on.

If your task is some kind of text completion or repetition of text provided in the prompt context LLMs perform wonderfully.

For everything else you're wading into territory you could probably cover more easily using other methods.
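To make the "text completion" point concrete, here's a minimal sketch using the Hugging Face transformers library; the model choice (gpt2) and the sampling settings are illustrative assumptions, not anything prescribed above:

```python
# Minimal sketch: an LLM is, at bottom, a next-token completion engine.
# Model choice ("gpt2") and settings here are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "A broken clock is right"
# The model simply continues the text; nothing here checks whether the
# continuation is factually true.
result = generator(prompt, max_new_tokens=20, do_sample=True)
print(result[0]["generated_text"])
```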

[–] burgersc12@mander.xyz 1 points 1 month ago (1 children)

I love the people who are like "I tried to replace Wolfram Alpha with ChatGPT, why is none of the math right?" and blame ChatGPT, when the problem is that all they really needed was a fucking calculator

[–] leftzero@lemmynsfw.com 3 points 1 month ago

The fucking problem is they stole my damn calculator and now they're trying to sell me an LLM as a replacement.

LLMs are an interesting if mostly useless toy (and an excessively costly one; ELIZA achieved much the same results at a fraction of the cost).
The massive scam bubble that's been built around them, however, with its absurd contribution to enshittification and global warming, is downright monstrous. It makes anyone defending commercial LLMs worthy of the utmost contempt, just like those who defended cryptocurrencies before LLMs became the latest fad.

[–] ohwhatfollyisman@lemmy.world 3 points 1 month ago (2 children)

so, basically, even a broken clock is right twice a day?

[–] dev_null@lemmy.ml 5 points 1 month ago (2 children)

Yes, but for some tasks mistakes don't really matter, like "come up with names for my project that does X". No wrong answers here really, so an LLM is useful.

[–] ohwhatfollyisman@lemmy.world 0 points 1 month ago (2 children)

great value for all that energy it expends, indeed!

[–] archomrade@midwest.social -1 points 1 month ago (1 children)

The energy expenditure for GPT models is basically a per-token calculation. Having it generate a list of 3-4-token responses would barely be a blip compared to having it read and respond to entire articles.

There might even be a case that certain tasks are more energy efficient with a GPT model than with multiple Google searches for the same thing, especially considering all the backend activity Google tacks on for tracking users and serving ads. Complaining about someone using a GPT model for something like generating a list of words is a little like a climate activist yelling at someone for taking their car to the grocery store while standing across the street from a coal-burning power plant.
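To put rough numbers on the per-token point, a back-of-the-envelope sketch; both energy constants below are assumed for illustration, not measurements:

```python
# Back-of-the-envelope: per-token LLM energy vs. web searches.
# Both constants are illustrative assumptions, not measured values.
ENERGY_PER_TOKEN_WH = 0.003   # assumed Wh per generated token
ENERGY_PER_SEARCH_WH = 0.3    # assumed Wh per search, backend included

def llm_energy_wh(tokens_generated: int) -> float:
    """Energy in Wh to generate the given number of tokens."""
    return tokens_generated * ENERGY_PER_TOKEN_WH

# A short list of project names: ~4 names at ~4 tokens each.
print(f"name list:      {llm_energy_wh(16):.3f} Wh")         # 0.048 Wh
# Reading and answering a whole article: ~2000 tokens.
print(f"article reply:  {llm_energy_wh(2000):.1f} Wh")       # 6.0 Wh
print(f"three searches: {3 * ENERGY_PER_SEARCH_WH:.1f} Wh")  # 0.9 Wh
```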

[–] ohwhatfollyisman@lemmy.world 4 points 1 month ago (1 children)

... someone using a GPT model for something like generating a list of words is a little like a climate activist yelling at someone for taking their car to the grocery store while standing across the street from a coal-burning power plant.

no, it's like a billion people taking their respective cars to the grocery store multiple times a day each while standing across the street from one coal-burning power plant.

each person can say they are the only one and their individual contribution is negligible. but get all those drips together and you actually have a deluge of unnecessary wastage.

[–] archomrade@midwest.social 0 points 1 month ago

Except each of those drips is subject to the same system that favors individualized transport

This is still a perfect example, because while you're nit-picking the personal habits of individuals who make up a fraction of a fraction of total GPT model usage, huge multi-billion-dollar entities are implementing it into things that have no business using it and account for 90% of LLM queries.

It's similar to castigating people for owning ICE vehicles, who are not only uniquely pressured into their use but also account for less than 10% of GHG emissions in the first place.

Stop wasting your time attacking individuals using the tech for help in their daily tasks, they aren't the problem.

[–] Rekorse@sh.itjust.works -4 points 1 month ago (1 children)

How is that faster than just picking a random name? No one picks software based on its name.

[–] dev_null@lemmy.ml 5 points 1 month ago* (last edited 1 month ago) (1 children)

And yet virtually all software has a name that took some thought or creativity, and/or has some interesting history. Like the domain name of your Lemmy instance. Or Lemmy.

And people working on something generally want to be proud of their project, so rather than naming it the first thing that comes to mind, they take some time to decide on a name.

[–] Rekorse@sh.itjust.works 1 points 1 month ago (1 children)

Wouldn't they also not want to take a random name off an AI-generated list? How is that something to be proud of? The thought, creativity, and history behind it is just that you put a query into ChatGPT and picked one out of 500 names?

Maybe it's just a difference of perspective, but that's not only not a special origin story for a name, it's also taking from others in a way you won't be able to properly credit, which is essential to me.

I would rather avoid the trouble and spend the time with a coworker or friend throwing ideas back and forth and building an identity intentionally.

I suppose AI could be nice if I was alone nearly all the time.

[–] dev_null@lemmy.ml 1 points 1 month ago* (last edited 1 month ago)

The process of throwing ideas back and forth usually doesn't mean just choosing one; it means generating ideas as jumping-off points, usually with some existing concept in mind. Talking with friends, looking at other projects, searching for inspiration online and in the real world, and now also generating some more ideas with an LLM to add to the mix. Using one source and just picking a suggestion probably won't get you a good result.

[–] onionsinmypores@sh.itjust.works 5 points 1 month ago* (last edited 1 month ago)

No, maybe it's more like: even a functional clock is wrong every 0.8 days.
https://superuser.com/questions/759730/how-much-clock-drift-is-considered-normal-for-a-non-networked-windows-7-pc

The frequency is probably way higher for most LLMs though lol
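For anyone wondering where a figure like 0.8 days could come from: pick a drift rate and a tolerance for what counts as "wrong". Both numbers in this sketch are assumptions for illustration, not values from the linked thread:

```python
# Back-of-the-envelope clock-drift math. Both constants are illustrative
# assumptions, not figures from the superuser link above.
DRIFT_S_PER_DAY = 1.25  # assumed drift of an unsynced PC clock (s/day)
TOLERANCE_S = 1.0       # assumed error at which the clock counts as "wrong"

days_until_wrong = TOLERANCE_S / DRIFT_S_PER_DAY
print(f"wrong after ~{days_until_wrong:.1f} days")  # ~0.8 days
```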