We can't blame ChatGPT for the change in headline writing over the last few years, though.
Maybe we need LLMs to (first of all, we don't need LLMs for this) rewrite headlines any time the word "slams" gets in there.
Writers Slam The One Thing They Hate About AI.
Author: “write me a 4000 word article on why microplastics are bad”
ChatGPT: generates 4000 words of text explaining what micro means, what plastic means, and paraphrasing the “controversy” section of the Wikipedia page on microplastics
Reader: “Summarise this article”
ChatGPT: “Microplastics are bad”
And in reality: https://chatgpt.com/share/66f519a6-1348-8002-96eb-bb61fb25287b
Woah woah woah, let's have less of this looking at reality here. We all know generative AI is a fad that never works for anything and anyone using it is an idiot; we don't need to have our prejudices challenged.
We're in an online echo chamber, we don't need to look at reality. Just find the opinions that we agree with, and agree with us, and put 'em at the top!
whatever shortens everyone's attention span the quickest: it makes for efficient hoodwinking
People don't like it when you tell them that, despite what they personally believe, modern AIs are a bit more sophisticated than Markov chains.
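For reference, the "it's just a Markov chain" comparison refers to generating text purely by looking up which word tends to follow the current one. A minimal Python sketch of that baseline (the toy corpus is invented purely for illustration):

```python
import random
from collections import defaultdict

# Build a bigram table: each word maps to the words that have followed it.
corpus = "the cat sat on the mat and the cat slept on the mat".split()
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

# Generate text by repeatedly sampling a word that has followed the current one.
word = "the"
output = [word]
for _ in range(10):
    candidates = following.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    output.append(word)

print(" ".join(output))
```

No context beyond the single previous word is used, which is the gap the comment is pointing at.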
Now there are people running around saying that we will have a superintelligence by 2030 and it will make us all immortal, build a Dyson sphere, and build a faster-than-light spaceship. I don't know if that's true or not, but it really has nothing to do with the conversation about whether AI is useful now. We don't need something to be intelligent for it to be useful.
And another thing! Kids these days aren't learning cursive handwriting. It's the death of culture, I tell you.
There’s this podcast I used to enjoy (I still enjoy it, but they stopped making new episodes) called Build For Tomorrow (previously known as The Pessimists Archive).
It’s all about times in the past when people freaked out about stuff changing but it all turned out okay.
After having listened to every single episode — some multiple times — I’ve got this sinking feeling that just mocking the worries of the past misses a few important things.
- The paradox of risk management. If you have a valid concern, and we collectively do something to respond to it and prevent the damage, it ends up looking as if you were worried over nothing.
- Even for inventions that are, overall, beneficial, they can still bring new bad things with them. You can acknowledge both parts at once. When you invent trains, you also invent train crashes. When you invent electricity, you also invent electrocution. That doesn’t mean you need to reject the whole idea, but you need to respond to the new problems.
- There are plenty of cases where we have unleashed horrors onto the world while mocking the objections of the pessimists. Lead, PFAS, CFCs, radium paint, etc.
I’m not so sure that the concerns about AI “killing culture” actually are as overblown as the worry about cursive, or record players, or whatever. The closest comparison we have is probably the printing press. And things got so weird with that so quickly that the government claimed a monopoly on it. This could actually be a problem.
You know how banks will say "Past performance of financial securities does not represent potential future performance" or whatever? This is much the same thing. There are plenty of things that people freaked out about that turned out to be nothing much. There are plenty of things that people did not freak out about when they really should have. People are basically shit at telling the difference between them.
While I do get this vibe from the headline, the article actually closes with a call to be mindful of the shortcomings of generative AI (while still using it).
My favorite tell is when a write-up starts with a verbose explanation of common knowledge on the subject. Yes, we all know what 'World Wide Web' and 'Internal Combustion Engines' are.
Get to the f'ing point.
This is just basic “undergrad pads word count” strategy.
Ironically, one of the nice uses I'm finding for AI is auto-summaries of exactly that sort of overly verbose article (or, more often, YouTube video).
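As a rough sketch of that workflow, assuming the OpenAI Python SDK with an API key in the environment (the model name and file path here are just placeholders; any chat model would do):

```python
# Minimal sketch: auto-summarise a verbose article with an LLM.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def summarise(article_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; swap for whatever you use
        messages=[
            {"role": "system",
             "content": "Summarise the article in three bullet points. Skip the preamble."},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content

# Usage (hypothetical file):
# print(summarise(open("verbose_blog_post.txt").read()))
```

For YouTube videos the same idea works by feeding the transcript in as the user message instead of the article text.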
Whether it's text or video, there will always be a "Let me tell you about that time when I was on vacation" before the damn pot roast recipe, or a "Subscribe and play Raid Shadow Legends" followed by 15 minutes of padding.
No “we” all don’t. Ask anyone who works support how fucking stupid the general population is about shit they use daily. Let alone stuff they heard years/decades ago. Seriously. Just start asking people to point to “the computer” and see how many point at the monitor even when it’s clearly an 80” wall hung TV.
> Ask anyone who works support how fucking stupid the general population
They're going to have a huge selection bias though - much of the "general population" will start elsewhere with things like documentation or brains.
That's become, by far, the most obvious tell for AI generated content for me. It's just so damn unnatural.
The biggest issue I see is that these LLMs tend to repeat themselves after a surprisingly small number of turns (unless they're sufficiently bloated, like ChatGPT).
If you ask any of the users of SillyTavern or RisuAI, they'll tell you that these things have a long history of not being very creative.
👌👍