490
submitted 2 months ago* (last edited 2 months ago) by True@lemy.lol to c/microblogmemes@lemmy.world
top 39 comments
sorted by: hot top controversial new old
[-] wise_pancake@lemmy.ca 41 points 2 months ago

I can't believe this got released and this is still happening.

This is the revolution in search of RAG (retrieval augmented generation), where the top N results of a search get fed into an LLM and reprioritized.

The LLM doesn't understand it want to understand the current, it loads the content info it's content window and then queries the context window.

It's sensitive to initial prompts, base model biases, and whatever is in the article.

I'm surprised we don't see more prompt injection attacks on this feature.

[-] towerful@programming.dev 36 points 2 months ago
[-] AnUnusualRelic@lemmy.world 3 points 2 months ago

That's why you should always have your backpack when you board a plane (along with your towel, of course).

[-] Klear@sh.itjust.works 2 points 2 months ago

Thanks! The OP is a mess (at least when using Boost).

[-] johny@feddit.org 30 points 2 months ago

At first I was at least impressed that it came up with such a hilarious idea, but then it of course turns out (just as with the pizza glue) it just stole it from somewhere else.

[-] Ilovethebomb@lemm.ee 16 points 2 months ago

"AI" as we know them are fundamentally incapable of coming up with something new, everything it spits out is a combination of someone else's work.

[-] echodot@feddit.uk 26 points 2 months ago* (last edited 2 months ago)

The big problem with these AI is not that the technology is fundamentally flawed. It's that stupid humans just dump huge amounts of data at it without checking any of it for validity.

If you train a human on bullshit of course they're not going to know the difference between truth and lies, how could they possibly do that if all they're ever told is utter nonsense. I really would have thought better of people at Google, you'd have thought someone there would have had the intelligence to realize that not everything on the internet is solid gold.

This is why I've never really bothered that they were training on data from Reddit.

[-] rambling_lunatic@sh.itjust.works 17 points 2 months ago
[-] JadenSmith@sh.itjust.works 17 points 2 months ago

I thought this was fake, but I searched Google for "parachute effectiveness" and that satirical study is at the top, and literally every single link below it is a reference to that satirical study. I have scrolled for a good minute and found no genuine answer...

[-] echodot@feddit.uk 10 points 2 months ago* (last edited 2 months ago)

I have scrolled for a good minute and found no genuine answer...

Presumably because the effectiveness of parachutes is pretty self-evident and no one has done a formal study on the subject because why would they.

It's not like they're going to find that parachutes actually aren't needed and the humans just float gently to the ground on their own. Or that a sufficiently tall top hat can be just as effective.

[-] my_hat_stinks@programming.dev 7 points 2 months ago* (last edited 2 months ago)

Parachute effectiveness is a very reasonable thing to study, it's pretty important to know how one parachute design performs compared to other designs and the obvious baseline is no parachute. A lot of things which appear to be self-evident have been extensively studied, generally you don't want to just assume you know how something works.

Though throwing people out of a plane at altitude with no parachute probably isn't the most ethical way to study parachute effectiveness.

[-] echodot@feddit.uk 1 points 2 months ago

I think we know enough about aerodynamics that we can probably simulate it in the computer if we really cared to. My point is more that it's probably never been studied at an academic level, I'm sure parachute manufacturers ans various militaries have studied all sorts of things. But none of that would have made it into a research paper.

[-] my_hat_stinks@programming.dev 2 points 2 months ago

Here's a study on cadavers to determine whether people have the same number of nose hairs in each nostril. In academia there is no such thing as too trivial.

There's plenty of studies on parachutes for spacecraft (eg, here's one on aerodynamics of parachutes for mars landing) so if you follow the references somewhere down the line you'll probably find studies on general parachute effectiveness.

[-] AlotOfReading@lemmy.world 1 points 2 months ago

You have to search using language that papers might actually use though. "Parachute effectiveness" means what the satirical paper is exploring, whether it prevents death or not. The only serious studies that might have used that language would be old WW2 studies that threw people out of planes with different parachutes to see how many survived.

If you want to know how to design an effective parachute, you should be looking at reference books like Parachute Recovery Systems instead.

[-] JadenSmith@sh.itjust.works 2 points 2 months ago

This is a good point I didn't think of. It's possible that the answer is so obvious, that the only articles made about it would be jokes.

[-] robocall@lemmy.world 16 points 2 months ago

Why waste money on parachutes when everyone has a backpack at home?!

[-] bappity@lemmy.world 14 points 2 months ago
[-] True@lemy.lol 18 points 2 months ago

Better yet, don't use Google for searching when possible.

[-] nieminen@lemmy.world 10 points 2 months ago

I just searched "do maple trees have a tap root", ai overview says yes. Literally all other reputable sources say no 🙄

[-] elxeno@lemm.ee 9 points 2 months ago

Representative study participant jumping from aircraft with an empty backpack. This individual did not incur death or major injury upon impact with the ground

[-] msage@programming.dev 7 points 2 months ago
[-] CodexArcanum@lemmy.world 5 points 2 months ago

Hah! I just told someone the other day that LLMs trick people into seeing intelligence basically by cold reading! At last, sweet validation!

[-] CookieOfFortune@lemmy.world 4 points 2 months ago

At least in the actual Gemini chat when asked about parachute study effectiveness correctly notes this study as satire.

[-] ngwoo@lemmy.world 3 points 2 months ago

Weird that it actually made note of that but didn't put it in the summary

[-] Buddahriffic@lemmy.world 1 points 2 months ago

It's because it has no idea what is important or isn't important. It's all just information to it and it all carries the same weight, whether it's "parachutes aren't any more effective than empty backpacks" or "this study is a satire making fun of other studies that extrapolate information carefully". Even though that second bit essentially negates the first bit.

Bet the authors weren't expecting their joke study to hit a second time like this, demonstrating that AI is just as bad at extrapolating since it extrapolated true information from false.

It's reckless to use these AIs in searches. If someone jokes about pretending to be a doctor and suggesting a stick of butter being the best treatment for a heart attack and that joke makes it into the training set, how would an AI have any idea that it doesn't have the same value as some doctors talking about patterns they've noticed in heart attack patients?

[-] intensely_human@lemm.ee -5 points 2 months ago

LLMs indeed have a way of detecting satire. The same way humans do.

It’s just that now they’re like the equivalent of five year olds in terms of their ability to detect sarcasm.

this post was submitted on 03 Sep 2024
490 points (96.4% liked)

Microblog Memes

5687 readers
1615 users here now

A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc in the description of posts.

Related communities:

founded 1 year ago
MODERATORS