this post was submitted on 18 Mar 2024
546 points (98.1% liked)

Science Memes

 
[–] Pyro@programming.dev 172 points 8 months ago* (last edited 8 months ago) (3 children)

GPT doesn't really learn from people; it's the over-correction by OpenAI in the name of "safety" which is likely to have caused this.

[–] lugal@sopuli.xyz 65 points 8 months ago (1 children)

I assumed they reduced capacity to save power due to the high demand

[–] MalReynolds@slrpnk.net 49 points 8 months ago (1 children)

This. They could obviously reset it to its original performance (what, they don't have backups?); it's just more cost-efficient to serve crappier answers. Yay, turbo AI enshittification...

[–] CommanderCloon@lemmy.ml 40 points 8 months ago (1 children)

Well, they probably did dial down the performance a bit, but censorship is known to nuke an LLM's performance as well.

[–] MalReynolds@slrpnk.net 11 points 8 months ago

True, but it's hard to separate, I guess.

[–] rtxn@lemmy.world 46 points 8 months ago* (last edited 8 months ago) (1 children)

Sounds good, let's put it in charge of cars, bombs, and nuclear power plants!

[–] OpenStars@startrek.website 10 points 8 months ago (2 children)

Even getting 2+2=2 98% of the time is good enough for that. :-P

spoiler(wait, 2+2 is what now?)

[–] lugal@sopuli.xyz 18 points 8 months ago (1 children)

2+2 isn't 5 anymore? Literally 1985

[–] OpenStars@startrek.website 9 points 8 months ago

Stop trying to tell the computer what to do - it should be free to act however it wants to! :-P

[–] FiniteBanjo@lemmy.today 2 points 8 months ago (1 children)

It used to get 98%, now it only gets 2%.

2% is not good enough.

[–] OpenStars@startrek.website 2 points 8 months ago

I mean... some might argue that even 98% wasn't enough!? :-D

What are people supposed to do - ask every question 3 times and take the best 2 out of 3, like this was kindergarten? (And that is the best-case scenario, where the errors are evenly distributed across the entire problem space, which is the least likely model of all - much more often some questions would be wrong 100% of the time, while others would be correct more like 99% of the time, and importantly you would never know in advance which is which.)
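The best-2-out-of-3 point above can be made concrete with a little arithmetic. This is just an illustrative sketch (the 2%/98% numbers come from the joke upthread, not from any real benchmark): majority voting only rescues you when errors are independent per attempt, not when certain questions are simply always answered wrong.

```python
def majority_of_3_error(p: float) -> float:
    """Error rate of best-2-out-of-3 voting when each independent
    attempt is wrong with probability p: either exactly 2 of 3
    attempts fail, or all 3 do."""
    return 3 * p**2 * (1 - p) + p**3

# Model A: errors evenly spread. Every question has an independent
# 2% chance of a wrong answer on each attempt, so voting helps a lot.
uniform = majority_of_3_error(0.02)  # ~0.12% error after voting

# Model B: same 2% average error, but clustered. 2% of questions are
# answered wrong 100% of the time and the rest are always right.
# Re-asking a "bad" question just returns the same wrong answer all
# 3 times, so voting changes nothing.
clustered = 0.02 * majority_of_3_error(1.0) + 0.98 * majority_of_3_error(0.0)

print(f"uniform errors:   {uniform:.4%}")    # ~0.1184%
print(f"clustered errors: {clustered:.4%}")  # still 2.0000%
```

Same headline accuracy, completely different payoff from retrying - which is the commenter's point about never knowing in advance which questions are the hopeless ones.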

Actually that touches on a real issue: some schools teach the model of "upholding standards", where like the kids actually have to know stuff (& like, junk, yeah totally) - whereas conversely another, competing model is that if they just learn something, anything at all during the year, that is good enough to pass them and make them someone else's problem down the line (it's a good thing that professionals don't need to uh... "uphold standards", right? Anyway, the important thing there is that the school still receives the federal funding in the latter case but not the former, and I am sure that we all can agree that when it comes to the next generation of our children, the profits for the school administrators are all that matter... right? /s)

All of this came up when Trump appointed one of his top donors, Betsy DeVos, to be in charge of all edumacashium in America, and she had literally never set foot inside of a public school in her entire lifetime. I am not kidding you, watch the Barbara Walters special to hear it from her own mouth. Appropriately (somehow), she had never even so much as heard of either of these two main competing models. Yet she still stepped up and acknowledged that somehow, as an extremely wealthy (read: successful) white woman, she could do that task better than literally all of the educators in the entire nation - plus all those with PhDs in education too, ~~jeering~~ cheering her on from the sidelines.

Anyway, why we should expect "correctness" from an artificial intelligence, when we cannot seem to find it anywhere among humans either, is beyond me. These were marketing gimmicks to begin with, then we all rushed to ask it to save us from the enshittification of the internet. It was never going to happen - not this soon, not this easily, not this painlessly. Results take real effort.

[–] Redward@yiffit.net 16 points 8 months ago

Just for the fun of it, I argued with ChatGPT, saying it's not really a self-learning AI. 3.5 agreed that it's not a fully functional AI and has limited powers. 4.0, on the other hand, was very adamant about being a fully fledged AI.