this post was submitted on 16 Aug 2024
67 points (100.0% liked)
Technology
you are viewing a single comment's thread
There's no mechanism in LLMs that allows for anything in particular. They're black boxes; everything we know about them is empirical.
It's a lot like a brain. A small, unidirectional brain, but a brain.
I'll bet you a month's salary that this guy couldn't explain said math to me. Somebody just told him this, and he's extrapolated way more than he should from "math".
I could possibly implement one of these things from memory, given the weights. Definitely if I'm allowed a few reference checks.
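To give a sense of what "implement from memory" means here: the forward pass of a transformer block really is just a handful of matrix operations once you have the weights. Below is a minimal single-head sketch in NumPy; the weight names (`W_q`, `W_k`, etc.) are illustrative placeholders, not any specific model's layout, and real models add masking, multiple heads, layer norm, and embeddings.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(x, W_q, W_k, W_v):
    # x: (seq_len, d_model); single head, no causal mask, for brevity.
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(k.shape[-1])  # scaled dot-product attention
    return softmax(scores) @ v

def transformer_block(x, W_q, W_k, W_v, W1, W2):
    x = x + attention(x, W_q, W_k, W_v)   # residual connection around attention
    x = x + np.maximum(0, x @ W1) @ W2    # residual around a ReLU MLP
    return x

# Tiny random "weights" just to show the shapes line up.
rng = np.random.default_rng(0)
d = 8
x = rng.normal(size=(4, d))
out = transformer_block(
    x,
    rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=(d, d)),
    rng.normal(size=(d, 4 * d)), rng.normal(size=(4 * d, d)),
)
print(out.shape)  # (4, 8)
```

This supports the commenter's point: the mechanics are simple and well understood; what's opaque is why the particular learned weights produce the behavior they do.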
Okay, this article is pretty long, so I'm not going to read all of it, but it's not only in front of naive audiences that LLMs appear capable of complex tasks. Measured scientifically, there's still a lot there. I get the sense the author's conclusion was a motivated one.