this post was submitted on 13 Aug 2023
228 points (100.0% liked)

Technology

 

I asked Google Bard whether it thought Web Environment Integrity was a good or bad idea. Surprisingly, not only did it respond that it was a bad idea, it even went on to urge Google to drop the proposal.

[–] novibe@lemmy.ml 18 points 1 year ago (2 children)

That ignores all the papers on emergent features of LLMs and the fact that they are basically black boxes. Yes, we “trained” them to write what we want to hear, but we don’t really understand what happens inside them. We can’t categorically claim things like “they are only regurgitating what they heard,” because that is not a scientific, or even philosophical, statement.

If you think about it for a second, it’s also applicable to human beings…

[–] Drewelite@lemmynsfw.com 7 points 1 year ago

Exactly. The reason LLMs are so fascinating to us is how close they get to sounding human. The thing is, it's not a trick. When people dismiss LLMs with "oh, they mostly just echo their training data set," they're describing something humans do too: that's just culture. It's the emergent behavior on top that makes us feel unique. I'm not saying LLMs are human-equivalent, but they're fairly close in design to how a huge part of our psyche works.

[–] MJBrune@beehaw.org 2 points 1 year ago (1 children)

To assume otherwise would be incorrect given the data we currently have. You shouldn't assume something is doing more than it demonstrably is; otherwise, you get rocks that keep tigers away.

[–] novibe@lemmy.ml 7 points 1 year ago* (last edited 1 year ago)

I think your assumption is equally unsupported by the current data.

And that’s my entire point… what is it doing? How is what it’s doing different from a mind or intelligence?

Our brains and minds evolved to “fill in the blank” in countless situations, shaped by survival pressure over millions of years of selection. So what is the actual difference?
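To make the “fill in the blank” framing concrete, here's a toy sketch of prediction from pure co-occurrence statistics. The corpus and words are hypothetical examples; a real LLM uses a neural network over tokens, not a lookup table, but the training objective is this same kind of next-item prediction:

```python
# Toy "fill in the blank": a bigram model that predicts the next word
# purely from co-occurrence counts in a (hypothetical) tiny corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict("the"))  # "cat" (seen twice after "the", vs. once each for "mat", "fish")
```

Whether scaling this kind of objective up by many orders of magnitude yields something mind-like is exactly the question being argued here.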

I’m not saying it’s “conscious”, but why is it not a mind?