this post was submitted on 30 Jun 2023
[–] Naatan@beehaw.org 6 points 1 year ago (4 children)

I have a hard time seeing how what we're currently calling "AI" could evolve to address issues like this. It's not real intelligence; it's just text prediction. It seems fundamentally flawed for use cases where you need 100% certainty that the answers are appropriate.

This isn't the AI people think it is. And the only danger it poses is irresponsible use.

[–] morry040@kbin.social 4 points 1 year ago

Stephen Wolfram's article on how ChatGPT works was enlightening: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

Like you said, it's just text prediction, using online content as the training ground.

[–] ConsciousCode@beehaw.org 2 points 1 year ago (1 children)

Part of why the NLP community is so excited about it is that text prediction, as an optimization problem, eventually necessitates some form of intelligence in order to reduce the loss. And the architectures we're using scale nearly linearly in quality with size, showing no real signs of diminishing returns, meaning you can make them arbitrarily smart just by making them bigger.
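The "quality scales with size" claim is usually expressed as a power law relating loss to parameter count. A minimal sketch, with constants loosely in the style of published scaling-law fits but treated here as purely illustrative (not fitted to any real model):

```python
# Illustrative neural scaling law: loss falls as a power law in parameter
# count. The constants below are toy values chosen to show the shape of
# the curve, not a real fit for any specific model family.

def predicted_loss(n_params: float, a: float = 406.4,
                   alpha: float = 0.34, floor: float = 1.69) -> float:
    """Loss ~ floor + a / N^alpha: bigger models predict text better,
    approaching an irreducible floor."""
    return floor + a / (n_params ** alpha)

for n in [1e8, 1e9, 1e10, 1e11]:
    print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.3f}")
```

The shape of the curve, rather than the exact constants, is what drives the "just make it bigger" argument.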

I would encourage you to consider what you mean by "real" intelligence and "just text prediction", because AI throws a lot of our assumptions out the window. Talk to GPT-4 in a chatbot cognitive architecture for a few hours and you get a sense of just how intelligent it can be (with the right prompting). Yet the architecture itself is literally incapable of "thinking" (with some wiggle room for inter-layer states), that is, of internal, stateful, causal processes which drive external behavior. A chatbot CA can vaguely approximate thinking via chain-of-thought prompting, but without that it essentially has to guess what its thoughts "would" be if it had them, which is very weird and hard to understand intuitively.
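Chain-of-thought prompting is just a change to the prompt's shape. A minimal sketch, where `build_prompts` is a hypothetical helper (no real model API is called here):

```python
# Sketch of chain-of-thought prompting: the only difference is a cue that
# gives the model token-space to write out intermediate reasoning before
# committing to an answer. The helper name is hypothetical.

def build_prompts(question: str) -> tuple[str, str]:
    direct = f"{question}\nAnswer:"
    # The "step by step" cue elicits externalized reasoning in the output.
    chain_of_thought = f"{question}\nLet's think step by step."
    return direct, chain_of_thought

q = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
     "than the ball. How much does the ball cost?")
direct, cot = build_prompts(q)
```

With the direct prompt the model must answer in one shot; with the chain-of-thought prompt its "thoughts" become part of the generated text it can condition on.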

In case it isn't clear, what I mean by "cognitive architecture" is the machinery surrounding the language model that lets it interact with the world. A language model in isolation is a causal autoregressive inference engine that will happily autocomplete anything. Language models are not chatbots, only components in chatbots; chat is just the modality we're most familiar with because ChatGPT broke ground, but it's not their only or even most useful form. The LLM is comparable to a human Broca's area, which will generate an endless stream of language if you let it; it's the neural circuitry around it that gives rise to coherent thoughts and subjective experience.
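The distinction can be sketched in a few lines: the model itself is a next-token loop, and the "chatbot" is extra machinery (a transcript, turn markers, a stop point) wrapped around it. `next_token` below is a toy stand-in for a real model's forward pass, not any actual API:

```python
import random

def next_token(context: str) -> str:
    """Toy stand-in for an LM forward pass; a real model returns a
    likely continuation of the context."""
    return random.choice(["the", "a", "cat", "sat", "."])

def autocomplete(prompt: str, n: int = 10) -> str:
    """The bare LM: it will happily continue *any* text, indefinitely."""
    text = prompt
    for _ in range(n):
        text += " " + next_token(text)
    return text

def chatbot_turn(history: list[str], user_msg: str) -> str:
    """The cognitive architecture: transcript framing turns the same
    autocomplete engine into something that behaves like a conversation."""
    history.append(f"User: {user_msg}")
    transcript = "\n".join(history) + "\nAssistant:"
    reply = autocomplete(transcript, n=8)[len(transcript):].strip()
    history.append(f"Assistant: {reply}")
    return reply
```

Nothing about `autocomplete` knows it is in a conversation; the chat behavior lives entirely in the wrapper.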

To be able to accurately discuss these concepts, we need to change the language we use. Words like "intelligence", "consciousness", "sentience", "sapience", etc have always been incredibly vague, approaching completely undefined. They can't be adequately applied to AI until they've been operationalized, such that you could objectively falsify whether or not they apply to a given system.

[–] cellador@feddit.de 2 points 1 year ago

Very nicely put. If I observe any real person replying in text, what I'm seeing is essentially just them thinking about what word to put next and entering it on the keyboard. That is an extremely complex task. I'm not saying that state-of-the-art language models are mulling the same thoughts in their "minds" the way we are, but they are solving the same problem. And our current paradigm for training these models shows no sign of slowing progress, so I understand the sentiment that calling these models just "text prediction machines" is too simplistic.

[–] Emperor@feddit.uk 1 points 1 year ago

> This isn’t the AI people think it is.

It's definitely not as good as people think it is. The best description I've heard is that AI outputs "hallucinations": its output only needs to look plausible; it doesn't have to be right.

Which is why using it to detect cheating is a concern. You'd hope it would only be used as a first pass, to be reviewed by a human later, but some people are going to assume the AI is infallible and leave it at that.

[–] Peanutbjelly@sopuli.xyz 1 points 1 year ago

But it is intelligence, just a very different form than we are generally used to. It's not entirely trustworthy or accurate in its output yet, but that's okay for what is effectively early-stage AI. Humans have never been fully functional or reliable either, yet they can still be useful. We have fully functional agents capable of doing complex things like building a working computer out of eBay listings, or ordering a pizza in the style you request. I've trained less reliable and less capable human beings. It is not sentient, and it is not perfect or completely reliable, but it is more than just a parrot. It is capable of creating and responding to some novel situations. Of course, there is still a lot more to be worked on.

Do you think there is no stochastic element to our natural use of language? Are you never confused by a word that came out of your mouth that, upon immediate reflection, isn't a word you would have intended to say at all? What we have built is just a piece of the puzzle, but it's not stopping there.
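The stochastic element being described maps onto how language models actually emit words: sampling from a probability distribution, with temperature controlling how often an unlikely word "slips out". A minimal sketch with a made-up distribution:

```python
import math, random

def sample_word(probs: dict[str, float], temperature: float) -> str:
    """Sample a word from a probability distribution, temperature-scaled.
    Low temperature sharpens the distribution; high temperature flattens
    it, letting rare words through (the verbal slip)."""
    weights = {w: math.exp(math.log(p) / temperature)
               for w, p in probs.items()}
    total = sum(weights.values())
    r, acc = random.random(), 0.0
    for w, wt in weights.items():
        acc += wt / total
        if r < acc:
            return w
    return w  # floating-point rounding fallback: return the last word

# Hypothetical next-word distribution after "I'd like a cup of ..."
next_word = {"coffee": 0.7, "tea": 0.25, "turpentine": 0.05}
```

At low temperature you almost always get "coffee"; at high temperature the occasional "turpentine" comes out, much like the unintended word in speech.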

There is a lot of work to be done in mechanistic interpretability and alignment. Users also need to understand the abilities and limitations of the tool. But it's absurd not to be impressed and excited by the current state of neural networks.