submitted 5 months ago by ooli@lemmy.world to c/chatgpt@lemmy.world
[-] kromem@lemmy.world 8 points 5 months ago* (last edited 5 months ago)

> In truth, we are still a long way from machines that can genuinely understand human language. [...]

> Indeed, we may already be running into scaling limits in deep learning, perhaps already approaching a point of diminishing returns. In the last several months, research from DeepMind and elsewhere on models even larger than GPT-3 has shown that scaling starts to falter on some measures, such as toxicity, truthfulness, reasoning, and common sense.

I've rarely seen anyone so committed to being a broken clock in the hope of being right at least twice a day.

Of course, given that he built a career on claiming a different path was needed to get where we are today (including a failed startup in that direction), it's a bit like the Upton Sinclair line about how hard it is to get someone to understand a thing when their paycheck depends on their not understanding it.

But I'd be wary of giving Gary Marcus much consideration.

Generally, as a futurist, if you bungle a prediction so badly that, four days after you were talking about diminishing returns in reasoning, a product comes out exceeding even ambitious expectations for reasoning capabilities in an n+1 release, you'd go back to the drawing board to figure out where your thinking went wrong and how to correct it in the future.

Not Gary, though. He just doubled down like a broken record. Surely, if we didn't hit diminishing returns then, we'll hit them eventually, right? Just keep repeating the prediction until one day it's right...

this post was submitted on 14 Apr 2024