submitted 5 months ago by ooli@lemmy.world to c/chatgpt@lemmy.world
[-] ericjmorey@discuss.online 1 points 5 months ago

I'd like to read the research you alluded to. What research specifically did you have in mind?

[-] huginn@feddit.it 2 points 5 months ago

Sure: here's the article.

https://arxiv.org/abs/2304.15004

The basics are that:

  1. LLM "emergent behavior" has never been consistent; it has always been specific to certain types of testing. For example, SAT-style benchmarks showed apparent emergence once the model crossed a certain parameter count, because it went from missing most questions to missing fewer.

  2. They compared the metrics where the LLM looked emergent against all the other metrics where it only improved linearly and found a pattern: emergence only showed up in nonlinear metrics. If your metric didn't have a smooth transition between wrong, less wrong, sorta right, and right, then the LLM would appear emergent without actually being so (toy sketch below).
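
To make point 2 concrete, here's a minimal toy sketch in Python. This is my own illustration, not code or data from the paper, and the numbers are made up: per-token accuracy is modeled as improving smoothly with parameter count, and the only thing that differs between the two columns is the metric used to score it.

```python
import numpy as np

# Toy numbers only: per-token accuracy improves smoothly with scale, but the
# all-or-nothing exact-match score over a 50-token answer makes the same
# underlying improvement look like a sudden jump.
params = np.logspace(6, 11, 11)                            # hypothetical model sizes, 1M to 100B
per_token_acc = 1 - 0.10 * (params / params[0]) ** -0.3    # smooth metric: partial credit per token
exact_match = per_token_acc ** 50                          # nonlinear metric: every token must be right

for n, smooth, sharp in zip(params, per_token_acc, exact_match):
    print(f"{n:>16,.0f} params | per-token acc {smooth:.3f} | exact match {sharp:.3f}")
```

The smooth column never jumps, but the exact-match column looks "emergent" partway through the scale range. That's the paper's point: the metric, not the model, creates the appearance of emergence.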
