
Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

[–] malijaffri@feddit.ch 5 points 11 months ago (4 children)

Doesn't WolframAlpha already do this?

[–] NounsAndWords@lemmy.world 12 points 11 months ago (3 children)

A calculator does most of it too, but this is an LLM that can also do lots of other things, which is a big piece of the "general" part of AGI.

Richard Feynman said, "You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lie in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say, 'How did he do it? He must be a genius!'"

We are close to a point where a computer that can hold all the problems in its "head" can test all of them against all of the tricks. I don't know what math problems that starts to solve, but I bet a few of them would be applicable to cryptology. (A toy sketch of this problems-versus-tricks loop follows below.)

But then again, I have no idea what I'm talking about and just making bold guesses based on close to no information.
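
A minimal Python sketch of the "dozen problems versus new tricks" loop described in the comment above. The problems and tricks here are toy placeholders (integers to factor, trial division), not anything attributed to Q*:

```python
# Toy sketch: keep a standing list of open problems and, whenever a new
# "trick" shows up, test it against every one of them.
from typing import Callable, Optional

# Standing list of "favorite problems": here, integers we'd like to factor.
open_problems = [91, 221, 8633]

def trial_division(n: int) -> Optional[int]:
    """A 'trick': naive trial division; returns a nontrivial factor or None."""
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return d
    return None

def test_trick_against_problems(trick: Callable[[int], Optional[int]]) -> None:
    """Run one new trick against every open problem and report any hits."""
    for n in list(open_problems):
        factor = trick(n)
        if factor is not None:
            print(f"hit: {trick.__name__} cracked {n} (factor {factor})")
            open_problems.remove(n)

test_trick_against_problems(trial_division)
```

The point is only the shape of the loop: a standing pool of open problems, with each new technique run against all of them.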

[–] malijaffri@feddit.ch 1 points 11 months ago (1 children)

Even so, I think I'll hold off on calling anything AGI until it can at least solve simple calculus problems with a 90% success rate (reproducibly). That seems like a fair criterion to me.
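
A reproducible version of that bar could look something like the sketch below, which grades answers to a fixed set of derivative problems with SymPy and checks whether the pass rate clears 90%. The `ask_model` function is a hypothetical placeholder, not a real API:

```python
# Sketch of a reproducible calculus check: a fixed problem set, symbolic
# grading, and a 90% threshold. `ask_model` is a stand-in for a model call.
import sympy as sp

x = sp.symbols("x")

# Fixed problem set -> reproducible runs: expressions to differentiate w.r.t. x.
PROBLEMS = [sp.sin(x) * x, sp.exp(2 * x), x**3 + 2 * x, 1 / (1 + x**2)]

def ask_model(expr: sp.Expr) -> str:
    """Placeholder for a model call; here it just cheats with SymPy itself."""
    return str(sp.diff(expr, x))

def is_correct(expr: sp.Expr, answer: str) -> bool:
    """Grade symbolically: the difference from the true derivative must simplify to zero."""
    try:
        return sp.simplify(sp.sympify(answer) - sp.diff(expr, x)) == 0
    except (sp.SympifyError, TypeError):
        return False

passed = sum(is_correct(p, ask_model(p)) for p in PROBLEMS)
rate = passed / len(PROBLEMS)
print(f"pass rate: {rate:.0%} (90% threshold {'met' if rate >= 0.9 else 'not met'})")
```

Fixing the problem set (and any sampling randomness on the model side) is what makes the 90% figure reproducible rather than a one-off run.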

[–] NounsAndWords@lemmy.world 1 points 11 months ago

I'd say more than that. I don't think anyone is that close to AGI... yet.
