[–] kibiz0r@midwest.social 9 points 5 months ago (7 children)

Article says it’s likely an OpenAI partnership.

[–] PrivateNoob@sopuli.xyz 19 points 5 months ago (6 children)
[–] AliasAKA@lemmy.world 2 points 5 months ago (5 children)

Depends. If they get access to the code OpenAI is using, they could absolutely try to leapfrog them. They could also just be looking for ways to get near GPT-4 performance locally, on an iPhone. They’d need a lot of tricks, but succeeding there would be a pretty big win for Apple.

[–] abhibeckert@lemmy.world 1 points 5 months ago* (last edited 5 months ago) (1 children)

> near GPT-4 performance locally, on an iPhone

Last I checked, iPhones don’t have terabytes of RAM. Nothing that runs on a small battery-powered device is going to be in the ballpark of ChatGPT, at least not in the foreseeable future.
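
The intuition here is simple parameter arithmetic. A minimal sketch (the GPT-4 parameter count below is a widely circulated rumor, not a confirmed figure; the 7B/40B sizes match the models discussed downthread):

```python
# Back-of-envelope memory math for holding model weights in RAM.

def weights_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate memory needed just for the weights, in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

print(weights_gb(1800, 16))  # ~3600 GB: a rumored GPT-4-scale model at fp16
print(weights_gb(7, 4))      # ~3.5 GB: a 4-bit-quantized 7B model, phone-plausible
print(weights_gb(40, 4))     # ~20 GB: a 4-bit-quantized 40B model, needs far more RAM
```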

[–] AliasAKA@lemmy.world 1 points 5 months ago

They don’t, but with quantization and distillation, plus clever use of fast SSD storage (they published a paper on exactly this topic last year), you can get a surprisingly capable model running on device. People are already doing this with models like OpenHermes and Mistral (granted, those are 7B models, but I could easily see Apple doubling the RAM and, with the optimizations from that paper, getting 40B models running entirely locally). If the on-device model is good enough, a 40B model could handle the vast majority of Siri queries without ever reaching out to a server.

For what it’s worth, according to their WWDC announcement, that’s basically what they’re trying to do.
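
The paper referenced above is presumably Apple’s “LLM in a flash” (arXiv:2312.11514). A minimal sketch of its core idea, keeping the weights on fast flash storage and paging in only the layers you actually touch (the shapes, file name, and toy forward pass are illustrative assumptions, not the paper’s actual design):

```python
import numpy as np

# Tiny dimensions so the demo actually runs; a real model would be GBs.
N_LAYERS, D = 4, 256

# Create a dummy weight file standing in for a large model on flash.
weights = np.memmap("weights.bin", dtype=np.float16, mode="w+",
                    shape=(N_LAYERS, D, D))
weights[:] = np.float16(0.01)
weights.flush()

# Re-open read-only: the OS now pages weights in from flash on demand,
# so resident RAM stays near one layer's worth, not the whole model.
weights = np.memmap("weights.bin", dtype=np.float16, mode="r",
                    shape=(N_LAYERS, D, D))

x = np.ones(D, dtype=np.float16)
for layer in range(N_LAYERS):
    w = np.array(weights[layer])  # copy just this layer's pages into RAM
    x = np.maximum(x @ w, 0)      # toy layer: matmul + ReLU
print(x[:4])
```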
