you are viewing a single comment's thread
this post was submitted on 29 Jul 2024
30 points (91.7% liked)
Apple
My guess is they thought they were 99% done but that the 1% (“just gotta deal with these edge case hallucinations”) ended up requiring a lot more work (maybe even an entirely new sub-system or a wholly different approach) than anticipated.
I know I suggested the issue might be hallucinations above, but what I’m genuinely curious about is how they plan to have acceptable performance without losing half or more of your usable RAM to the model.
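To make the RAM concern concrete, here's a back-of-envelope estimate. The parameter count, quantization level, and overhead factor below are illustrative assumptions, not Apple's published figures:

```python
def model_ram_gb(params_billion: float, bits_per_weight: float,
                 overhead_factor: float = 1.2) -> float:
    """Rough resident memory in GB for a local language model:
    quantized weights plus a fudge factor for KV cache,
    activations, and runtime overhead (all assumed values)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead_factor / 1e9

# A hypothetical ~3B-parameter model quantized to ~4 bits/weight:
print(round(model_ram_gb(3.0, 4.0), 1))  # ~1.8 GB
```

Even under optimistic quantization, that's a meaningful chunk of an 8 GB iPhone's memory, which is presumably why the split between on-device and server models matters so much.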
Will it run locally? I just assumed it would be run on Apple servers in some way.
They framed it like most of the stuff runs on device, while in some cases, image generation I suppose, it will use the "very secure" Apple servers. Additionally, Apple Intelligence can decide that it would make sense to ask ChatGPT, and gives you the option to do so.
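The routing described above could be sketched roughly like this. Every name here (`Destination`, `route_request`, and the flags) is a hypothetical illustration of the decision flow as described in the thread, not an actual Apple API:

```python
from enum import Enum

class Destination(Enum):
    ON_DEVICE = "on-device model"
    PRIVATE_CLOUD = "Apple server model"
    CHATGPT = "ChatGPT"

def route_request(fits_on_device: bool, needs_world_knowledge: bool,
                  user_consents_to_chatgpt: bool) -> Destination:
    """Illustrative sketch: on-device by default, Apple's servers
    for heavier tasks, ChatGPT only with explicit user consent."""
    if needs_world_knowledge and user_consents_to_chatgpt:
        # The system can suggest ChatGPT, but only hands the request
        # over after the user opts in.
        return Destination.CHATGPT
    if fits_on_device:
        return Destination.ON_DEVICE
    # Heavier tasks (e.g. some image generation) fall through
    # to Apple's servers.
    return Destination.PRIVATE_CLOUD
```

The key design point, as framed in the comment, is that the third branch is opt-in per request rather than automatic.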
I thought they had confirmed at least some of the image generation stuff happens locally. I'm in the Apple Intelligence beta now, went offline, and played around with Siri, and lots of stuff worked. It's not really doing much new right now, but the speed and the quality of understanding, including handling it when you stumble over words, are way better.
Nice 😃
Locally, but there's also an option to use OpenAI's API, I believe.
Ok, then I'm also curious about how they would solve that.