this post was submitted on 03 May 2024
855 points (97.7% liked)

Technology

  • Rabbit R1 AI box is actually an Android app in a limited $200 box, running on AOSP without Google Play.
  • Rabbit Inc. is unhappy about details of its tech stack being public, threatening action against unauthorized emulators.
  • AOSP is a logical choice for mobile hardware as it provides essential functionalities without the need for Google Play.
[–] deafboy@lemmy.world 13 points 6 months ago (1 children)

The best way to do on-device AI would still be a standard SoC. We tend to forget that these mass-produced mobile SoCs are modern miracles for the price, despite the crappy software and firmware support from the vendors.

No small startup is going to revolutionize this space unless some kind of new physics is discovered.

[–] Buddahriffic@lemmy.world 3 points 6 months ago (1 children)

I think the plausibility comes from the fact that a specialized AI chip could theoretically outperform a general-purpose chip by several orders of magnitude, at least for inference. And I don't even think it would be difficult to convert a NN design into a chip, or that it would need to be made on a bleeding-edge node to get that much more performance. The trade-off would be that it can only run a single NN (or any NN that single one could be adjusted to behave identically to; e.g., to remove a node you could just adjust the weights so that it never fires).
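The "adjust the weights so a node never fires" idea from the comment above can be sketched in software. This is a minimal NumPy illustration (all layer sizes and names are made up for the example, not from any real design): zero a hidden unit's incoming weights so its activation becomes a constant, fold that constant into the next layer's bias, and the unit can be dropped with identical outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-layer MLP: 4 inputs -> 3 hidden units (ReLU) -> 2 outputs
W1 = rng.standard_normal((3, 4)); b1 = rng.standard_normal(3)
W2 = rng.standard_normal((2, 3)); b2 = rng.standard_normal(2)

def forward(x, W1, b1, W2, b2):
    h = np.maximum(0.0, W1 @ x + b1)  # ReLU hidden layer
    return W2 @ h + b2

# "Remove" hidden unit 1: zero its incoming weights so its
# activation is the constant relu(b1[1]), then absorb that
# constant into the next layer's bias and drop the unit.
dead = 1
const_act = max(0.0, b1[dead])          # unit now always outputs this
keep = [i for i in range(3) if i != dead]
W1_small, b1_small = W1[keep], b1[keep]
W2_small = W2[:, keep]
b2_small = b2 + W2[:, dead] * const_act # fold constant into bias

def forward_small(x):
    h = np.maximum(0.0, W1_small @ x + b1_small)
    return W2_small @ h + b2_small

# The pruned net matches the original net with that unit's
# incoming weights zeroed out.
W1_zeroed = W1.copy(); W1_zeroed[dead] = 0.0
x = rng.standard_normal(4)
assert np.allclose(forward(x, W1_zeroed, b1, W2, b2), forward_small(x))
```

In a fixed-function chip the analogue would be leaving that node's circuitry out entirely, which is why the hardware could only ever implement NNs reducible to the one it was built for.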

So I'd say it's more accurate to put it as "the easiest/cheapest way to do an AI device is to use a standard SoC", but the best way would be to design a custom chip for it.

[–] AdrianTheFrog@lemmy.world 1 points 6 months ago* (last edited 6 months ago) (1 children)

They're not a chip ~~manufacturer~~ designer though, and modern phone processors are already fast enough to do near-real-time text generation and fast image generation (20 tokens/second on Llama 2, ~1 second for a distilled SD 1.5, on a Snapdragon 8 Gen 3).

Unfortunately, the cheapest phones with that processor seem to be about $650, while the Rabbit R1 costs $200 and uses a MediaTek Helio P35 from late 2018.

[–] Buddahriffic@lemmy.world 1 points 6 months ago

Neither AMD nor Nvidia are chip manufacturers. They just design the chips and send the designs off to TSMC or Samsung to be fabricated.