7 years of software updates for the Pixel 8 series (blog.google)
Most of the standard apps on a phone are already using AI at the edges.
Image processing has come a long way using algorithms trained with those AI techniques. That goes beyond the post-processing of pictures already taken (unblurring faces, removing unwanted people from the background, choosing a better frame from a motion shot, white balance/color correction, noise reduction) and into the initial capture itself: setting the physical focus and exposure on recognized subjects, applying software-based image stabilization to longer-exposure shots and video, and so on. Most of these functions run as on-device AI using the AI-optimized hardware on the phone itself.
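As a rough illustration of how an app hands this kind of work to the phone's AI hardware, here's a minimal Kotlin sketch using TensorFlow Lite's NNAPI delegate, which is one common way Android apps reach the NPU/DSP. The model file, class name, and tensor shapes are invented for the example; a real camera pipeline is far more involved.

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.FileInputStream
import java.nio.ByteBuffer
import java.nio.ByteOrder
import java.nio.channels.FileChannel

// Standard pattern for memory-mapping a .tflite model out of the app's assets.
private fun loadModel(context: Context, name: String): ByteBuffer =
    context.assets.openFd(name).use { fd ->
        FileInputStream(fd.fileDescriptor).use { stream ->
            stream.channel.map(FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength)
        }
    }

// Sketch: run a (hypothetical) image-enhancement model through the NNAPI
// delegate, which routes inference to the phone's NPU/DSP when one is
// available and falls back to the CPU otherwise.
class OnDeviceEnhancer(context: Context) {
    private val delegate = NnApiDelegate()
    private val interpreter = Interpreter(
        loadModel(context, "enhance.tflite"), // hypothetical model file
        Interpreter.Options().addDelegate(delegate)
    )

    // Assumes a flat float32 input with a same-sized output, purely for illustration.
    fun enhance(pixels: FloatArray): FloatArray {
        val input = ByteBuffer.allocateDirect(4 * pixels.size).order(ByteOrder.nativeOrder())
        pixels.forEach { input.putFloat(it) }
        val output = ByteBuffer.allocateDirect(4 * pixels.size).order(ByteOrder.nativeOrder())
        interpreter.run(input, output) // executes on the AI accelerator when supported
        output.rewind()
        return FloatArray(pixels.size) { output.float }
    }

    fun close() {
        interpreter.close()
        delegate.close()
    }
}
```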
On-device speech recognition, speech generation, image recognition, and music recognition have come a long way in the last 5 years, too. Much of that progress came from training models on big, powerful servers, but once trained, executing a model on the device only requires the AI/ML chip on the phone itself.
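The speech side makes this split visible on newer Android versions: since API 31 an app can request a recognizer that is guaranteed to run entirely on the device, using a model that ships with the OS. A hedged sketch (most of the listener callbacks are stubbed out, and the RECORD_AUDIO permission is assumed to be granted already):

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// Sketch: fully on-device speech recognition (API 31+). The trained model
// executes locally on the phone; no audio is sent to a server.
fun startOnDeviceRecognition(context: Context) {
    if (!SpeechRecognizer.isOnDeviceRecognitionAvailable(context)) return

    val recognizer = SpeechRecognizer.createOnDeviceSpeechRecognizer(context)
    recognizer.setRecognitionListener(object : RecognitionListener {
        override fun onResults(results: Bundle) {
            val texts = results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
            println("Heard: ${texts?.firstOrNull()}")
        }
        override fun onError(error: Int) { println("Recognition error: $error") }
        // Remaining callbacks left empty for brevity.
        override fun onReadyForSpeech(params: Bundle?) {}
        override fun onBeginningOfSpeech() {}
        override fun onRmsChanged(rmsdB: Float) {}
        override fun onBufferReceived(buffer: ByteArray?) {}
        override fun onEndOfSpeech() {}
        override fun onPartialResults(partialResults: Bundle?) {}
        override fun onEvent(eventType: Int, params: Bundle?) {}
    })

    recognizer.startListening(
        Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
            putExtra(
                RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM
            )
        }
    )
}
```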
In other words, a lot of these apps were already doing these things before on-device AI chips started showing up around 2013. But those dedicated chips have made all of this much, much better, especially in the last 5 years, as almost all phones now ship with dedicated hardware for these tasks.