this post was submitted on 08 Dec 2023
162 points (94.5% liked)
Technology
you are viewing a single comment's thread
I'm sure we'll get there eventually, but robots still suck at doing stuff like this. Maybe when they marry robots up with AI, we'll have robots that can figure out what to do when there's the slightest deviation in operating conditions, like when a piece of trash shows up on the line, or they get twisted 30 degrees off from their station, or part of the line gets moved 2 inches. For now though, robots are only great at following pre-programmed instructions EXACTLY the same way every time. Even then, they still manage to fuck that up some of the time. I worked for years with welding robots that had one task and one task only, applying welds to car seat parts, and they fucked up on us constantly, on a daily basis. The technology will get there one day, but I doubt we're there yet.
I'm actually working on this problem right now for my master's capstone project. I'm almost done with it; it can generate a series of steps to fetch me something from a simple objective like "I'm thirsty", and then, in simulation, fetch me a drink or search through rooms that might have one, contextually knowing the kitchen is a good spot to check.
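Roughly, the loop looks something like this (a heavily simplified sketch, not my actual capstone code; `query_llm`, the room layout, and the action names are all stand-ins for whatever model and simulator you'd actually use):

```python
# Simplified sketch of an LLM-driven "fetch me a drink" planner.
# query_llm is a stand-in for a real model call; the "simulation" is just a dict.

from typing import Dict, List


def query_llm(objective: str, rooms: List[str]) -> List[str]:
    """Stand-in for an LLM call that turns a high-level objective into an
    ordered list of rooms to search. A real system would prompt the model
    with the objective plus a description of the environment."""
    # A real LLM would rank these contextually, e.g. kitchen first for "I'm thirsty".
    preferred = ["kitchen", "dining room", "living room"]
    return [r for r in preferred if r in rooms] + [r for r in rooms if r not in preferred]


def fetch(objective: str, world: Dict[str, List[str]], target: str) -> List[str]:
    """Walk the plan room by room in the toy simulation until the target is found."""
    steps = []
    for room in query_llm(objective, list(world)):
        steps.append(f"go to {room}")
        steps.append(f"look for {target}")
        if target in world[room]:
            steps.append(f"pick up {target}")
            steps.append("return to user")
            return steps
    steps.append("report failure")
    return steps


if __name__ == "__main__":
    world = {
        "bedroom": ["book"],
        "kitchen": ["soda", "plate"],
        "living room": ["remote"],
    }
    for step in fetch("I'm thirsty", world, target="soda"):
        print(step)
```

The hard part in practice isn't this loop, it's grounding the model's output in what the robot can actually perceive and do, which is where most of the project time has gone.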
There's also a lot of research into using the latest advancements in LLM reasoning and contextual awareness to work towards better, more capable embodied AI. I wrote a blog post about a lot of the big advancements here.
Outside of this, I've also worked at various robotics startups for the past five years, though primarily on data pipelines and control systems for fleets of them. So with that experience in mind, I'd say we're many years out from this being in a reasonable product, but maybe not ten years away. Maybe.