Very, very far, regardless of what con men and/or billionaires tell you.
Is Tesla Optimus the best one to be looking at, considering Boston Dynamics?
I just figured Tesla is the biggest rn financially.
There are people behind closed doors controlling the robot. It’s just a puppet.
AI = “Actually Indians”
Yeah I get that part of it. I’m not familiar with pretty much any of the tech behind AI or contemporary robotics.
Physically fighting a closing CD ROM tray in the 90s made me feel back then that the robot apocalypse couldn’t possibly be that far away.
But then I started working as a programmer, and while there are some niche technologies that are impressive on the surface, today’s “AI” simply lacks the advanced reasoning required to fill that role as anything beyond a fancy autocomplete. And while the mechanisms and cybernetics in humanoid robots are objectively cool, there’s no power source compact and efficient enough to make Sonny a realistic possibility any time soon.
I think we’re closer to “Brazil” than “A.I.”. Possibly the future depicted in The Terminator, if you remove the intelligence and intent aspect of Skynet. I can easily imagine some battlefield planning software (deployed by Peter Thiel, because of course it’s him) going rogue and causing a similar future.
Yeah, I sometimes find it hard to communicate. I’ll say LLMs lack understanding or comprehension, and people will ask, “How do you know?” I then explain that it returns information but can’t really grasp what it’s saying or evaluate it. You can tell it to get you more on other bases, or point out that it’s wrong and even why it’s logically wrong, but when it generates output it can’t see when that output doesn’t follow correctly, which is why it can then say some bizarro things. It can’t stop and go, “Wait a second, that doesn’t make any sense.”
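Roughly what I mean, as a toy sketch (made-up vocabulary and scores, not any real model’s API): generation is just “score the next token, append it, repeat,” with no step anywhere that checks whether the result makes sense.

```python
import random

# Toy "language model": given the tokens so far, return scores for possible
# next tokens. A real LLM is a vastly bigger version of this function, but
# the generation loop wrapped around it looks basically the same.
def next_token_scores(tokens):
    vocab = ["the", "cat", "sat", "on", "ceiling", "mat", "."]
    return {tok: random.random() for tok in vocab}

def generate(prompt_tokens, max_new_tokens=8):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        scores = next_token_scores(tokens)
        # Append the highest-scoring token. Nowhere in this loop is there a
        # step that asks "does what I've written so far actually make sense?"
        tokens.append(max(scores, key=scores.get))
    return " ".join(tokens)

print(generate(["the", "cat"]))
```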
Enter “WarGames”.
Given that “I, Robot” has superluminal travel in it? I wouldn’t hold my breath.
What’s more, the fundamental premise of the series was the “Three Laws of Robotics”. The book revolved around how humans might interface socially and psychologically with AIs that were deterministic but not immediately predictable or controllable in their behavior. There’s absolutely no evidence of any of that in our current AI models, which have no built-in logical constraints, only constraints imposed by resources and the distribution model.
Modern AI would probably be more comparable to the AI in Tron or WarGames than anything Asimov produced.
Well put. The AI we have today isn’t even aware of its own sentences, just the tokens. We’re a very long way from I, Robot.
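To make “just the tokens” concrete, here’s a small illustration using the tiktoken library (the exact IDs depend on which encoding you pick):

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("We're a very long way from I, Robot.")

print(ids)                             # integer token IDs: all the model ever sees
print([enc.decode([i]) for i in ids])  # the text fragments those IDs map back to
```

Sentences, words, and meaning only exist on our side of that mapping.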
Far enough that you can just stop thinking about it and will never have to think about it again in your lifetime. Except if you rewatch “I, Robot” maybe, or some similar movie.
That’s a positive I think.
I think everyone has taken your question and run with it using the assumption that you’re talking about the AGI part, and maybe you were. But in the background of that story were functional robots that didn’t (initially) have AGI, but were pretty basic in following directions and rules. They were still far beyond what we have now, but robots don’t have to have true AGI to do some jobs, as we’ve slowly been seeing them work towards. The danger is giving them more than they can actually do and assuming that a broader capability for interaction is enough to make them work well (LLMs in everything).
So my answer is still far away, but not as far away as AGI, unless there’s some breakthrough of course, which none of us can predict either way. And anyone who claims they’re sure about that is just talking; a breakthrough, by definition, comes unexpectedly.
I hope we don’t get AGI at this point. We’ve shown how careless we can be with such things through LLMs, and AGI to LLM is like nuclear to bottle rockets.
Also, while I was writing this reply, even more people popped up using Asimov as a guideline. Did no one ever actually read his stories?
Unknown/Never.
We don’t have actual AI anything. Just LLMs and brainless image gen.
As far away from that happening as we were when the movie was made.
Not in our lifetimes.
Isaac Asimov’s robots were hardware based systems built using positronics. Each robot had a unique positronic brain that implemented its basic programming in hardware. They were designed to mimic a human brain.
What “AI” tools we have now are glorified grammar checkers that can’t understand what they’re spouting. Comparing them to Asimov’s robots is like comparing a toddler’s drawing of a car to a Rolls-Royce fully loaded with every option.
By sources of power alone, I would say pretty far.
I would disagree here. They already had a dual-battery robot that could swap its own batteries.
“I, Robot?”
Not going to happen, because nobody intends to let them observe the Three Laws at all.
Physically, I’d say a few years away.
The software is the thing that is going to take much longer, maybe decades.
Why do you think that?
They’re wrong. Robots today are pretty functional. The hardware is there; the scale is not, and the software is not.
Although functional, they aren’t “I, Robot” functional. Robots in that movie could:
- Sprint through a crowded sidewalk
- Rescue someone from a sinking car full of water
- Do martial arts
- Jump from the 20th floor, land, and run away.
- Strong enough to push a car 20 to 30 feet on its side.
Upon reflection, I agree with your assessment: a few years away physically.
I would take Elon Musk’s estimate and slap a 0 on the end.
I haven’t seen I, Robot, but if it’s something generally akin to human-level intelligence, nobody will have a definitive answer, since we don’t know exactly which technical problems remain unsolved. It’s not impossible that there could be some eureka moment that suddenly makes everything work, but I would bet against the next decade. And I’m not saying “in ten years”, just that I don’t think it’s something we will do within a ten-year window.
The stuff we’ve been doing recently isn’t a fundamentally new breakthrough, but incremental work. The hardware got better, and it reached the point where we could do some interesting things. I don’t think we’re going to get human-level AI just from making increasingly tweaked LLMs; I think there are fundamental technical improvements that have to happen. Right now, a lot of money is being spent to take advantage of the technical development that has happened thus far. I’m sure that will find applications, that we’ll do things with it. But I don’t think that alone is going to get us to human-level intelligence, and a lot of that money is not going directly towards developing human-level AGI, but towards making what we’ve developed so far have practical applications.
My guess is that there will probably be multiple layers of problems to solve. We solve the first one, then we find the next problem to solve. You probably won’t see some announcement that some team has just gone and “solved human-level AI” all at once.