AI chatbots have made significant progress in recent years, finding roles as personal assistants, customer service representatives, and even therapists. These advances are powered by large language models (LLMs) trained on vast amounts of internet text data. Tech leaders such as Elon Musk and NVIDIA CEO Jensen Huang believe that a similar approach could soon produce humanoid robots capable of performing complex tasks like surgery or serving as in-home butlers.
However, robotics experts remain skeptical about these predictions. Ken Goldberg, a professor at the University of California, Berkeley, argues that current expectations for humanoid robots are exaggerated.
“No; I agree that robots are advancing quickly but not that quickly. I think of it as hype because it’s so far ahead of the robotic capabilities that researchers in the field are familiar with,” said Goldberg.
He explained that while AI systems like ChatGPT have shown impressive abilities in vision and language processing, this does not mean humanoid robots will reach similar milestones soon. “We’re all very familiar with ChatGPT and all the amazing things it’s doing for vision and language, but most researchers are very nervous about the analogy that most people have, which is that now that we’ve solved all these problems, we’re ready to solve [humanoid robots], and it’s going to happen next year,” Goldberg said. “I’m not saying it’s not going to happen, but I’m saying it’s not going to happen in the next two years, or five years or even 10 years. We’re just trying to reset expectations so that it doesn’t create a bubble that could lead to a big backlash.”
Goldberg pointed to dexterity as one major challenge: “The big one is dexterity, the ability to manipulate objects. Things like being able to pick up a wine glass or change a light bulb. No robot can do that.” He referenced Moravec’s paradox—the observation that tasks humans find easy often prove difficult for machines—to explain why simple actions like picking up a glass remain elusive for robots.
To illustrate what he calls the “100,000-year data gap,” Goldberg compared the amount of text used to train LLMs with available training data for robotics: “To calculate this data gap, I looked at how much text data exists on the internet and calculated how long it would take a human to sit down and read it all. I found it would take about 100,000 years.” He added: “We don’t have anywhere near that amount of data to train robots… We believe that training robots is much more complex, so we’ll need much more data.”
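Goldberg’s figure can be checked with a back-of-envelope calculation. The sketch below uses illustrative assumptions (a corpus of roughly 15 trillion words, in line with reported LLM training scales, and an adult reading speed of about 250 words per minute); these are not Goldberg’s exact inputs, but they land in the same range he describes.

```python
# Back-of-envelope sketch of the "100,000-year data gap".
# Both inputs below are assumptions for illustration, not Goldberg's figures.
CORPUS_WORDS = 15e12       # assumed ~15 trillion words of internet text
WORDS_PER_MINUTE = 250     # assumed typical adult reading speed

minutes_to_read = CORPUS_WORDS / WORDS_PER_MINUTE
years_to_read = minutes_to_read / 60 / 24 / 365  # nonstop reading, no breaks

print(f"{years_to_read:,.0f} years")  # on the order of 100,000 years
```

With these inputs the result is roughly 114,000 years of continuous reading, consistent with Goldberg’s order-of-magnitude claim.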
Some suggest using videos from sources like YouTube for training robot motions; however, Goldberg says this approach falls short because translating 2D video into detailed 3D motion information remains difficult.
Simulation has helped improve some robotic skills—especially dynamic movements—but has limited success when applied to fine motor tasks required in many real-world jobs. Teleoperation offers another method for gathering training data but scales slowly: every eight hours worked provides only eight hours of new data.
Goldberg believes robotics is experiencing a paradigm shift marked by debate between proponents of traditional engineering methods—physics-based modeling—and those who argue massive datasets alone will drive future breakthroughs: “Most roboticists still believe in what I call good old-fashioned engineering… But there is a new dogma… They say that data is all we need to get us to fully functional humanoid robots.”
Despite enthusiasm among investors and some researchers for purely data-driven approaches—a trend seen across media coverage—Goldberg advocates blending both traditions: “I’ve been advocating that engineering, math and science are still important because they allow us to get these robots functional so they can collect the data we need.”
Real-world examples include Waymo collecting driving experience from autonomous vehicles on public roads (https://waymo.com/), gradually improving performance through accumulated operational knowledge. Ambi Robotics uses package-sorting robots deployed in warehouses where ongoing work generates additional training information (https://ambirobotics.com/).
Goldberg cautions against fears of widespread job loss due to automation: “To my mind as a roboticist, the blue-collar jobs, the trades, are very safe. I don’t think we’re going to see robots doing those jobs for a long time.” Instead, he sees greater potential impact on roles involving routine paperwork or certain customer service functions.
“One example that’s very subtle is customer service… Many companies want to replace customer service jobs with robots,” Goldberg noted. Yet computers, he argued, cannot express empathy, a key component when dealing with frustrated customers.
Similarly, regarding healthcare automation, he asked: “Some claim AI can read X-rays better than human doctors. But do you want a robot to inform you that you have cancer?”
While concerns persist over automation’s effect on employment prospects—a fear dating back centuries—Goldberg remains optimistic about human relevance: “The fear that robots will run amok and steal our jobs has been around for centuries but I’m confident humans have many good years ahead—and most researchers agree.”
Recent developments show deep learning techniques help improve robotic manipulation skills (https://news.berkeley.edu/2020/09/17/deep-learning-helps-robots-grasp-and-move-objects-with-ease), yet experts emphasize these advancements fall short of delivering general-purpose humanoid laborers anytime soon.