CES 2026, the world’s largest consumer technology expo, felt close to being overrun by robots in Las Vegas this week. They rolled through aisles, paused at intersections, carried crates, guarded entrances and drew crowds. Movement across the Las Vegas Convention Centre turned into a shared exercise in human and machine awareness, with visitors adjusting their pace to accommodate devices that appeared increasingly comfortable in public space.
The robots were not confined to a single hall. Industrial machines demonstrated logistics tasks with robotic arms that felt more like factory fixtures than prototypes. Dozens of low-cost Chinese-made humanoid robots seemed to portend the imminent arrival of affordable home robots for all. Little wonder that estimates put the market at about $50bn (around R827bn) a year today, a figure expected to more than double by the end of the decade. Some of the more wild-eyed forecasts put the number in the trillions.
Humanoid robots were the primary attraction.
UniX AI arrived at the show with its full-size humanoids, Wanda 2.0 and Wanda 3.0, positioned as ready-to-work service robots for hotels and managed facilities. In staged scenarios, they smoothly poured drinks and replenished amenities.
Nearby, the focus shifted to the domestic sphere. Luka showcased its family-focused Luka Robot, designed for “generation Alpha”. Unlike the industrial units, the Luka Robot is built for multimodal interaction with children: reading stories, holding conversations, and responding to emotional cues. Its presence underlined how robotics now stretches beyond labour into educational and emotional engagement, provided the environment is controlled and safe.
The difference between robots built to work and robots built to engage shaped the experience of the show.
“Robots tend to be used in very simple use cases,” said Paul Stafford, CEO of robotics and physical-AI company Haply. “They’re used in highly automated environments, where the robot moves from point A to point B. It does that, and it doesn’t know anything other than A to B.”
The moment robots entered shared, unpredictable spaces, the strain showed, he said. Some paused when people crossed their paths; others slowed noticeably. This is where the technical gap between vision and sensation becomes a chasm.
Many robots at the show, including the high-profile humanoids, rely heavily on vision systems, processing the world through cameras and AI models like Nvidia’s Project GR00T. They see a person, process the pixels and then calculate a response. But this creates a subtle “stutter” in logic.
“Bridging the gap between how a human carries out a task and bringing that into a robotic world is extremely complex,” said Stafford.
“A lot of what’s being done now uses vision. Cameras see at a relatively slow rate: typically 30 to 60 frames per second. But when you touch something, your biological sensors and the haptic loop are working much faster, often at 1,000Hz or more.”
This technical gap explains the stiffness visible in many humanoids. Because they lack instinctive tactile feedback, their motion follows pre-calculated scripts rather than real-time physical sensation. If a Wanda robot grasps a glass, it is largely “calculating” the grip based on what it sees. If the glass is slightly oily or heavier than expected, the vision system might not catch the slip until it is too late.
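The numbers make the mismatch easy to see. The rough Python sketch below is purely illustrative: the loop rates are the figures Stafford cites, while the slip scenario and function names are hypothetical and not drawn from Haply’s or Nvidia’s software.

# Illustrative only: worst-case delay before a control loop can even observe
# a grip slip, at camera rates versus haptic rates. The rates are the figures
# quoted above; everything else is a hypothetical example.

VISION_RATE_HZ = 30     # typical camera frame rate (30-60 frames/second)
HAPTIC_RATE_HZ = 1000   # typical force/tactile sampling rate (1,000Hz or more)

def worst_case_delay_ms(sample_rate_hz: float) -> float:
    """One full sampling period: the longest a slip can go unseen before the
    controller gets its next reading, ignoring processing time."""
    return 1000.0 / sample_rate_hz

print(f"vision loop: up to {worst_case_delay_ms(VISION_RATE_HZ):.1f}ms blind")   # 33.3ms
print(f"haptic loop: up to {worst_case_delay_ms(HAPTIC_RATE_HZ):.1f}ms blind")   # 1.0ms

At 30 frames/second, a glass can be sliding for a thirtieth of a second, plus processing time, before the robot has any chance to react; a 1,000Hz touch loop closes that window to about a millisecond.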
Stafford’s work at Haply focuses on this precise “sensation” gap. The company develops systems that capture human movement through touch and force, recording motion at extremely high precision so robots can learn from how people actually perform tasks, rather than just how they look while doing them.
“We have a device that captures your movements in a very natural way,” he said. “You hold it and move as you would when doing a task. It captures movement at extremely high rates, with very high accuracy.”
That approach showed its value in robots trained through direct human guidance. They adapted more smoothly to variation, behaving less like machines following instructions and more like extensions of human intent. It suggests that the “ChatGPT moment” for robotics will not come from better vision models but from better “feeling” models: systems that can sense pressure, weight and friction in real time.
The most credible robots at CES shared a common assumption: people remain involved. Warehouse robots worked under supervision; lab robots assisted technicians. Even the service humanoids operated inside defined scenarios. Care robots and companions spoke gently and reacted to faces, but the functional boundaries remained. If a person falls, can the robot pick them up? Usually, the answer in 2026 is still no.