Artificial intelligence has always had a brain. Robotics gives it hands. The era of devices that think and act has arrived.
There is a deeper irony here. The very tool designed to help us document our lives can pull us out of them. Optimized for scrolling and passive consumption, today’s smartphones compete relentlessly for our attention. The numbers are sobering: research from the Center for Humane Technology shows that people spend an average of 150 minutes on social media every day, which adds up to more than a full year of screen time over a decade. The device meant to connect us has, in many ways, untethered us from the present moment.
Three principles for the next generation of personal devices
Unlocking the next wave of human creativity requires more than faster chips or smarter algorithms. It demands a fundamental rethinking of the devices themselves—their bodies, not just their brains. Three guiding principles can chart that path forward.
1. Break free from static form factors. For too long, hardware innovation has been confined to the interior of an unchanging shell. The next generation of devices must be as dynamic as the lives they are meant to serve. Form should not merely follow function—it should amplify it, unlocking creative possibilities that a rigid rectangle simply cannot offer.
2. Design for creation, not just consumption. The smartphone’s architecture has been refined over the years to make it easier to watch, scroll, and swipe. The devices of tomorrow must be purpose-built for making, equipped with specialized hardware that puts professional-grade creative tools directly in people’s hands.
3. Bring AI into the physical world. Software intelligence operating behind a flat display is only part of the equation. The real breakthrough lies in embedding that intelligence into hardware that can perceive, move, and respond to physical space—devices that act less like screens and more like collaborators.
An industry beginning to wake up
These principles are no longer purely theoretical. A growing number of technology leaders are arriving at the same conclusion: the era of the attention-hungry display device must give way to something more human-centered. Recent announcements from companies including Apple and OpenAI about next-generation AI wearables signal that the broader industry is beginning to explore hardware that fits into life rather than demanding that life fit around it. The race to build genuinely helpful personal devices is now underway.
From manifesto to machine: The robot phone
These principles have shaped HONOR's long-term research and development philosophy—grounded in the conviction that technology should adapt to human life, not demand the reverse. At MWC in Barcelona, the company introduced a concrete expression of that philosophy: the HONOR Robot Phone.
At the heart of the device is a robotic camera gimbal—an AI-driven, motorized stabilizer capable of precise, automated camera movement. Rather than a minor hardware upgrade, this represents a genuine rethinking of the smartphone’s form. The gimbal can autonomously track subjects, intelligently compose shots, and adapt to the physical environment in real time, effectively functioning as a personal cinematographer. Users are freed from the lens, able to be present in the moments they are capturing rather than managing the mechanics of the shot.
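The control idea behind autonomous subject tracking—keep the subject centered by converting its offset in the frame into small motor adjustments—can be sketched as a simple proportional loop. Everything below (function names, gains, frame resolution) is illustrative, not HONOR's actual implementation:

```python
def clamp(value, lo, hi):
    """Limit a value to the range [lo, hi]."""
    return max(lo, min(hi, value))

def track_step(subject_x, subject_y, frame_w=1920, frame_h=1080,
               gain_deg=2.0):
    """One control tick of a hypothetical tracking gimbal.

    Takes the subject's pixel position (e.g. from a face or object
    detector) and returns (pan, tilt) adjustments in degrees that
    nudge the camera back toward centering the subject.
    """
    # Subject's offset from frame center, normalized to -1..1.
    err_x = (subject_x - frame_w / 2) / (frame_w / 2)
    err_y = (subject_y - frame_h / 2) / (frame_h / 2)

    # Proportional response, clamped so the arm moves smoothly
    # rather than jerking toward the target.
    pan = clamp(gain_deg * err_x, -gain_deg, gain_deg)
    tilt = clamp(gain_deg * err_y, -gain_deg, gain_deg)
    return pan, tilt
```

Run every frame, small corrections like these accumulate into smooth, continuous tracking; a production system would layer a detector, motion prediction, and stabilization on top of this basic loop.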
The engineering behind this required genuine ingenuity. To house the mechanism without inflating the phone’s footprint, HONOR developed a bespoke micro motor that is 70% smaller than motors typically found in smartphones. When not in use, the robotic arm retracts into a discreet compartment on the rear of the device. To deploy, a panel slides open, the arm extends, and the camera is ready—a sequence that doubles as a kind of visual personality. The camera module itself carries a 200-megapixel sensor and a four-degrees-of-freedom gimbal with three-axis stabilization, currently the smallest of its kind. The result is footage that stays smooth regardless of how the arm moves, with full 360-degree rotation and support for precise tilting, panning, and cinematic spin shots.
HONOR describes the philosophy underpinning this as “embodied AI”—intelligence that manifests through physical movement, not merely voice responses or on-screen prompts. In practice, this translates into a device that behaves more like a collaborator than a tool. AI-powered object tracking locks onto and follows subjects during video calls and recordings, removing the need for constant manual adjustment. Super Steady Video mode compensates for any motion caused by the arm itself, while a feature called SpinShot enables one-handed 90- or 180-degree rotational shots that lend footage a dramatic, cinematic quality. For color science, HONOR partnered with ARRI Image Science—synonymous with professional filmmaking—to calibrate highlights, depth rendering, and tonal balance, yielding results that look far closer to professionally shot material than typical smartphone video. Perhaps most disarmingly, the camera also doubles as the phone’s expressive face: in demos, it nods, shakes, tilts in apparent curiosity, and bobs rhythmically to music, with HONOR having composed bespoke melodies to accompany these small choreographed reactions.
The prototype shown at MWC appears production-ready. HONOR drew on materials and durability expertise from its foldable phone line to engineer moving parts that feel solid and well-built, and the company reports the device has undergone rigorous testing—a reassurance given that previous smartphone experiments with mechanical cameras have had a mixed track record. The phone’s rear panel is slightly thicker where the compartment and gimbal housing sit, though it reportedly remains comfortable to hold.
The road ahead
The Robot Phone is a beginning, not an endpoint. To understand why, it helps to consider what has always been missing from artificial intelligence: a body. For all its remarkable capabilities—reasoning, predicting, generating, and advising—AI has until now been fundamentally disembodied. It exists behind glass, dependent on humans to act on whatever it concludes. It can think, but it cannot reach. Robotics closes that gap. It gives AI the means to intervene in physical reality, not just describe it. In the most precise sense, robotics is the hand of AI.
The analogy runs deeper than it first appears. The human brain is extraordinary, but without a nervous system connected to limbs, it can accomplish little in the world. AI has been in exactly that position—immensely capable in the abstract, but stranded. Robotics provides the nervous system and the hands combined. And just as human intelligence evolved through physical interaction with the world—touching, building, manipulating—AI embedded in robotic systems may develop capabilities that purely digital AI simply cannot reach.
From assistance to agency
This is the philosophical leap that makes the combination so significant. Today, AI assists. It helps you make a decision, draft a message, or find information—but it always stops short of acting. Once AI has robotic hands, whether that is a camera gimbal, a home robot, a surgical arm, or something not yet invented, it crosses from assistance into agency. It does not just advise; it acts.
That shift will ripple through nearly every field. In medicine, AI already diagnoses with remarkable accuracy; robotic hands will let it treat, operate, and care. In manufacturing, AI already optimizes processes; robotic hands will let it build and adapt on the fly. In daily life, AI already manages schedules and surfaces information; robotic hands will let it manage the physical environment directly, not just digitally. The devices of the near future will not simply respond to the world—they will act within it.
Many hands, one intelligence
Not all robotic hands will look like hands. Some will be camera gimbals on a smartphone. Some will be microscopic actuators inside a wearable. Some will be autonomous vehicles navigating a city, surgical instruments operating with sub-millimeter precision, or home appliances that physically adapt to their environment rather than simply alerting you to it. The form will vary enormously depending on the task, but the underlying logic remains the same: AI providing the intelligence, robotics providing the physical reach.
Ordinary devices will feel this shift sooner than many expect. Laptop cameras that physically reframe during calls, home speakers that orient toward whoever is speaking, wearables with actuators that adjust in real time, and appliances that move and adapt rather than simply beep—these are not distant prospects. They are the near-term consequences of the engineering breakthroughs being demonstrated today. The HONOR Robot Phone is one proof point among what will become many.
There are real obstacles to navigate: moving parts wear out, robotic features add cost, and people will need time to grow comfortable with devices that physically move around them. Energy consumption is a genuine concern in a world still reliant on batteries. These are not trivial challenges. But they are engineering and cost problems—the kind that industries have consistently solved when the underlying idea is compelling enough.
The underlying idea here is compelling enough. AI has spent a decade proving what it can think. The next decade will be defined by what it can do—and robotics will be how it does it. The static, passive device is becoming a thing of the past. In its place, slowly and then all at once, will come technology that does not wait to be operated but acts alongside us. The long-promised vision of a truly helpful digital companion is no longer purely theoretical. It has a body now.

