Figure AI’s third-generation robot prioritizes mass production over cutting-edge performance
Figure AI has unveiled its third-generation humanoid robot, Figure 03—a bipedal autonomous machine that could mark a turning point in robotics. Rather than chasing cutting-edge performance, the company has focused on something more ambitious: creating a robot practical enough for everyday environments and affordable enough for mass production.
The Model T moment
The automotive industry’s history offers an instructive parallel. Ford’s Model T wasn’t the first car, nor the most sophisticated. Its significance lay elsewhere—it was engineered from the ground up for assembly-line manufacturing and priced for ordinary consumers. Aldous Huxley understood its cultural impact well enough to make it an object of worship in Brave New World.
Today’s humanoid robotics landscape resembles the early automobile era: dominated by engineers and enthusiasts, with luxury prototypes and industrial machines occupying opposite ends of the spectrum. The transformative moment will arrive when a practical humanoid robot becomes as commonplace as a robotic vacuum.
Designed for daily life
Figure 03 aims to bridge that gap. The robot executes tasks typically performed by people, learning through direct interaction via Figure’s proprietary Helix vision-language-action AI system. Its intended domains span residential spaces, warehouses, hotels, and similar everyday settings.
Safety considerations shaped the design philosophy. The robot features soft, washable exteriors with no exposed mechanical components. Compared to its predecessor, Figure 02, the new version weighs 9% less and occupies less space. Its wirelessly charged battery meets the UN38.3 safety standard.
Manufacturing at scale
What distinguishes Figure 03 is its production strategy. The company abandoned expensive CNC machining in favor of die-casting and injection molding—techniques that dramatically reduce both cost and manufacturing time. This approach enables production of 12,000 units annually, with ambitions to reach 100,000 units over four years through a dedicated supply chain at the BotQ facility in San Jose, California.
Technical capabilities
Beyond its artificial intelligence, Figure 03 incorporates several advanced features.
The sensor suite delivers a 60% wider field of view per camera. Each hand detects pressure as light as a few grams, while palm-mounted cameras assist with grasping operations. The onboard processor manages substantial visual data streams even in complex real-world conditions. An upgraded auditory system enhances sound recognition and response capabilities.
The robot even accommodates customizable clothing to suit different contexts. Data transmission occurs wirelessly while the unit charges at its station.
Performance and perception
Promotional footage shows Figure 03 moving with fluid grace across diverse environments—washing dishes, serving beverages, staffing hotel reception desks. The robot interacts seamlessly with both humans and human-designed equipment like washing machines. These capabilities matter precisely because humanoid robots derive their advantage from operating in spaces built for people, using tools designed for human hands, and collaborating with human workers without triggering anxiety or disruption.
A necessary caution
However compelling these demonstrations appear, they warrant skepticism. Recent years have produced numerous videos of robots performing spectacular feats—only for closer examination to reveal severe limitations. A robot that executes a perfect backflip may have no other parkour skills whatsoever.
When we observe Figure 03 performing household chores or customer service tasks, our minds instinctively fill gaps in understanding. We assume human-level capability and flexibility rather than recognizing potential constraints—a system that might freeze when an object occupies an unexpected position, or extend a menu toward empty space when a customer deviates from predicted behavior.
Figure AI’s approach reveals an unconventional strategy: bypass industrial adoption entirely and aim directly for households. Rather than waiting for factories to validate the technology before gradually filtering it down to consumers—the traditional path for most automation—the company is betting that domestic environments will be the proving ground. This direct-to-consumer trajectory could accelerate acceptance by making humanoid robots familiar fixtures in daily life before workplace displacement becomes a contentious issue.
Yet this strategy carries the same cautionary lessons we’ve learned from artificial intelligence deployment. Moving fast and hoping to iterate later can introduce unforeseen consequences when the technology enters intimate spaces. A malfunctioning industrial robot affects production schedules; a household robot that misinterprets a child’s behavior or fails during a critical moment raises more profound concerns about safety, privacy, and dependence.
The risks extend beyond physical safety. As with AI systems, we must consider data collection practices, algorithmic decision-making in our homes, and the psychological effects of anthropomorphizing machines that lack genuine understanding. The question isn’t whether humanoid robots will eventually arrive in our lives—Figure 03 suggests that moment is approaching—but whether we’re implementing adequate safeguards before they do.
Figure AI has chosen the faster, riskier path. Whether this proves visionary or reckless will depend not just on the robot’s capabilities, but on how seriously the industry takes the responsibilities that come with entering our most personal spaces.