Figure 01 + ChatGPT: a groundbreaking integration that raises ethical concerns

A new humanoid robot that runs on ChatGPT from OpenAI reminds us of Skynet, the AI from the science fiction movie Terminator.

Although Figure 01 is not a lethal robot, it is capable of basic autonomous tasks and, with ChatGPT’s assistance, of holding real-time conversations with humans.

The machine uses ChatGPT to recognize objects, plan future actions, and even reflect on its memory, as shown in a demonstration video released by Figure AI.

The robot captures images of its environment with onboard cameras and forwards them to a large vision-language model trained by OpenAI, which interprets the scene and returns language the robot can act on.

In the video, a man asked the humanoid to wash dishes, put away dirty clothing, and give him something to eat, and the robot duly carried out the tasks, though Figure seems more hesitant in its replies than ChatGPT.

In an attempt to address worker shortages, Figure AI expects that its first artificial intelligence humanoid robot will prove capable of tasks dangerous for human workers.

‘Two weeks ago, we announced Figure + OpenAI are joining forces to push the boundaries of robot learning,’ Figure founder Brett Adcock wrote on X.

‘Together, we are developing next-generation AI models for our humanoid robots,’ he added.

Adcock added that the robot was not being remotely controlled, and that ‘this was filmed at 1.0x speed and shot continuously.’

The comment about it not being controlled may have been a dig at Elon Musk, who shared a video of Tesla’s Optimus robot to show off its skills, only for it to emerge later that a human was operating it from a distance.

Investors including Jeff Bezos, Nvidia, Microsoft, and OpenAI have contributed $675 million to Figure AI, in a funding round announced in February 2024.

‘We hope that we’re one of the first groups to bring to market a humanoid,’ Brett Adcock told reporters last May, ‘that can actually be useful and do commercial activities.’

In the latest video, a man gives Figure various jobs to complete, one of which is asking the robot to hand him something edible from the table.

Adcock said that the video demonstrated the robot’s reasoning through its end-to-end neural networks, meaning a single model is trained to map raw camera and speech input directly to speech and actions, with no hand-coded steps in between. ChatGPT was trained on vast amounts of data to hold conversations with human users: it can follow the instructions in a prompt and provide a detailed response, and the language model inside Figure works the same way. The robot ‘listens’ for a prompt and responds with the help of its AI.
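To make that prompt-and-response idea concrete, the sketch below shows a basic ‘listen, then reply’ cycle in Python using OpenAI’s public chat API. It is only an illustration under assumptions: Figure’s onboard software is not public, so the model name and the system prompt here are placeholders rather than anything the company has disclosed.

```python
# Minimal sketch of the "listen for a prompt, then respond" loop described
# above. Assumption: OpenAI's public Python client stands in for whatever
# model Figure actually runs; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def respond(transcribed_speech: str) -> str:
    """Turn one transcribed human utterance into a short spoken-style reply."""
    completion = client.chat.completions.create(
        model="gpt-4o",  # placeholder; Figure's actual model is not public
        messages=[
            {"role": "system",
             "content": ("You are a humanoid robot assistant. Answer briefly "
                         "and describe any action you are about to take.")},
            {"role": "user", "content": transcribed_speech},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(respond("Can you give me something to eat?"))
```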

Nevertheless, a recent study that tested ChatGPT in war-gaming scenarios found that, like Skynet in Terminator, it chose to destroy its enemies almost 100% of the time.

For now, though, Figure is assisting people. The man in the video ran another demonstration, asking the robot to describe what it saw on the table in front of it.

Figure responded: ‘I see a red apple on a plate in the center of the table, a drying rack with cups and a plate, and you standing nearby with your hand on the table.’

Beyond conversation, Figure shows off its housekeeping abilities, placing dishes in the drying rack and clearing away the trash.

‘We feed images from the robot’s cameras and transcribed text from speech captured by onboard microphones to a large multimodal model trained by OpenAI that understands both images and text,’ Corey Lynch, an AI engineer at Figure, said in a post on X.

‘The model processes the entire history of the conversation, including past images, to come up with language responses, which are spoken back to the human via text-to-speech,’ he added.
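The pipeline Lynch describes extends the earlier sketch: camera frames and transcribed speech are appended to a running conversation history, the multimodal model replies in text, and that text is spoken aloud via text-to-speech. The sketch below illustrates one such turn with OpenAI’s public Python API; the model names, the system prompt, and the output file are assumptions standing in for Figure’s private stack.

```python
# Sketch of one conversational turn in the pipeline Lynch describes:
# image + transcribed speech in, text reply out, reply spoken via TTS.
# Assumptions: OpenAI's public API stands in for Figure's private stack;
# "gpt-4o" and "tts-1" are placeholders, not confirmed model choices.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Running conversation history; every turn (text, images, replies) is kept
# so the model can draw on past context, as the demo suggests.
history = [{
    "role": "system",
    "content": ("You are a helpful humanoid robot. Describe what you see "
                "and say what you are about to do before acting."),
}]

def image_part(jpeg_bytes: bytes) -> dict:
    """Wrap one camera frame as an image content part for the chat API."""
    encoded = base64.b64encode(jpeg_bytes).decode()
    return {"type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{encoded}"}}

def conversational_turn(jpeg_frame: bytes, transcribed_speech: str) -> str:
    """Append the frame and utterance to the history, get a reply from the
    multimodal model, store it, and synthesize it as speech."""
    history.append({
        "role": "user",
        "content": [{"type": "text", "text": transcribed_speech},
                    image_part(jpeg_frame)],
    })
    completion = client.chat.completions.create(
        model="gpt-4o",  # placeholder; Figure's actual model is private
        messages=history,
    )
    reply = completion.choices[0].message.content
    history.append({"role": "assistant", "content": reply})

    # Convert the reply to audio; playing it through the robot's speaker
    # is hardware-specific and left out here.
    audio = client.audio.speech.create(model="tts-1", voice="alloy", input=reply)
    audio.stream_to_file("reply.mp3")
    return reply
```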

Figure paused while answering questions in the demo video, filling the gaps with ‘uh’ or ‘um’, which some users said gave the bot a more human-like voice. Adcock stated that he and his team are ‘starting to approach human speed’, even if the robot still moves more slowly than a person.

A little over six months following the $70 million fundraising round in May of last year, Figure AI revealed a groundbreaking agreement to deploy Figure on BMW’s factory floors.

The German automaker signed a deal to employ the humanoids initially in a multibillion-dollar BMW plant in Spartanburg, South Carolina, which produces electric vehicles and assembles high-voltage batteries.

Although the announcement was vague on the exact responsibilities of the bots at BMW, the companies stated that they planned to “explore advanced technology topics” as part of their “milestone-based approach” to working together.

Adcock has framed the company’s goal as addressing a gap in the industry: shortages of complex, skilled labor that traditional automation methods have not been able to resolve.

‘We need humanoid [robots] in the real world, doing real work,’ Adcock said.

It was only to be expected that ChatGPT’s conversational capabilities, given its already impressive performance, would be used as the brain of reasoning, talking robots. Gradually, a path is emerging towards robots with fluid movement and reasoning abilities far beyond those of the generation that preceded OpenAI’s models.

While the integration of ChatGPT into a humanoid robot like Figure 01 demonstrates exciting progress in AI and robotics, it also raises important questions about safety and ethical considerations. ChatGPT, like many large language models, is essentially a “black box”; its decision-making processes are opaque, and its outputs can be unpredictable or biased based on the training data used.

As we move towards deploying such AI systems in physical robots that can interact with and affect the real world, we must exercise caution and implement robust safety measures. The potential consequences of failures or unintended behaviors in these systems could be severe, particularly in sensitive environments like manufacturing plants or around human workers.

Perhaps it is time to revisit and adapt principles akin to Isaac Asimov’s famous “Three Laws of Robotics” for the age of advanced AI. We need clear ethical guidelines and fail-safe mechanisms to ensure that these AI-powered robots prioritize human safety, remain under meaningful human control, and operate within well-defined boundaries.

Responsible development and deployment of these technologies will require close collaboration between AI researchers, roboticists, ethicists, and policymakers. While the potential benefits of AI-powered robotics are vast, we must proceed with caution and prioritize safety and ethics alongside technological progress.

Ultimately, as we continue to push the boundaries of what is possible with AI and robotics, we must remain vigilant and proactive in addressing the potential risks and unintended consequences that could arise from these powerful systems.