Figure founder and CEO Brett Adcock on Thursday revealed a new machine learning model for humanoid robots. The news, which arrives two weeks after Adcock announced the Bay Area robotics firm’s decision to step away from its OpenAI collaboration, centers on Helix, a “generalist” Vision-Language-Action (VLA) model.
VLAs are a new class of model for robotics, combining visual input with natural language commands to decide how a robot should act. Currently, the best-known example of the category is Google DeepMind’s RT-2, which trains robots through a combination of video and large language models (LLMs).
Helix works in a similar fashion, combining visual data and language prompts to control a robot in real time. Figure writes, “Helix displays strong object generalization, being able to pick up thousands of novel household items with varying shapes, sizes, colors, and material properties never encountered before in training, simply by asking in natural language.”

In an ideal world, you could simply tell a robot to do something and it would just do it. That is where Helix comes in, according to Figure. The platform is designed to bridge the gap between vision and language processing. After receiving a natural language voice prompt, the robot visually assesses its environment and then performs the task.
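For a rough sense of what that prompt-to-action loop involves, here is a minimal, hypothetical sketch. The names here (`VLAModel`, `get_camera_frame`, `send_joint_targets`) are illustrative assumptions about how such a pipeline could be wired together, not Figure’s actual API.

```python
# A minimal, hypothetical sketch of a Vision-Language-Action control
# loop. None of these names come from Figure's Helix system; they are
# assumptions for illustration only.

import time


class VLAModel:
    """Stand-in for a pretrained vision-language-action policy."""

    def predict_actions(self, image, instruction):
        # A real VLA maps (camera frame, text prompt) -> motor targets.
        raise NotImplementedError


def run_task(robot, model, instruction, hz=10.0):
    """Re-plan from fresh camera frames until the robot reports done."""
    period = 1.0 / hz
    while not robot.task_complete():
        frame = robot.get_camera_frame()                # visual assessment
        actions = model.predict_actions(frame, instruction)
        robot.send_joint_targets(actions)               # execute one step
        time.sleep(period)
```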
Figure offers examples like, “Hand the bag of cookies to the robot on your right” or, “Receive the bag of cookies from the robot on your left and place it in the open drawer.” Both of these examples involve a pair of robots working together. This is because Helix is designed to control two robots at once, with one assisting the other to perform various household tasks.
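Figure’s description suggests a single set of policy weights issuing complementary commands to both robots. Extending the sketch above under that assumption (again, every name here is hypothetical):

```python
# Hypothetical extension of the sketch above: one shared policy
# driving two robots with complementary instructions. All names
# are illustrative assumptions, not Figure's actual API.

import time


def run_pair(robot_a, robot_b, model, hz=10.0):
    """Drive two robots from a single set of policy weights."""
    prompts = [
        (robot_a, "Hand the bag of cookies to the robot on your right"),
        (robot_b, "Receive the bag of cookies from the robot on your left "
                  "and place it in the open drawer"),
    ]
    period = 1.0 / hz
    while not all(robot.task_complete() for robot, _ in prompts):
        for robot, prompt in prompts:
            frame = robot.get_camera_frame()
            robot.send_joint_targets(model.predict_actions(frame, prompt))
        time.sleep(period)
```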
Figure is showcasing the VLA by highlighting the work the company has been doing with its 02 humanoid robot in the home environment. Houses are notoriously tricky for robots, as they lack the structure and consistency of warehouses and factories.
Difficulties with learning and control are major hurdles standing between complex robot systems and the home. These issues, along with five- to six-figure price tags, are why the home hasn’t taken precedence for most humanoid robotics companies. Generally speaking, the approach is to build robots for industrial clients first, improving reliability and bringing down costs before tackling dwellings. Housework is a conversation for a few years from now.
When TechCrunch toured Figure’s Bay Area offices in 2024, Adcock showed off some of the paces the company was putting its humanoid through in the home setting. It appeared at the time that the work was not being prioritized, as Figure focused on workplace pilots with corporations like BMW.

With Thursday’s Helix announcement, Figure is making it clear that the home should be a priority in its own right. It’s a challenging and complex setting for testing these sorts of training models. Teaching robots to do complex tasks in the kitchen — for example — opens them up to a broad range of actions in different settings.
“For robots to be useful in households, they will need to be capable of generating intelligent new behaviors on-demand, especially for objects they’ve never seen before,” Figure says. “Teaching robots even a single new behavior currently requires substantial human effort: either hours of PhD-level expert manual programming or thousands of demonstrations.”
Manual programming won’t scale for the home. There are simply too many unknowns. Kitchens, living rooms, and bathrooms vary dramatically from one home to the next, and the same can be said for the tools used for cooking and cleaning. On top of that, people leave messes, rearrange furniture, and prefer different lighting. The method takes far too much time and money, though Figure certainly has plenty of the latter.
The other option is training, and lots of it. Robotic arms trained to pick and place objects in labs often use this method. What you don’t see are the hundreds of hours of repetition it takes to make a demo robust enough to take on highly variable tasks. To pick something up right the first time, a robot needs to have done so hundreds of times in the past.
Like so much surrounding humanoid robotics at the moment, work on Helix is still at a very early stage. Viewers should be advised that a lot of work happens behind the scenes to create the kinds of short, well-produced videos seen in this post. Today’s announcement is, in essence, a recruiting tool designed to bring more engineers on board to help grow the project.