
Top 5 embodied AI innovations

How are innovators creating AI that can perceive, learn from, and interact with the environment?

For all the hype around generative AI, the tools we are familiar with today, such as ChatGPT and Bard, exist in digital space. AI, to date, has had far less of an impact on the physical world in which we all play out our daily lives.

That could all be about to change, however, with the market for ‘embodied AI’ set to triple between 2023 and 2030, reaching a value of $9.4 billion.

What is embodied AI?

Put simply, embodied AI can be defined as AI systems that have the ability to learn from, perceive, and interact with the surrounding environment and are embedded in physical objects like drones, cars, or robots.

But on a deeper level, many experts believe that AI having a physical ‘body’ has a more profound significance. For example, the concept of ‘embodied cognition’ proposes that, in humans, intelligence doesn’t emerge solely from the brain but rather from the interaction between the brain, body, and the person’s surroundings. Applied to AI, this suggests that truly human-like intelligence will only be achieved if we develop AI that has a physical presence and the ability to interact with the environment.

What are the challenges of embodied AI?

There are some practical challenges to embodied AI, such as the need for advanced sensors that reflect human senses, as well as sophisticated mechanical systems for physical robots. These remain hurdles despite significant developments in camera and microphone technology. There is also a trade-off between complexity and scalability when it comes to deploying embodied AI in the real economy.

Another major challenge is training – do you train embodied AI systems ‘IRL’ or through virtual simulators? Picture a toddler. They learn by continuously interacting with their environment – moving around, touching things, and picking up objects. In the process, they make mistakes, falling over or dropping things to the frustration of their parents. Embodied AI systems trained in the real world will also make mistakes, but here the stakes are higher: a mistake could destroy expensive hardware or harm nearby humans.

An alternative approach is to create a virtual representation of the real world and train the AI on that. Meta, for example, has recently released its ‘Habitat 3.0’ simulator. This approach has the benefit of being both quick and safe: simulations run much faster than real-world training, allowing the system to be trained on far more examples. However, the real world is messy, so there is inevitably a gap between simulated experiences and real ones.
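To make the trade-off concrete, here is a minimal sketch of the simulator-based approach: a toy environment and an agent that improves over thousands of cheap, risk-free episodes. The environment, agent, and reward scheme are illustrative assumptions and have nothing to do with Habitat 3.0’s actual interface.

```python
# Minimal sketch of training an embodied agent in a simulator.
# The environment, agent, and reward scheme are illustrative placeholders,
# not Habitat 3.0's real interface.

import random

class SimEnv:
    """Toy 1-D world: the agent starts at position 0 and must reach position 5."""
    def reset(self):
        self.pos, self.steps = 0, 0
        return self.pos

    def step(self, action):                        # action: -1 (left) or +1 (right)
        self.pos += action
        self.steps += 1
        done = self.pos == 5 or self.steps >= 50   # cap episode length
        reward = 1.0 if self.pos == 5 else 0.0
        return self.pos, reward, done

class Agent:
    def __init__(self):
        self.prefer_right = 0.5                    # probability of stepping towards the goal

    def act(self, observation):
        return 1 if random.random() < self.prefer_right else -1

    def learn(self, reward):
        # Crude policy update: successful episodes nudge the agent towards the goal.
        if reward > 0:
            self.prefer_right = min(1.0, self.prefer_right + 0.05)

env, agent = SimEnv(), Agent()
for episode in range(1_000):                       # thousands of episodes cost nothing in simulation
    obs, done = env.reset(), False
    while not done:
        obs, reward, done = env.step(agent.act(obs))
        agent.learn(reward)

print(f"Learned preference for moving towards the goal: {agent.prefer_right:.2f}")
```

Running the same thousand episodes on a physical robot would take far longer and risk damaging the hardware with every fall or collision.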

Despite these challenges, several startups are working on embodied AI in a range of industries.

What startups are developing embodied AI?

Photo source: Skild AI

A UNIVERSAL AI ‘BRAIN’ IDEAL FOR ROBOT RETROFITS

Founded by two Carnegie Mellon University professors, Deepak Pathak and Abhinav Gupta, Pennsylvania-based company Skild AI has created a general-purpose robotics foundation model – or robot ‘brain’ – that has been trained on 1,000 times more data points than its competitors.

Instead of designing a robot for a specific function, the idea is that the Skild Brain embodied system could be retrofitted into virtually any kind of robot, whether that’s a two-legged humanoid machine or a more resilient quadruped.

Once installed, the embodied AI system could, in theory, fulfil a huge range of tasks across various industries, including construction and manufacturing. It could even complete activities within the home.

In particular, this kind of technology could revolutionise dangerous jobs like working on oil rigs or with heavy machinery, helping human workers to stay safe as they work alongside the new machines. Plus, intelligent robots could also help to fill growing labour gaps in industries like logistics, construction, and even healthcare. Read more.

Photo source: Wayve

EMBODIED AI REVOLUTIONISES AUTONOMOUS VEHICLES

One of the main use cases for embodied AI is in autonomous vehicles. UK startup Wayve is building foundation models, similar to a ‘GPT for driving’, which enable a vehicle to ‘see’ and respond to changes on the road and drive through any environment.

Using camera, GPS, and radar/LiDAR sensors together with end-to-end AI, the technology converts raw sensor inputs into safe driving outputs within the car, having been trained on, and continually learning from, real-world data and scenarios on the road. The mapless, hardware-agnostic AI Driver can be adapted to any kind of vehicle and upgraded as Wayve’s technology advances.
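In broad terms, ‘end-to-end’ means a single learned model maps raw sensor readings directly to driving commands, rather than chaining hand-built perception, planning, and control modules. The sketch below shows the general shape of such a model in PyTorch; the inputs, layer sizes, and outputs are assumptions made for illustration and are not Wayve’s architecture.

```python
# Illustrative end-to-end driving model: raw sensors in, control commands out.
# Architecture and dimensions are assumptions for this sketch, not Wayve's design.

import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        # Camera branch: a small convolutional encoder for a front-facing image.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),        # -> 32 features
        )
        # Other sensors (e.g. GPS position, speed, a few radar/lidar summaries).
        self.sensors = nn.Sequential(nn.Linear(8, 32), nn.ReLU())
        # Head: fuse both branches and emit two driving commands.
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(),
            nn.Linear(64, 2),                             # [steering, acceleration]
        )

    def forward(self, image, sensor_vec):
        fused = torch.cat([self.vision(image), self.sensors(sensor_vec)], dim=1)
        return torch.tanh(self.head(fused))               # bounded control outputs

model = EndToEndDriver()
controls = model(torch.rand(1, 3, 128, 128), torch.rand(1, 8))
print(controls)   # two bounded commands: steering and acceleration
```

Because the whole mapping is learned, improving such a system is largely a matter of training on more, and more varied, driving data.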

The company offers a range of AI Driver solutions, from the L2+ AI Driver Assist, which provides an ‘eyes on, hands off’ experience, to the L4 AI Driver, a fully automated ‘eyes off, hands off’ offering. Read more.

Photo source: @geralt from Pixabay via Canva.com

USING NEUROSCIENCE FOR AI THAT MAKES HUMAN-LIKE DECISIONS

When the human brain perceives the world around it, it is not just passively receiving data. Instead, it continuously predicts what will happen next, building an ‘internal model’ of the world that is updated by incoming sensory information and by discrepancies between predictions and actual events. Now, startup Stanhope AI is taking this idea, called Active Inference, and applying it to AI.

Large language models, or ‘LLMs’, which today’s popular generative AI chatbots like ChatGPT are based on, do not use Active Inference. Instead, they can only make ‘best-guess’ decisions based on the vast amounts of data they have already ‘seen’ during their training. They cannot learn on the go. Stanhope AI’s inference models, by contrast, are constantly making and refining predictions based on incoming data. Not only does this make their decisions more ‘human-like’, but it also reduces the risk of hallucination. And, because the models require less energy to run than data-crunching LLMs, they can run on small devices such as drones.
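The core loop is easier to see in miniature. Below is a toy predict-observe-update cycle in the spirit of Active Inference: the agent holds an internal estimate, predicts the next sensor reading, and revises the estimate in proportion to the prediction error. It is a deliberate simplification, not Stanhope AI’s implementation.

```python
# Toy predict-observe-update loop in the spirit of Active Inference.
# A drone tracks a slowly drifting signal (e.g. altitude); this is illustrative,
# not Stanhope AI's actual model.

import random

belief = 0.0            # internal estimate of the hidden state
learning_rate = 0.3     # how strongly prediction errors revise the belief
true_state = 10.0

for step in range(50):
    true_state += random.gauss(0, 0.2)                  # the world drifts slightly
    prediction = belief                                 # predict the next observation
    observation = true_state + random.gauss(0, 0.5)     # noisy sensor reading
    error = observation - prediction                    # discrepancy drives the update
    belief += learning_rate * error                     # revise the internal model
    if step % 10 == 0:
        print(f"step {step:2d}  belief={belief:6.2f}  observation={observation:6.2f}")
```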

The technology is currently being tested with autonomous machines and delivery drones through partnerships with the Royal Navy and Germany’s Federal Agency for Disruptive Innovation. Going forward, it could be used in industrial robotics, manufacturing, and embodied AI. Find out more.

Photo source: Sanctuary AI

GENERAL-PURPOSE SEMI-AUTONOMOUS ROBOTS

In the popular sci-fi imagination, robots are often envisaged as humanoid butlers that can perform a range of tasks. To date, however, this is not how robots have been deployed in the real economy. Instead, most of the work has gone into creating machines that perform specific, singular tasks in fields such as factory automation.

Canadian startup Sanctuary AI, however, is working on creating general-purpose humanoid robots with human-like intelligence. The goal is for these robots to work alongside humans to address increasing challenges with global labour shortages.

There are three parts to Sanctuary’s technology. The company’s 70-kilogramme, 5-foot-7-inch robot design, called ‘Phoenix’, forms the hardware. This is controlled by the ‘Carbon’ AI control system, which mimics sub-systems found in the human brain such as sight, sound, touch, and memory. The third element is the ‘Sanctuary World Engine’, the company’s virtual training environment. Working in concert, the technology can be trained to perform any human task, with training conducted either in the physical world or in the World Engine.

There are three ways in which the robots can be operated, offering differing levels of autonomy. They can be directly piloted by people, operated using pilot assist, or supervised by a human while the robot is controlled by the in-built Carbon system.

A key feature of Sanctuary’s technology is that it uses a blend of symbolic and neural reasoning. Symbolic reasoning is the more traditional form of AI, which uses explicit rules and symbols to process data and make decisions. Neural reasoning, meanwhile, is inspired by the human brain and finds patterns and formulates predictions from huge amounts of data. Using both approaches in combination maximises their individual benefits while minimising their weaknesses. Find out more.
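As a rough illustration of how the two styles can complement each other, the sketch below uses a stand-in ‘neural’ component to estimate what an object is, and explicit symbolic rules to decide what to do with that estimate. It is purely illustrative and not a description of Sanctuary’s Carbon system.

```python
# Illustrative hybrid: neural pattern recognition feeding symbolic rules.
# Not Sanctuary AI's actual Carbon architecture.

def neural_classifier(features):
    """Stand-in for a trained network: returns (label, confidence) from sensor features."""
    # A trivial heuristic plays the role of the learned model here.
    if features["weight_kg"] < 0.5:
        return "fragile_object", 0.92
    return "sturdy_object", 0.88

def symbolic_policy(label, confidence):
    """Explicit, auditable rules deciding how the robot should act on the perception."""
    if confidence < 0.7:
        return "ask_human_for_confirmation"
    if label == "fragile_object":
        return "grip_gently_and_place_slowly"
    return "grip_firmly_and_place_normally"

label, confidence = neural_classifier({"weight_kg": 0.3})
print(symbolic_policy(label, confidence))   # -> grip_gently_and_place_slowly
```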

Photo source: Mentee Robotics

HUMANOID ROBOTS FOR EVERYDAY LIFE

Another startup looking to develop humanoid robots is Israel-based Mentee Robotics. The company describes itself as AI-first, meaning it develops its own AI in-house, combining many different models to create robots that can perform real-world tasks. It also develops all the mechanics and electronics that go into its robots.

The startup integrates between 20 and 30 AI models, most of which are developed in-house, with a small number bought off the shelf. To perform this integration, the technology breaks tasks down into sub-tasks using large language models, enabling the robot to translate voice commands into a set of actions.
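Conceptually, the decomposition step turns one spoken instruction into an ordered list of primitive actions that lower-level controllers already know how to execute. The sketch below illustrates the idea; call_llm is a placeholder for whichever language model is queried, and the prompt and action names are assumptions rather than Mentee’s actual interface.

```python
# Sketch of LLM-based task decomposition for a voice command.
# call_llm is a placeholder for any language-model API; not Mentee Robotics' real interface.

import json

def call_llm(prompt: str) -> str:
    """Placeholder: in practice this would query a language model and return its text."""
    return json.dumps(["navigate_to:kitchen", "locate:cup", "grasp:cup",
                       "navigate_to:table", "place:cup"])

def decompose(voice_command: str) -> list[str]:
    prompt = (
        "Break the following instruction into an ordered list of primitive robot actions, "
        f"returned as a JSON array of strings.\nInstruction: {voice_command}"
    )
    return json.loads(call_llm(prompt))

for action in decompose("Please bring a cup to the table"):
    print(action)    # each sub-task is handed to a dedicated low-level controller
```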

To navigate, the robot combines a 3D ‘semantic’ map of the world with a dynamic map of what is happening right now, built from stereo vision. When the robot is introduced to a new working area, it first needs to map its surroundings, which it does by following a person who ‘shows it around’ the space.
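One common way to combine a persistent map with live perception is to overlay the current frame’s detections on the stored map before planning a path. The simplified sketch below illustrates that pattern; the grid format and labels are assumptions for illustration, not Menteebot’s internal representation.

```python
# Sketch: fuse a static semantic map with dynamic obstacles from stereo vision.
# The grid format and labels are illustrative, not Mentee Robotics' actual representation.

STATIC_MAP = {          # learned once, when the robot is shown around the space
    (0, 0): "floor", (0, 1): "floor", (1, 0): "shelf", (1, 1): "floor",
}

def fuse(static_map, dynamic_detections):
    """Overlay this frame's detections (e.g. a person seen by the stereo cameras)."""
    fused = dict(static_map)
    fused.update(dynamic_detections)        # dynamic observations take priority
    return fused

def is_traversable(cell, fused_map):
    return fused_map.get(cell) == "floor"   # only free floor cells can be walked over

live = {(0, 1): "person"}                   # detected this frame by stereo vision
world = fuse(STATIC_MAP, live)
print(is_traversable((0, 1), world))        # False: re-plan around the person
print(is_traversable((1, 1), world))        # True
```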

One of the company’s main focuses is optimising ‘Sim2Real’ – the process of transferring what is learned in a virtual simulator to the real world. The ‘Menteebot’ is trained in a virtual simulator, with the startup applying a novel approach to closing the gap between the simulator and the real world.
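Mentee has not published the details of its approach, but one widely used way to narrow the Sim2Real gap is domain randomisation: varying the simulator’s physics and sensor noise across training episodes so that the real world looks like just another variation the policy has already seen. The sketch below shows that generic technique, not the startup’s proprietary method.

```python
# Generic domain-randomisation sketch (a common Sim2Real technique,
# not a description of Mentee Robotics' method).

import random

def randomised_sim_params():
    """Sample new physics and sensing parameters for each training episode."""
    return {
        "floor_friction":  random.uniform(0.4, 1.2),
        "payload_kg":      random.uniform(0.0, 5.0),
        "camera_noise":    random.uniform(0.0, 0.05),
        "motor_latency_s": random.uniform(0.0, 0.03),
    }

def run_episode(params):
    """Placeholder for a full simulated walking episode under the given conditions."""
    return random.random()   # stand-in for the episode's reward

for episode in range(5):
    params = randomised_sim_params()
    reward = run_episode(params)
    print(f"episode {episode}: friction={params['floor_friction']:.2f}  reward={reward:.2f}")
```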

The robots have a very natural gait, meaning they can walk in any direction, balance, make small turns, and turn in place. This contrasts with other robot solutions that typically move rigidly and in straight lines. When carrying a heavy load, the robot adjusts its gait as a human would, and its arms and hands also have a full range of motion to perform delicate tasks.

The robots are designed for household and warehouse use, and the production-ready version is expected to be available early next year. Find out more.