Over the past three years, Péter Fankhauser’s industrial robots went from climbing stairs to jumping between boxes, doing backflips and performing other parkour-style tricks.
The robots were not programmed to perform these new actions; instead, they adapted to their environment, powered by new artificial intelligence models.
“These are the moments where you think this is the next revolution,” said Fankhauser, chief executive of ANYbotics, a Zurich-based robotics start-up. “These things started to move really artistically, and it’s almost scary because the robots play with physics.”
Over the past decade, the capabilities of the $74bn robotics sector have accelerated thanks to significant leaps in AI, such as advances in neural networks, systems that mimic the human brain.
The world’s biggest tech and AI companies, including Google, OpenAI and Tesla, are racing to build the AI “brain” that can autonomously operate robots, in moves that could transform industries from manufacturing to healthcare.
In particular, improved computer vision and spatial reasoning capabilities have allowed robots to gain greater autonomy while navigating varied environments, from construction sites to oil rigs and city roads.
Training and programming robots previously required engineers to hardwire rules and instructions that taught the machine how to behave, often specific to each system or environment.
The advent of deep learning models in recent years has enabled experts to train AI software that allows machines to be far more adaptive and reactive to unexpected physical challenges in the real world, and to learn by themselves.
Generative AI — technology that can generate and parse multimedia and text — has also allowed machines to acquire a greater understanding of the world around them and communicate with humans more easily. The technology has helped those without coding skills to instruct computers using text or voice prompts.
“It’s like watching a toddler learning,” said Carina Namih, a partner at Plural, a London-based early-stage investment fund. “Because the robots are not deterministically programmed but self-learning, you don’t have high engineering costs in the same way.”
While many of the advances are expected to appear in industrial settings and on factory floors, large AI companies have also renewed their focus on humanlike robots, known as humanoids.
Earlier this year, Google DeepMind announced a suite of advances in its research, including harnessing large language models to train humanoid robots and help them understand and navigate their surroundings better and more safely. This is also a problem being worked on by World Labs, founded by the “godmother of AI” Fei-Fei Li, which has gained a $1bn valuation in just four months.
OpenAI, the creator of ChatGPT, formed a robotics research group last month, having disbanded its general-purpose robotics effort in 2020, and has also been investing in start-ups. These include Figure, which raised $675mn at a valuation of $2.6bn in February from investors such as OpenAI, Microsoft, Jeff Bezos and Nvidia.
OpenAI also invested in Oslo-based 1X Robotics, which raised more than $100mn this year, in its effort to create everyday bots for domestic tasks.
A recent analysis by McKinsey valued the global market for humanoid robots at just over $1bn, a small fraction of the total robotics market. However, it said it was growing at more than 20 per cent annually and three times faster than the market for conventional industrial robots.
Experts warn the technology is still limited and expensive. China’s Unitree Robotics sells its humanoid robot for $16,000. Elon Musk said Tesla would start using and making humanoids next year and sell them more widely from 2026.
Still, start-ups have tried to capitalise on the hype. Deals in robotics and drones have hit $6.5bn in value this year across 552 deals, on track to overtake the $9.7bn raised across 1,256 deals over the whole of 2023, according to data provider PitchBook. However, overall investment in the sector has steadily fallen since 2021.
“Robotics is a really hard challenge to crack — prior solutions have been expensive and inflexible, hampering adoption among small and medium-sized companies in particular,” said Luciana Lixandru, a partner at venture capital firm Sequoia Capital, which has invested in a number of AI robotics start-ups including RobCo and Collaborative Robotics. “Advances in AI can help overcome some of the constraints.”
Much of the investing is in early-stage companies. Mytra, a robotics company for warehouse automation, announced this week that it had raised $78mn across three rounds. Munich-based RobCo raised $42.5mn in February to support its flexible robotic hardware kits.
San Francisco-based Tetsuwan Scientific recently closed a $2.5mn funding round and aims to create an AI robot scientist that can conduct research and physical experiments in a lab. Its chief executive, Cristian Ponce, said robots would be able to reproduce experiments with greater accuracy than humans, freeing up scientists for creativity and discovery.
“Generative AI [has been] applied to stupid things that don’t matter, like back-office tasking or accounting software, but applying generative AI to scientific discovery is the most impactful thing that we could do,” Ponce said.
Meanwhile, consumers’ mass adoption of AI tools has had a knock-on effect on attitudes towards robotics, said Sonali Fenner of management consultancy Slalom.
This has enabled companies to consider using robots in public-facing settings. Fenner used the example of a large retail client that deployed Spot, a Boston Dynamics robot dog powered by Google’s Gemini Pro model, in its stores to assess inventory.
“[The hype] has opened the door a little bit to what is expected in particular environments and, even if you don’t want Spot walking around your retail store, you might accept a slightly less intrusive robot,” Fenner added.
Ahti Heinla, co-founder of Skype and chief executive of delivery bot start-up Starship Technologies, which has deployed the small grocery robots across more than 100 cities and towns in Europe and the UK, said he was surprised at how easily people “perceive the robots to be normal participants in public space, and accept them as natural”.