The future of AI agents: highly lucrative but surprisingly boring

Silicon Valley visionaries dream of making mega-money out of cool, futuristic products that thrill consumers, such as the metaverse, self-driving cars or health-monitoring apps. The duller reality is that most venture capitalists generate the best returns from investing in boring stuff sold to other businesses.

Over the past two decades, Software-as-a-Service has emerged as one of the most lucrative fields for VC investment, generating 337 unicorns, or tech start-ups valued at more than $1bn. But typical SaaS businesses, such as customer relationship management systems, payments processing platforms and collaborative design tools, rarely quicken the consumer’s pulse. Investors love them all the same: they are capital-light, speedily scalable and able to generate torrents of revenue from dependable, and often price-insensitive, corporate licences.

That may well be the case with generative artificial intelligence, too. For the moment, consumers are still dazzled by the seemingly magical ability of foundation models to generate reams of plausible text, video and music and to clone voices and images. The big AI companies are also trumpeting the value of consumer-facing personal digital agents that are supposedly going to make all our lives easier. 

“Agentic” is going to be the word of next year, OpenAI’s chief financial officer, Sarah Friar, recently told the FT. “It could be a researcher, a helpful assistant for everyday people, working moms like me. In 2025, we will see the first very successful agents deployed that help people in their day to day,” she said.

While the big AI companies, like OpenAI, Google, Amazon and Meta, are developing general-purpose agents that can be used by anyone, a small army of start-ups is working on more specialised AI agents for business. At present, generative AI systems are mostly seen as co-pilots that augment human employees, helping them write better code, for example. Soon, AI agents may become autonomous autopilots to replace business teams and functions altogether.

In a recent discussion, the partners at Y Combinator said the Silicon Valley incubator had been deluged with mind-blowing applications from start-ups looking to apply AI agents to fields that include recruitment, onboarding, digital marketing, customer support, quality assurance, debt collection, medical billing, and searching and bidding for government contracts. Their advice was: find the most boring, repetitive administrative work you can and automate it. Their conclusion was that vertical AI agents could well become the new SaaS. Expect more than 300 AI agent unicorns to be created.

Two factors may slow the rate of adoption, however. First, if AI agents really are capable of replacing entire teams and functions, then it seems unlikely that line managers will adopt them in a hurry. Managerial suicide is not a strategy taught at most business schools. Ruthless chief executives who understand the technology might brutally impose it on their underlings in pursuit of greater efficiency. Or, more likely, new company structures will evolve as start-ups seek to exploit AI agents to the maximum. Some founders are already talking about creating zero-employee autonomous companies. Their Christmas parties may be a bit of a drag, though.

The second frustrating factor may be concerns about what happens when agents increasingly interact with other agents and humans are out of the loop. What does this multi-agent ecosystem look like and how does it work in practice? How can anyone ensure trust and enforce accountability? 

“You have to be super careful,” says Silvio Savarese, a professor at Stanford University and chief scientist at Salesforce, the giant SaaS company that is experimenting with AI agents. “We need guardrails to make these systems behave appropriately.”

Trying to model — and control — intelligent multi-agent systems is one of the most intriguing areas of research today. One way is to train AI agents to flag areas of uncertainty and seek assistance when confronted with unrecognised challenges. “An AI should not be a competent liar. It has to come to a human and say, ‘Help me,’” Savarese says.
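To make that idea concrete, here is a minimal sketch, in Python, of what such an “ask for help” guardrail might look like: the agent reports a self-assessed confidence score alongside its answer, and the task is escalated to a human when that score falls below a threshold. Every name here (AgentResponse, toy_agent, the 0.75 threshold) is a hypothetical illustration, not Salesforce’s or any vendor’s actual implementation.

```python
# Hypothetical sketch of the "ask for help" pattern Savarese describes:
# an agent that admits uncertainty and escalates to a human instead of
# guessing. All names and thresholds here are illustrative assumptions.

from dataclasses import dataclass

ESCALATION_THRESHOLD = 0.75  # below this confidence, a human takes over


@dataclass
class AgentResponse:
    answer: str
    confidence: float  # the agent's self-reported certainty, 0.0 to 1.0


def handle_task(task: str, agent_respond) -> str:
    """Route a task: act autonomously if confident, otherwise escalate."""
    response: AgentResponse = agent_respond(task)
    if response.confidence >= ESCALATION_THRESHOLD:
        return response.answer
    # The guardrail: admit uncertainty rather than be a "competent liar".
    return (
        f"Help me: I am unsure about '{task}' "
        f"(confidence {response.confidence:.2f}). Escalating to a human."
    )


def toy_agent(task: str) -> AgentResponse:
    # Toy stand-in for a real model call, for demonstration only.
    known = {"refund under $50": AgentResponse("Approve the refund.", 0.95)}
    return known.get(task, AgentResponse("(best guess)", 0.30))


print(handle_task("refund under $50", toy_agent))   # handled autonomously
print(handle_task("novel legal dispute", toy_agent))  # escalated to a human
```

The design choice is the point: the interesting engineering is not in the answer itself but in deciding when the agent should refuse to answer.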

Otherwise, the worry is that improperly trained agents might run out of control, just like the magical broom instructed to fetch buckets of water in Johann Wolfgang von Goethe’s poem “The Sorcerer’s Apprentice”. “The spirits that I summoned are ignoring my command, they are out of my control,” the apprentice laments, surveying the mess caused by his inexpert magic. It’s funny how age-old fictional dilemmas are now taking on surprising new computational forms.
