Humanity “needs to wake up” to the potentially catastrophic risks posed by powerful AI systems in the years to come, according to Anthropic boss Dario Amodei, whose company is among those pushing the frontiers of the technology.
In a nearly 20,000-word essay, posted on Monday, Amodei sketched out the risks that could emerge if the technology develops unchecked — ranging from large-scale job losses to bioterrorism.
“Humanity is about to be handed almost unimaginable power and it is deeply unclear whether our social, political and technological systems possess the maturity to wield it,” Amodei wrote.
The essay marked a stark warning from one of the most powerful entrepreneurs in the AI industry that current safeguards around AI are inadequate.
Amodei outlines the risks that could arise with the advent of what he calls “powerful AI” — systems that would be “much more capable than any Nobel Prize winner, statesman or technologist” — which he predicts will arrive within the next “few years”.
Among those risks is the potential for individuals to develop biological weapons capable of killing millions or “in the worst case even destroying all life on earth”.
“A disturbed loner [who] can perpetrate a school shooting, but probably can’t build a nuclear weapon or release a plague . . . will now be elevated to the capability level of the PhD virologist,” wrote Amodei.
He also raises the potential of AI to “go rogue and overpower humanity” or to empower authoritarians and other bad actors, leading to “a global totalitarian dictatorship”.
Amodei, whose company Anthropic is the chief rival to ChatGPT-maker OpenAI, has clashed with David Sacks, President Donald Trump’s AI and crypto “tsar”, over the direction of US regulation.
He has also likened the administration’s plans to sell advanced AI chips to China to selling nuclear weapons to North Korea.
Trump signed an executive order last month to hamper state-level efforts to regulate AI companies, and published an AI action plan last year laying out measures to accelerate US innovation.
In the essay, Amodei warned of sweeping job losses and a “concentration of economic power” and wealth in Silicon Valley as a result of AI.
“This is the trap: AI is so powerful, such a glittering prize, that it is very difficult for human civilisation to impose any restraints on it at all,” he added.
In a veiled reference to the controversy around Elon Musk’s Grok AI, Amodei wrote that “some AI companies have shown a disturbing negligence towards the sexualisation of children in today’s models, which makes me doubt that they’ll show either the inclination or the ability to address autonomy risks in future models”.
AI safety concerns such as bioweapons, autonomous weapons and malicious state actors featured prominently in public discourse in 2023, partly driven by warnings from leaders such as Amodei.
That year, the UK government organised an AI safety summit at Bletchley Park, where countries and labs agreed to work together to counter such risks.
But political decisions around AI are increasingly being driven by a desire to seize the opportunities presented by the new technology rather than mitigate its risks, according to Amodei.
“This vacillation is unfortunate, as the technology itself doesn’t care about what is fashionable, and we are considerably closer to real danger in 2026 than we were in 2023,” he wrote.
Amodei was an early employee at OpenAI but left in 2020, after clashing with Sam Altman over the company’s direction and AI guardrails, to co-found Anthropic.
Anthropic is in talks with groups including Microsoft and Nvidia and investors including Singaporean sovereign wealth fund GIC, Coatue and Sequoia Capital about a funding round of $25bn or more, valuing the company at $350bn.