TIME magazine has named the ‘Architects of AI’ its 2025 Person of the Year, for a year in which AI’s potential “roared into view” with no turning back.
“For delivering the age of thinking machines, for wowing and worrying humanity, for transforming the present and transcending the possible, the Architects of AI are TIME’s 2025 Person of the Year,” the magazine announced.
2025 was the year when artificial intelligence’s full potential roared into view, and when it became clear that there will be no turning back.
For delivering the age of thinking machines, for wowing and worrying humanity, for transforming the present and transcending the… pic.twitter.com/mEIKRiZfLo
— TIME (@TIME) December 11, 2025
Whether you use it or not, AI has dominated headlines all year. On one hand, AI has proven useful in an increasing number of applications. On the other, it’s potentially rotting our brains. What’s hilarious is that less than 6 months ago, TIME published a piece titled “ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study.”
In it, MIT researchers found that “the usage of LLMs could actually harm learning, especially for younger users.”
“What really motivated me to put it out now before waiting for a full peer review is that I am afraid in 6-8 months, there will be some policymaker who decides, ‘let’s do GPT kindergarten.’ I think that would be absolutely bad and detrimental,” said the paper’s main author Nataliya Kosmyna.
Then there’s ‘vibe coding,’ where ‘programmers’ simply ask an AI to write code for them instead of writing it themselves. And of course, you can’t scroll Facebook now (why would you?) without running into mountains of ‘AI slop,’ which spans everything from fake scientific journals to brain-rotting videos seemingly designed to drag Western society’s average IQ into double digits.
McDonald’s has unveiled its own AI-generated Christmas ad that somehow looks even worse than Coca-Cola’s.
Terrible AI visuals? Check. Horrible messaging? Check. A like-to-dislike ratio that says it all? Oh, you better believe that’s a check:https://t.co/XQHnVLoG5T pic.twitter.com/Vestu3uNJS
— 80 LEVEL (@80Level) December 8, 2025
In 2023, Elon Musk called AI one of humanity’s “biggest threats,” which, he says, is why he set out to create a “politically neutral” and “maximally truth-seeking” chatbot (Grok) with the aim of minimizing bias.
Maximally truth-seeking is absolutely essential to ensuring a good AI future for humanity https://t.co/6QnWy3v1AB
— Elon Musk (@elonmusk) July 25, 2025
“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it is, it has the potential — however small one may regard that probability, but it is non-trivial – it has the potential of civilization destruction,” Musk told Tucker Carlson. “A regulatory agency needs to start with a group that initially seeks insight into AI, then solicits opinion from industry, and then has proposed rule-making.”
According to TIME, “It was hard to read or watch anything without being confronted with news about the rapid advancement of a technology and the people driving it. Those stories unleashed a million debates about how disruptive AI would be for our lives. No business leader could talk about the future without invoking the impact of this technological revolution. No parent or teacher could ignore how their teenager or student was using it.”
“Every industry needs it, every company uses it, and every nation needs to build it,” Nvidia CEO Jensen Huang told the outlet. “This is the single most impactful technology of our time.”
Indeed.