Big Tech leaders are exaggerating the existential threat AI poses to humanity to solidify their market shares through government regulation, a leading figure in artificial intelligence told an Australian financial publication Monday.
“There are definitely large tech companies that would rather not have to try to compete with open source, so they’re creating fear of AI leading to human extinction,” Andrew Ng, co-founder of Google Brain (since merged into Google DeepMind) and an adjunct professor at Stanford University, told John Davidson of the Australian Financial Review.
“It’s been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community,” he added.
OpenAI CEO Sam Altman has been outspoken about the need for government regulation of AI. In May, Altman, along with DeepMind CEO Demis Hassabis and Anthropic CEO Dario Amodei, signed a statement from the Center for AI Safety that likened the risk AI poses to humanity to that of nuclear war and pandemics.
“The notion that AI systems will spiral out of control and make humans extinct is a compelling plotline in sci-fi thrillers, but in the real world, the fear is more an exaggeration than a likely scenario,” said Aswin Prabhakar, a policy analyst for the Center for Data Innovation, a think tank studying the intersection of data, technology, and public policy, in Washington, D.C.
Prabhakar explained that the journey towards creating artificial general intelligence (AGI), a form of AI that surpasses human intellect across all fields, still has a long, uncertain road ahead.
“Even if AGI were realized, which is by no means certain, for it to pose an existential threat, it would have to go rogue and break free from the control of its human creators, which is still an incredibly speculative scenario,” he told TechNewsWorld.
He added that contemplating an AI-induced apocalypse unfairly sidelines the technology’s immense and concrete benefits. “The gains from AI in fields like health care, education, and economic productivity are enormous and could significantly uplift global living standards,” he asserted.
Posing Challenges to Open-Source AI
Ala Shaabana, co-founder of the Opentensor Foundation, an organization committed to developing AI technologies that are open and accessible to the public, said of the Big Tech AI leaders: “They are definitely using scare tactics. There are a lot more things that we should be more closely concerned with than the risk of humanity becoming extinct. That’s a tad exaggerated. It’s all PR.”
“Creating artificial general intelligence — AI with consciousness that can think for itself and do for itself without human intervention — is the holy grail of AI,” he told TechNewsWorld. “But how can we develop something that’s conscious when we don’t understand consciousness ourselves?”
Government regulation could pose a threat to the open-source AI community, noted Rob Enderle, president and principal analyst with the Enderle Group, an advisory services firm in Bend, Ore.
“It depends on how the laws and regulations are written,” he told TechNewsWorld. “But governments often do more harm than good, particularly in areas not yet well understood.”
Prabhakar added that overly broad government rules on AI could pose challenges for the open-source community.
“Such regulations, especially if they place responsibility on developers of open-source AI systems for how their tools are used, could discourage contributors,” he said. “They might fear legal issues if their open-source AI tools are misused, making them less likely to share their work freely.”
Choking Open Source With Red Tape
Prabhakar recommended the government take a tailored approach to AI oversight. “Recognizing that open-source projects have different incentives compared to commercial ones and creating exceptions in the regulations for open-source models might be a solution,” he explained.
“By adjusting the rules to better fit the open-source spirit, we can aim for a scenario where regulation and open-source innovation coexist and thrive,” he reasoned.
Shaabana maintained that the Executive Order on Artificial Intelligence released by the White House Monday contained provisions favoring Big Tech AI companies over open-source developers.
“The Executive Order is extremely manual,” he explained. “It requires a lot of resources to comply with if you’re a small company or small researcher.”
“One requirement is that any company developing artificial intelligence models will have to start reporting the training of that model and getting it approved by the U.S. government,” he said. “Unless you’re a big-time researcher or a Meta, OpenAI, or Google, you’re not going to make it through that red tape. Those companies will have their own divisions to get through that.”
The open-source community won’t be the only community that could be harmed by government regulation of AI, he continued.
“The scientific community will also be affected,” he contended. “In the last two years, researchers in climate change, biology, astronomy, and linguistics have used open-source AI to do their research. If those open-source models weren’t available, that research would not have been possible.”
Hidden Costs of AI Regulation
While regulations can hurt small, open-source AI players, they can benefit the current AI establishment. “Strict regulation on AI creates significant barriers to entry, particularly for emerging ventures lacking the requisite capital or expertise to navigate the many regulatory mandates,” Prabhakar explained.
“The upfront costs associated with adhering to stringent regulations could potentially stifle the emergence of innovative startups, thereby consolidating the market around well-established players,” he continued.
“Big Tech firms are better poised to comply with and absorb the costs associated with a rigorous regulatory framework,” he said. “Unlike their SME counterparts, they have the capital, expertise, and infrastructure necessary to navigate the regulatory maze. This disparity creates a moat around the established players, potentially shielding them from the brunt of competition.”
The absence of a moat to stave off open-source competition was the subject of a memo attributed to a Google researcher that caused quite a stir when it was leaked online in May. In it, the researcher argued that “a third faction has been quietly eating our lunch” — open-source AI models that are “faster, more customizable, more private, and pound-for-pound more capable” than those of Google and OpenAI.
Shaabana praised President Biden’s Executive Order for aiming to integrate AI into government but added, “A lot of it looks like Big Tech trying to close the door behind them.”
“They’ve created this fancy AI, and they really don’t want any competition or just competition among a handful of companies that can get through the government process,” he continued.
“Ironically,” he said, “a lot of the government’s fears about bias, transparency, and anti-competitiveness can all be resolved with open source AI and without regulation.”
“What’s going to happen if we let this slide, and we let these companies control everything? They will continue to create their own AI and continue to make themselves richer from our data,” he predicted.