AI is approaching an open-source inflection point



The cause of open-source artificial intelligence — the idea that the inner workings of AI models should be openly available for anyone to inspect, use and adapt — has just passed an important threshold.

Mark Zuckerberg, chief executive of Meta, claimed this week that the latest of his company’s open-source Llama models is the first to reach “frontier-level” status, meaning it is essentially on a par with the most powerful AI from companies such as OpenAI, Google and Anthropic. From next year, according to Zuckerberg, future Llama models will move ahead to become the world’s most advanced.

Whether or not that happens, both the welcome and unwelcome effects of opening up such a powerful technology for general use have been thrown into sharp relief. Models like Llama are the best hope of preventing a small group of large tech companies from tightening their stranglehold on advanced AI. But they could also put a powerful technology into the hands of disinformation-spreaders, fraudsters, terrorists and rival nation-states. If anyone in Washington had been thinking of challenging the open spread of advanced AI, now would probably be the time.

The emergence of Meta as the main champion of open source in the AI world has had an unlikely feel to it. Early on, the company then known as Facebook abandoned its ambition of becoming an open platform, on which any developer could build services, and turned itself instead into one of the internet’s most closed “walled gardens”. Nor is Meta’s open-source AI exactly open source. The Llama models have not been released under a licence recognised by the Open Source Initiative, and Meta retains the right to prevent other large companies from using its technology.

Yet the Llama models meet many of the tests of openness — most people can inspect or adapt the “weights” that determine how they work — and Zuckerberg’s claims to being a convert to open source out of enlightened self-interest have the ring of truth.

Unlike Google or Microsoft, Meta does not make its money by selling direct access to AI models, and it would find it hard to compete head-to-head in the technology. Yet leaving itself dependent on other companies’ technology platforms could be a risk — as Meta discovered to its cost in the smartphone world, when Apple changed its iPhone privacy rules in ways that devastated Meta’s business.

The alternative — nurturing an open-source alternative that could win wider backing in the tech industry — is a well-trodden strategy. The list of companies lining up behind the latest Llama model this week suggests it is starting to have an effect. They include Amazon, Microsoft and Google, which are offering access through their clouds.

In claiming that open source is in many ways safer than the traditional proprietary alternative, Zuckerberg has tapped into a powerful force. Many users want to see the inner workings of the technology they depend on, and much of the world’s core infrastructure software is open source. In the words of computer security expert Bruce Schneier: “Openness = security. It’s only the tech giants that want to convince you otherwise.”

Yet for all the advantages of the open-source approach, is it simply too dangerous to release powerful AI in this form?

The Meta CEO argues that it is a myth to believe the most valuable technology can be kept secure from nation-state rivals: China, he says, will steal the secrets regardless. For a national security establishment wedded to the idea that some secrets can in fact be kept, that argument is likely to ring hollow.

When it comes to less powerful adversaries, meanwhile, Zuckerberg argues that the experience of running a social network shows that combating malignant uses of AI is an arms race that can be won. As long as the good guys have more powerful machines at their disposal than the bad guys, all will be fine. Yet that assumption may not hold: anyone can, in theory, rent powerful computing capacity on demand through one of the public cloud platforms.

It’s possible to imagine a future world where access to such massive computing power is regulated. Like banks, cloud companies could be required to follow a “know your customer” rule. There have been suggestions that governments should directly control who has access to the chips needed to build advanced AI.

That may eventually be the world we’re heading towards. But if so, it’s still a long way off — and through open source, freely available AI models are already racing ahead.
