The writer is international policy director at Stanford University’s Cyber Policy Center and special adviser to the European Commission
Hardly a day goes by without a new proposal on how to regulate AI: research bodies, safety agencies, a body modelled on the International Atomic Energy Agency and branded an ‘IAEA for AI’ . . . the list keeps growing. All these suggestions reflect an urgent desire to do something, even if there is no consensus on what that “something” should be. There is certainly a lot at stake, from employment and discrimination to national security and democracy. But can political leaders actually develop the necessary policies when they know so little about AI?
This is not a cheap stab at the knowledge gaps of those in government. Even technologists have serious questions about the behaviour of large language models (LLMs). Earlier this year, Sam Bowman, a professor at NYU, published “Eight Things to Know about Large Language Models”, an eye-popping article revealing that these models often behave in unpredictable ways and that experts do not have reliable techniques for steering them.
Such questions should give us serious pause. But instead of prioritising transparency, AI companies are shielding data and algorithmic settings as proprietary information protected by trade secrecy. Proprietary AI is notoriously unintelligible, and it is growing ever more secretive even as the power of these companies expands.
The recent Foundation Model Transparency Index, published by my colleagues at Stanford’s Institute for Human-Centered Artificial Intelligence, sheds light on how much we don’t, and may never, know. Researchers assessed LLMs from market leaders OpenAI, Meta, Anthropic and Hugging Face, examining everything from the data and human labour involved in their construction to their distribution policies. Among the 100 indicators, researchers scrutinised the use of copyrighted data, privacy measures and security testing. The top-scoring model, Meta’s Llama 2, received only 54 out of 100: a resounding F if it were a student paper.
Transparency isn’t just an important ideal. It is essential to successful AI accountability. Without it, researchers cannot assess foundation models for bias or security threats, and regulators and the public will remain unaware of the risks embedded in technologies used in healthcare or law enforcement.
This information desert also gives corporate leaders a disproportionate influence over how politicians understand the technology. Those in government seem likely to view these executives as exalted experts with noble intentions and unmatched insight — despite their very obvious profit motives. We saw this dynamic on full display at the US Senate’s recent AI forum, a closed-door gathering where many of the industry’s leading CEOs and venture capitalists uncritically cheered on the technology’s progress.
The lack of transparency on the inner workings of powerful tech products has not produced particularly good outcomes in the past. As Rishi Bommasani, a lead researcher on the Transparency Index, recounts: “We’ve seen deceptive ads and pricing across the internet, unclear wage practices in ride-sharing, dark patterns tricking users into unknowing purchases, and myriad transparency issues around content moderation that have led to a vast ecosystem of mis- and disinformation on social media.” Businesses and consumers using AI will experience intensifying harms that grow harder to detect. Antitrust violations can hide inside black-box systems, and those seeking “fake” favourable product reviews benefit from the same AI technology that can put deceptive political ads on steroids.
Politicians are often reluctant to admit what they don’t know, but creating the conditions for inquiry, scrutiny and oversight is key to producing well-informed AI regulation. Foundation models need transparency standards. Data access for researchers is equally important, yet curiously lacking in the EU AI Act, which is currently the world’s most comprehensive AI law. The G7’s newly released code of conduct and this week’s White House executive order on AI also fail to put transparency front and centre. The UK government has narrowly focused on safety scrutiny ahead of its AI summit this week.
Ongoing opacity will stand in the way of accountability and allow companies to act recklessly. The result is a cascade of problems, from the empowerment of Silicon Valley to a critical information deficit among the media, the public and regulators. More relevant policy depends on a better understanding of the technology and its business models. Only with that understanding can we tackle the multitude of risks that AI presents.