The writer is a winner of the Turing Award, a full professor at Université de Montréal and the founder and scientific director of Mila — Quebec Artificial Intelligence Institute
We still don’t know what actually happened at OpenAI. But recent events should prompt us to take a step back and ask broader questions about the kind of governance required for organisations that are both developing powerful frontier artificial intelligence systems and explicitly aiming to create human-level intelligence, or AGI.
Should they be for-profit organisations overseen by a board responsible to shareholders? Should they be non-profit organisations with a mission of greater good? Or a hybrid of the two? Could they be nationalised and placed fully under government control? Or do we need new forms of governance that would seek to reconcile our shared democratic values with the financial gains and power that future frontier systems promise to those who control them?
I often remind myself that democracy is foremost about sharing power, and that our democratic institutions — with their checks and balances — are designed to avoid its concentration, even in the hands of a few elected officials.
When it comes to AI, future outcomes are highly dependent on who has decision rights. As development accelerates, benefits and risks grow accordingly. We already have AIs that can generate realistic fake images of politicians to influence elections. Many are concerned that, in the near future, malicious actors could use AI systems to help design and release deadly biological weapons. There is growing fear that risks could escalate beyond our ability to rein them in.
The decisions about how to develop and deploy AI, then, may soon drastically affect society. If that is the case, should they be left entirely in private hands? As seen with the oil and gas industry with regard to climate change, we cannot fully trust profit-driven companies to take into account the societal implications of their activity. It is a classic tragedy of the commons, in which the best move for individual players is not well aligned with collective wellbeing.
Unfortunately, we may also not be able to rely on non-profit structures to prevail. As seen at OpenAI, investors ultimately hold a great deal of influence. Another option would be for AI progress to be funded by governments, essentially nationalising the labs or directing their mission through contracts and oversight. However, this comes with its own set of risks, including misuse for authoritarian goals or warfare.
For truly responsible governance of AI, we therefore need to avoid a single point of failure. As I laid out in a recent paper, we need strong, independent and democratic oversight, involving not only a national regulator but also civil society, independent academics and the international community. The governance structure must be multi-stakeholder and multilateral. One objective is to minimise conflicts of interest with commercial goals and focus on safety-first R&D. Another is to protect against the possibility that a lab’s system falls into the wrong hands or becomes a dangerous runaway entity. If labs share their results, these other institutions would be in a position to defend society.
While much uncertainty remains, wrestling with these questions is urgent. Reports suggest that OpenAI may have recently made a breakthrough, Q*, that greatly increases reasoning and mathematical abilities. If this proves true, we may have moved a step closer to AGI.
I am well positioned to appreciate the implications of such a breakthrough through my own research. My group came up with the attention mechanisms which led to the transformers that are the engine behind today’s frontier systems. I believe the main remaining gap between current advanced systems and AGI is what we could refer to as conscious cognition — abilities such as reasoning, deliberate thought and explicit planning.
I have argued for many years that although deep learning has made huge strides in the cognitive capabilities that correspond to human intuition (system 1), current methods remain weak at the conscious cognition (system 2) that humans rely on to reason their way to correct answers. If OpenAI has made progress on this, AGI may be much closer than many of us expected.
Regardless of who gets there first, if we bridge the gap to human intelligence soon, will society be ready to respond? Do we have the appropriate governance mechanisms and other means to mitigate potentially dangerous outcomes? I do not think so. This discussion must happen urgently. It should reflect democratic values and democratic will. The OpenAI saga should serve as a strong warning.