The drama at artificial intelligence company OpenAI this month has sparked renewed focus on the technology’s potential and limitations.
There is speculation that its co-founder and CEO Sam Altman was fired (only to be rehired days later) because his team had made a major advance in AI technology that alarmed the board.
The supposed advance relates to what's called artificial general intelligence, or AGI, which would be a dramatic leap beyond AI as we know it so far.
What Is Artificial Intelligence, or AI?
Like any computer program, AI is a set of rules used to produce answers to problems. An algorithm, in other words. A human plugs in information and asks the program to sort and analyze it, and the program comes up with the answers.
The difference between AI and a pocket calculator is largely a matter of scale: you can feed in an enormous amount of information, and the program has an enormous number of rules it can use to sort through it.
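To make that concrete, here is a toy sketch of "rules applied to information" in Python. The phrases and labels are invented for illustration; real AI systems learn far more elaborate rules from data rather than having them written in by hand.

```python
# A toy "algorithm": hand-written rules applied to incoming information.
# Real AI operates at vastly greater scale, but the principle is the same:
# information in, rules applied, answer out.

def classify_message(text: str) -> str:
    """Label a message by checking it against a few simple rules."""
    rules = {                # phrase -> label (invented for illustration)
        "win a prize": "spam",
        "free money": "spam",
        "meeting at": "work",
    }
    for phrase, label in rules.items():
        if phrase in text.lower():
            return label
    return "unknown"

print(classify_message("You could WIN A PRIZE today!"))  # spam
print(classify_message("Reminder: meeting at 3 today"))  # work
```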
That's the basis of ChatGPT, the language program developed by OpenAI. The program can absorb all the text on Wikipedia and from thousands of books. Its rules are, in essence, the statistical patterns of language it has picked up from all that text. So when you ask it a question, it can mimic a human response by mining that trove of information to come up with a convincing answer.
The same basic principle is behind the AI that can imitate people's voices, create fake images or videos, or run self-driving cars. What has advanced in recent years is the sheer volume of information that can be fed into these programs and the sophistication of the analysis they can do with it, along with the development of easy-to-use interfaces.
Now, it's important to remember how AI generates these humanlike outputs: it uses the information it has been fed, along with rules and statistical probabilities, to produce the results. It's only as good as the information that goes into it and the quality of the rules it has to work with.
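That statistical idea can be shown with a minimal sketch: count which words follow which in some training text, then generate new text by sampling from those counts. Real chatbots use neural networks trained on vastly more text, but the predict-a-likely-next-word principle is similar. The training sentence here is made up for the example.

```python
import random
from collections import defaultdict

# Learn simple next-word statistics from a tiny "training" text, then
# generate new text by repeatedly picking a statistically likely next word.
text = "the cat sat on the mat and the cat ran to the door"
words = text.split()

following = defaultdict(list)      # word -> list of words seen after it
for current, nxt in zip(words, words[1:]):
    following[current].append(nxt)

word, output = "the", ["the"]
for _ in range(8):
    choices = following[word]
    if not choices:                # no known continuation; stop
        break
    word = random.choice(choices)  # frequent followers are picked more often
    output.append(word)

print(" ".join(output))            # e.g. "the cat sat on the mat and the"
```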
It's also worth pointing out that that's not how humans think. When the chess champion Garry Kasparov played against the supercomputer Deep Blue, he relied on memory, instinct, and judgment to make his moves. Deep Blue, by contrast, searched through enormous numbers of possible move sequences, evaluating millions of positions per second. (As it turns out, Kasparov won the first match in 1996. Deep Blue won the rematch a year later.)
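The flavor of that move-by-move search can be seen in the textbook minimax algorithm, sketched below on a deliberately tiny game (Deep Blue's real engine was far more elaborate and ran on specialized hardware): two players take turns removing one or two stones from a pile, and whoever takes the last stone wins.

```python
# Minimax game-tree search on a toy game: players alternate removing 1 or
# 2 stones from a pile; whoever takes the last stone wins. The program
# "looks ahead" by recursively considering every possible sequence of moves.

def best_outcome(stones: int, my_turn: bool) -> int:
    """Return +1 if the first player can force a win from here, else -1."""
    if stones == 0:
        # The previous move took the last stone, so whoever is "to move"
        # now has already lost.
        return -1 if my_turn else +1
    outcomes = [best_outcome(stones - take, not my_turn)
                for take in (1, 2) if take <= stones]
    # Each side assumes the opponent will also play perfectly.
    return max(outcomes) if my_turn else min(outcomes)

print(best_outcome(6, my_turn=True))  # -1: piles of 3, 6, 9... are lost
print(best_outcome(7, my_turn=True))  # +1: take 1, leaving a losing 6
```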
How Is Artificial General Intelligence, or AGI, Different?
The difference between AI and AGI lies in how they learn. Current AI learns by being fed more information by humans. AGI, by contrast, would be able to recognize when it doesn't know something and seek out new information for itself. It could create or modify its own algorithms when it sees that its outputs don't match the real world. It could, essentially, teach itself, something that current AI doesn't really do.
There is speculation that Sam Altman and OpenAI may have taken a step in this direction with a program called Q*. Roughly speaking, it would be able to learn through trial and error as well as anticipate new kinds of problems before encountering them. That would certainly be an advance, though, according to a report, it may only be able to solve math problems at a grade-school level.
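Nothing public confirms how Q* actually works, but its name echoes Q-learning, a decades-old reinforcement-learning technique in which a program improves purely through trial and error. Here is a minimal, purely illustrative sketch; the corridor task and every parameter are invented for the example and have no connection to OpenAI's system.

```python
import random

# Tabular Q-learning (toy example): an agent in a 5-cell corridor learns,
# purely by trying moves and observing rewards, that stepping right
# reaches the goal.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                     # step left, step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(1000):
    state = random.randrange(GOAL)     # start somewhere short of the goal
    while state != GOAL:
        # Mostly exploit what has been learned; sometimes explore at random.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == GOAL else 0.0
        # Trial and error: nudge the value estimate toward what was observed.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next
                                       - q[(state, action)])
        state = nxt

# The learned policy: typically +1 (step right) in every non-goal cell.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(GOAL)])
```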
Still, if programs can teach themselves, it is an open question whether they could develop judgment, reasoning, and instinct, qualities that for now belong to humans alone.
This is where the worry creeps in: what happens if computers become smarter than people? It's certainly the stuff of science fiction.
But it hasn’t happened in real life. At least not yet.
Write to Brian Swint at [email protected]