The promise — and peril — of generative AI

The writer is founder of Sifted, an FT-backed site about European start-ups

Even though it was written 37 years ago, Melvin Kranzberg’s first law of technology resonates today in our fast-evolving world of artificial intelligence. “Technology is neither good nor bad, nor is it neutral,” Kranzberg wrote.

As the historian explained, whether a technology is considered good or bad mostly depends on context and time. The same invention can produce different results in different contexts, and those outcomes can change as circumstances evolve.

So, for example, the use of the pesticide DDT was initially welcomed as a means of killing malaria-bearing mosquitoes and boosting agricultural productivity. But after the environmentalist Rachel Carson exposed how DDT wrought ecological damage and poisoned people, it was banned in many countries. Even then, India continued to use DDT for disease prevention, helping to cut the annual death toll from malaria from 750,000 to 1,500.

Two recent, and radically different, use cases of AI highlight the duality of our latest general purpose technology as well as the difficulties of separating the good from the bad.

First, the good. Since 1971, volunteers working for Project Gutenberg have built a valuable public resource: a vast library of free and accessible digital books. The project now makes available more than 60,000 out-of-copyright books as part of its mission to break down the “bars of ignorance and illiteracy.” For years, the project has been keen to turn those ebooks into audiobooks to benefit people with impaired vision, but the costs have been prohibitive. Now, thanks to AI, it has generated 5,000 audiobooks at incredible speed and minimal cost and is releasing them on Spotify, Google and Apple.

This audio project, led by pro bono researchers at the Massachusetts Institute of Technology and Microsoft, used the latest neural text-to-speech technology to mimic the quality and tone of human voices and create 35,000 hours of audiobook recordings in just over two hours. “It’s a little more robotic than your normal human. But our goal was to get them out fast in the least offensive manner,” Mark Hamilton, an MIT researcher and co-lead of the project, tells me.

The researchers have demonstrated an even more astonishing application that enables audiobooks to be read in anyone’s voice using just five seconds of sample audio. Parents might one day use that app to “read” night-time books to their children even when they are away from home. But the potential dangers of voice cloning are obvious and the research team is rightly wary about releasing the technology. “A lot of people in AI like to say that it’s all roses. But realistically, machine learning is a very, very powerful tool that can be wielded for good and evil,” says Hamilton.

An example of how AI technology has been used for evil occurred this month in the Spanish town of Almendralejo, where a group of boys shocked the community by circulating AI-generated nude images of 28 local girls on WhatsApp and Telegram. Pictures of the girls were copied from their social media accounts and then manipulated using a generative AI app. Amid a national political furore, a prosecutor is now examining whether any crime has been committed.

Gema Lorenzo, a mother of a 16-year-old son and a 12-year-old daughter, said all parents were concerned. “You’re worried about two things: if you have a son you worry he might have done something like this; and if you have a daughter, you’re even more worried, because it’s an act of violence,” she told the BBC.

The promise, and peril, of generative AI is that it is now so readily accessible and can be deployed at extraordinary speed and scale. More than 100mn users experimented with OpenAI’s ChatGPT chatbot within two months of launch. And technology companies are now building even more powerful multimodal models that combine text, images, audio and video.

“Many of our technology-related problems arise because of the unforeseen consequences where apparently benign technologies are employed on a massive scale,” Kranzberg wrote. That is true with social media and is becoming the case with generative AI, too. As with other technologies, a messy period of behavioural, societal and legislative adaptation will follow.

A Sequoia Capital report posted this month suggested that generative AI was still in its “awkward teenage years.” The industry certainly has a lot of growing up to do and parental intervention will be required.
