We might be surprised by our reactions to generative AI

One of the puzzles about generative artificial intelligence is why the text it conjures is so long-winded. Why does ChatGPT produce 10 paragraphs when one would do? Another puzzle is how we will all behave when it proliferates. There is a chance that we are about to get a lot more terse.  

AI is stretching the reality gap between Silicon Valley and the rest of the world. Most of us are not yet firing up chatbots to write emails or carry out research. But tech companies think it is a foregone conclusion that we soon will. If they are right, the way we communicate is about to be transformed. 

The turning point could be 2024. Plans are in motion to knit generative AI into our everyday lives, particularly at work. Google is about to release its AI model Gemini, while Microsoft will be selling its AI assistant, Copilot. We could find ourselves surrounded by prompts offering to summarise meetings, write emails and fill in spreadsheets.

No one knows how the public will react. Governments fret about catastrophe. On Monday, President Joe Biden issued an executive order that requires AI companies to notify the government if they are developing models that pose a security risk. The UK’s eye-catching AI summit at Bletchley Park this week is likely to examine everything that could go wrong, from enabling fraud to facilitating attacks. But these sorts of outcomes will not occur immediately, if at all. Nor will the dreaded job losses. What will happen first is a shift in our own behaviour.

Most discussion about generative AI focuses on the ways in which it can help users. Little time is spent thinking about the impact on recipients. But computer-generated words carry less weight. At a recent debate on AI in Hong Kong, one of the speakers revealed that what she was reading had been generated by an AI chatbot. The words sounded convincing, but her revelation also stripped them of meaning. My attention instantly wavered.  

My guess is that you will react the same way if you know that what you are reading or hearing was not made by another person. It could also change the way we interact online. Once you realise there is no one at the other end of a message, there is no need to type out complete sentences. What matters are key words. Courtesies become unnecessary too. Even when sending a message to a real person, the knowledge that they are using generative AI to parse that message and extract information means there is no need for niceties. Communication could be pared back to a brusque exchange of facts. 

If you are an optimist, you may believe that this will make us all more productive. It could make real-world interactions more precious too. Fussy formalities may not be missed. Much depends on whether we will have fail-safe ways to know what has been generated by AI and what has not. Without clear watermarks, I think trust will falter.

Not everyone agrees with me. The people I know who already use generative AI (mostly friends who work in tech in San Francisco) say that drafting the prompts necessary for creating the right content means there is still human-to-human involvement. 

Some tech companies believe we will not interact with AI-generated text any differently than we already do with each other. Meta is planning to roll out more AI chatbots with their own Instagram and Facebook accounts. Mark Zuckerberg expects AI experiences to become a meaningful part of all of Meta’s apps, with creators across social media able to create their own AI versions that can interact with fans, create content and, in time, interact with one another. It will be, he says, “almost a new kind of medium and art form”. 

“Art form” is a dubious description. Low-effort monetisation scheme for busy creators might be more accurate. But what sort of interactions with fans does Meta expect? The comments on an early version of this idea, videos of dancer Charli D’Amelio posted to an AI account called Coco, are variations of “I don’t get it” and “this is scary”.

All new technology goes through a life cycle. AI is still in the research and development spending stage. If it reaches the next phase, we can expect mass deployment. Should that happen, the scale of AI’s impact is already being compared to the industrial revolution.

Our own reactions could come as a surprise to us. There was a mini-trend this summer in which people aped the mannerisms of computer-generated non-player characters, or NPCs, in their videos. The jerky movements and blank expressions of creators such as Pinkydoll and Nicole Hoff made them look like digital avatars.

Many of us will accept the help that AI offers. Some of us will reject it. Other attitudes may be more peculiar. Instead of trying to distinguish what is and is not real, we might instead opt to blur the lines.

[email protected]
