As the hype grows over generative artificial intelligence — emerging technology that can create text, images and code — many businesses are drawn to the potential for automating repetitive tasks and cutting costs as jobs are displaced.
Research suggests that the technology will indeed shake up the workplace. One study published in March found that the introduction of large language models such as ChatGPT could result in some 80 per cent of the US workforce having at least 10 per cent of their tasks affected by the technology. Nearly a fifth of workers could have at least 50 per cent of their tasks affected, and the impact is likely to be felt far beyond tech roles, in more language-intensive areas such as law, advertising and finance.
But some experts believe that the technology will not cause the jobs market to shrink but could instead be a boon for it — with new types of roles being created and the emergence of human-AI symbiosis.
According to Stanford University professor and AI specialist Erik Brynjolfsson, transformative technologies have in the past tended to widen income gaps between more and less experienced workers, or between those with and without a college education.
But this is not necessarily the case with generative AI. In a study conducted alongside other academics from the Stanford Digital Economy Lab, Brynjolfsson found that deploying generative AI in a customer service centre — to suggest the best answers that staff could give to customers — boosted employee productivity by 14 per cent. Less experienced or skilled staff benefited the most, with about a 30 per cent boost to productivity, he says, adding that this could typically be applied across many industries.
“Some of the most productive uses [of generative AI] are the ones that augment humans rather than imitate or replace [them],” says Brynjolfsson, who has co-founded a work management platform called Workhelix that reviews the tasks typically carried out in a company and flags opportunities for AI to help.
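The Stanford researchers have not published their system, but the augmentation pattern the study describes (a model drafts a reply, the human agent decides what is actually sent) can be sketched roughly in a few lines of Python. The function names and the stand-in model call below are illustrative assumptions, not the tool used in the research.

```python
# Illustrative sketch of a human-in-the-loop "suggested reply" workflow.
# call_llm is a stand-in for whatever language-model API a company uses;
# nothing here reproduces the system studied by the Stanford researchers.

def call_llm(prompt: str) -> str:
    """Placeholder for a real language-model call."""
    return "Thanks for getting in touch. Here is one way we could resolve this..."

def suggest_reply(customer_message: str, history: list[str]) -> str:
    """Draft a response for the agent to review, never to send automatically."""
    prompt = (
        "You are assisting a customer-support agent.\n"
        "Conversation so far:\n" + "\n".join(history) + "\n"
        f"Customer: {customer_message}\n"
        "Draft a helpful, polite reply for the agent to review:"
    )
    return call_llm(prompt)

def handle_ticket(customer_message: str, history: list[str]) -> str:
    draft = suggest_reply(customer_message, history)
    print("Suggested reply:\n" + draft)
    # The agent stays in the loop: they can accept, edit or discard the draft.
    edited = input("Press Enter to accept, or type a replacement: ")
    return edited or draft
```

The point is the division of labour rather than the model: the software proposes an answer, and the person remains responsible for what the customer receives.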
Research suggests that the way most managers justify return on investment in technologies such as AI is through the promise of a swift reduction in headcount. But Mohammad Hossein Jarrahi, associate professor at the University of North Carolina at Chapel Hill and an authority on AI, warns that the hunt for short-term efficiencies through task automation could in the longer term lead to unintended consequences, such as the loss of valuable expertise.
“Generative AI systems are a task centre,” he says. “For the most part this intelligence is very related to specific tasks . . . [But] machines cannot penetrate that tacit understanding of the organisation as a whole.”
By contrast, Jarrahi says, humans have “important competitive advantages”, including emotional and social intelligence, and strategic, holistic thinking based on years of internalised experience.
As a result, AI-human partnerships will be of prime importance, he says, as will our ability to collaborate with these machines. “The human experts have to stay in the loop,” he says. “Our roles are going to shift as partners to these systems. It’s not about human-to-human skills but co-operation skills between us and them.”
Nevertheless, as part of the shift, humans will need to acquire new skills to stay ahead, such as greater AI literacy. This might include a better grasp of large language models such as ChatGPT: understanding where the underlying data comes from and whether the algorithms contain biases or other flaws.
“That is going to require strong critical thinking skills,” says Ravit Dotan, an AI ethics adviser and researcher. “In generative AI, lots of social and cultural assumptions are being made and these can be invisible to those who are not culturally aware.” She emphasises that companies will need to appoint individuals who will take on the thorny ethical questions.
In some cases, new AI-specific roles will emerge to cater to the technology’s explosion in popularity. Already there is a growing market for the data-labellers or annotators who help to train the algorithms — for example, by clarifying what a particular object is. “Labelling can be a really important skill but it needs to be recognised and valued for what it is,” Dotan says. “It can be a very sensitive job [but] people look at this as cheap labour. It will often be outsourced to other countries.”
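For readers unfamiliar with the work, a single labelled example might look something like the record below. The field names are assumptions made for illustration, since annotation schemas differ from project to project.

```python
# Illustrative only: one labelled training example as an annotator might produce it.
# Field names are assumed for the sake of the sketch; real schemas vary widely.

labelled_example = {
    "image": "photos/street_0042.jpg",  # the raw input shown to the annotator
    "label": "bicycle",                 # the annotator's judgement of what the object is
    "annotator_id": "worker-17",        # recorded so label quality can be audited
}

# Thousands of such records, checked and aggregated, become the supervised
# training data from which a model learns to recognise the object itself.
dataset = [labelled_example]
print(len(dataset), "labelled example(s) ready for training")
```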
Another up-and-coming role in this arena is the “prompt engineer”: a software engineer whose job is to test and develop the text prompts that generate the most accurate and desirable responses from an AI application.
“It becomes increasingly important to think about the right questions to ask — so having an understanding of what the customer problems are and what you want to aim this powerful tool at is important,” says Brynjolfsson. “We need to be encouraging [employees] to lean in and think about how I can use this tool to be more creative.”
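What that testing can look like in practice is easy to sketch: several phrasings of the same request are run against a model and scored against an expected answer. The toy model, prompts and pass/fail rule below are hypothetical, intended only to show the shape of the job.

```python
# A rough sketch of prompt testing: try alternative phrasings, score the answers.
# The model stub, prompts and expected answer are made up for illustration.

def model(prompt: str) -> str:
    """Stand-in for a real language-model call."""
    return "Paris." if "capital" in prompt.lower() else "Could you clarify the question?"

candidate_prompts = [
    "France?",
    "What is the capital of France? Answer with the city name only.",
]
expected = "paris"

for prompt in candidate_prompts:
    answer = model(prompt)
    verdict = "pass" if expected in answer.lower() else "fail"
    print(f"{verdict}: {prompt!r} -> {answer!r}")
```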