My French has reached an unhappy level: not good enough to be able to watch French television without subtitles, good enough to be annoyed by bad translations. These errors stick in my mind long after I have finished watching. Some of the choices made in the subtitles of Amazon’s excellent Baron Noir, for example, are either the work of a very ancient computer or a very lazy human.
Then there are the translation choices I puzzle over, not because they are bad but because they are difficult. The brilliant comedy Call My Agent!, for example, features a moment in which Sofia, the agency’s receptionist, asks one of the agents if they should use the more informal “tu” pronoun: an exchange for which there is no literal translation. The translator opted for “shall we become friends?”, a vague and clunky line. But I’m not sure what, if anything, they could have done better. Faced with a sentence that has no English analogue, there are no good answers.
In the end, with translation sometimes you have to make an imperfect choice because no two languages are exactly alike. This is true even for the closest comparisons: just think about how the words “quite nice” carry very different meanings if the speaker is British or American, or which swear words are appropriate in a workplace conversation in Sydney as opposed to New York.
The question of what to do when you only have imperfect choices is a useful illustration of how we should think about AI and ethics. We don’t yet know how smart artificial intelligence will be or how effective large language models can become at replacing human labour in most fields. But in translation, we can say that machine intelligence already provides consistent, reliable work. We have already reached the point where we could, if we wished, do without many human translators.
Yet as distinctions large and small show, we shouldn’t want to, precisely because translation is always going to be an imperfect task involving difficult judgment calls. The problem is not that machines can’t make judgment calls — it’s that inevitably these judgment calls will have consequences and someone needs to be able to defend those consequences. A machine can take on any number of things, but the one thing it can never take is responsibility.
Marring my enjoyment of an otherwise flawless French political drama is one thing — but what about the translation of an important transaction? People deserve a reasonable expectation that someone will take responsibility for consequential decisions.
Just because a computer can do something is no reason for any organisation, be it a company or a state, to shrug its shoulders and say that it has outsourced a tricky decision to a machine. The difficult legwork, maybe: but the ultimate decisions about what words to use and what choices to make, whether those decisions are large or small, should always rest with a human being. It is corrosive to accountability, and to people’s control over their own lives, if there is no one they can complain to or seek redress from.
We see a more visceral example of this playing out on X right now, where the website’s AI chatbot, Grok, has agreed to user requests to produce images of real people, including children, stripped of their clothing. But it isn’t “Grok” who is degrading humans in this manner. It is individual people who are choosing to make the request and an individual platform that is choosing to host the chatbot and the images. These are decisions being made by individuals and they should be considered as such.
We can reasonably expect X to prevent Grok from doing this and we can expect the state to step in, as the British government is doing, if it does not. We shouldn’t expect ChatGPT or any translation software to stop a translator from using it: the responsibility for a bad translation still ultimately rests on the person who signs off the finished work.
That is true whether the decisions are small — how to express sentiments that exist in one language in another that has no exact translation — or enormous and illegal, like producing sexualised images of another person without their consent. Machines cannot claim moral agency. Even if a human’s role is just to come in at the end, tidy up the more difficult bits or adjudicate on properly knotty problems, they must accept responsibility for the outcome. Instead of being fobbed off with an uncertain answer and pointed towards a machine, we should all have the reassurance of knowing that in the end, we can seek restitution from, or express gratitude to, a real person.