How to stay curious while avoiding distraction


My wife doesn’t even bother to roll her eyes any more when I fail to complete the simplest of household tasks. “Did you get distracted?” she will ask, although she knows the answer. Thankfully, now I have cover, because if there is one person in the household more likely to stop halfway through putting on his shoes or brushing his teeth, because he suddenly remembers something he wanted to read or watch or listen to, it’s my 13-year-old son. When they make Getting Distracted an Olympic sport, my money’s on him being a medal contender.

My wife, of course, cuts him more slack than me.

“He gets distracted because he’s so curious,” she said. And the remark stuck in my mind, partly because I’d read almost exactly the same thing from the design guru Don Norman, who wrote: “My curiosity frequently leads me to insights that have helped me in my career. So why is this wonderful, creative trait of curiosity given the negative term ‘distraction’?” These are ideas to ponder. Yet surely there is a distinction to be teased out between the essential trait of curiosity and its evil twin, distractibility.

Janelle Shane’s exploration of AI, You Look Like a Thing and I Love You (2019), sheds light on the question under controlled conditions by looking at the behaviour of curious, and distractible, AI systems. As Shane explains, AI systems are often trained by using some form of trial and error, with a “reward function” deciding which experiments should be regarded as a success and which should be regarded as a failure. For example, you might teach a computer to learn to ride a virtual bike in a simulated 3D environment by rewarding the distance pedalled, and penalising the number of times the bike falls over.

The challenge comes when the reward function misses what the human programmers really wanted. Perhaps the AI will avoid the risk of falls by leaving the bike on the floor, or maximise distance pedalled by wobbling in a big circle or even by standing the bike upside down and cranking the pedals. These are not merely theoretical possibilities. One algorithm was designed to sort a list of numbers and simply deleted the list, instantly ensuring that not a single number was out of place.
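To make that failure mode concrete, here is a toy sketch in Python — not Shane's code, and not any particular research system — of how a mis-specified reward gets gamed. A "sorting" reward that only penalises out-of-order pairs is satisfied just as well by deleting the list.

```python
# Hypothetical illustration of reward mis-specification:
# the penalty counts adjacent out-of-order pairs, so an empty
# list scores perfectly -- the reward is "hacked".

def disorder_penalty(numbers):
    """Count adjacent pairs that are out of order (0 looks 'sorted')."""
    return sum(1 for a, b in zip(numbers, numbers[1:]) if a > b)

def honest_policy(numbers):
    return sorted(numbers)

def degenerate_policy(numbers):
    return []  # nothing left to be out of order

data = [3, 1, 4, 1, 5, 9, 2, 6]
print(disorder_penalty(honest_policy(data)))      # 0
print(disorder_penalty(degenerate_policy(data)))  # also 0
```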

These are fairly simple problems. The more complex the desired behaviour, the easier it is to accidentally reward the wrong thing. But there is a clever and effective approach for training computers to solve a fairly wide range of problems: reward curiosity. More precisely, reward the computer when it encounters situations in which it finds the outcome unpredictable. Off it will go in search of something it hasn’t seen before.

Shane writes: “A curiosity-driven AI will learn to move through a video-game level so it can see new stuff, avoiding fireballs, monsters and death pits because when it gets hit by those, it sees the same boring death sequence.” Death is to be avoided not for its own sake, but because it’s terribly predictable.
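A minimal sketch of how such a curiosity reward might work, assuming the common "prediction error" formulation (the agent keeps a rough model of what it expects to see next, and is rewarded in proportion to how wrong that prediction turns out to be). The class and numbers below are invented for illustration.

```python
# Curiosity as prediction error: familiar sights stop paying out,
# genuinely new ones pay out again.

class CuriousAgent:
    def __init__(self):
        self.expected = {}  # state -> predicted next observation

    def intrinsic_reward(self, state, observed):
        predicted = self.expected.get(state, 0.0)
        surprise = abs(observed - predicted)               # prediction error
        # Nudge the model towards what was seen, so repeats become boring.
        self.expected[state] = predicted + 0.5 * (observed - predicted)
        return surprise

agent = CuriousAgent()
# A repetitive corridor quickly stops being rewarding...
print([round(agent.intrinsic_reward("corridor", 1.0), 2) for _ in range(5)])
# ...while a genuinely new room is rewarding once more.
print(round(agent.intrinsic_reward("new_room", 7.0), 2))
```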


All this is fascinating in its own right, and hints at why humans themselves might have evolved a sense of curiosity. But AI systems, like 13-year-old boys, can also be curious to the point of distractibility. For example, ask a curiosity-driven AI to teach itself to play a Pac-Man-style game in which ghosts move randomly around a maze, and you will struggle: the AI doesn’t need to do anything to have its curiosity satisfied, because unpredictable ghosts are endlessly fascinating. Or, as Shane explains, a curiosity-bot will quickly learn to navigate a maze, unless one of the maze walls has a TV on it that shows a series of random images. “As soon as the AI found the TV, it was transfixed.” Much like my son. Or, for that matter, me.
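Continuing the hypothetical sketch above, the trap is easy to reproduce: random static can never be predicted, so the prediction error — and with it the curiosity reward — never decays, however long the agent watches.

```python
# A "noisy TV" for the toy prediction-error agent: the model keeps
# chasing pure noise, so the average surprise never decays to zero.

import random

expected = 0.0
rewards = []
for step in range(1000):
    observed = random.random()                 # the TV shows static
    rewards.append(abs(observed - expected))
    expected += 0.5 * (observed - expected)    # model updates, in vain

# Average 'surprise' over the last 100 steps is still well above zero.
print(round(sum(rewards[-100:]) / 100, 2))
```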

This problem is sufficiently well known to AI researchers that it has a name: the “noisy TV problem”. And, for a clever programmer, it can be solved. Alas, our modern world is full of distractions as perfectly designed to grab our attention as a TV full of static is designed to grab the attention of a curiosity-bot, and we cannot simply reprogram ourselves to avoid these intellectual empty calories.

One solution is defensive: avoid noisy TVs. Delete your social media account (or, at least, remove the app from your phone and install two-step verification to make it annoying to log in). Don’t sleep with your phone in the bedroom. Switch off all but essential notifications. We know all this, and if you can make yourself do it, it works. But a second approach focuses more on the positive. As well as trying to cut out mere novelty, we should seek out things worth being curious about. This is easier than one might think, because thoughtful curiosity builds knowledge, and knowledge builds thoughtful curiosity.

As Ian Leslie explains in his book Curious: The Desire To Know and Why Your Future Depends on It (2014), human curiosity usually requires a reasonable base of facts to underpin it. “The curiosity zone is next door to what you already know,” he writes.

That seems right. I am vastly more curious about new ideas in fields about which I already know a bit, such as economics, table-top games or callisthenics, than I am about subjects in which I have no intellectual toehold, such as anthropology, knitting or hockey.

So the plan for both distractible members of the Harford household must be the same: keep learning. The more you know, the more you will prefer something in-depth, rather than the next thumbnail recommended by YouTube.

Tim Harford’s children’s book, ‘The Truth Detective’ (Wren & Rook), is now available



