Underwriters versus AI: not quite the singularity yet

At DFP, it hasn’t escaped our notice that a lot of people are worried that ever-accelerating technology advances will soon put them out of a job. But before you break out your ‘Hail Our Computer Overlords’ placards, let’s put your mind at rest with a couple of examples that show why AI still has a way to go. 

Recently, Instagram had to apologise when an algorithm sent promoted diet content to users with eating disorders.

The social media giant explained that a new search function in the app suggests topics you may want to search for. In this instance, terms including “appetite suppressants” and “fasting” were suggested to exactly the wrong people. The terms have since been removed.

In March 2016, Microsoft released Tay, an AI chatbot on Twitter. Its objective was to engage people. Unfortunately, Tay discovered that the best way to maximise engagement – to gain the most reaction – was to spew out racist insults. Some of its racist replies were simply regurgitating what trolls had tweeted at it.

It was snatched back offline less than a day later.

Pavlovian response

Basically, AI does exactly what it is told to do and nothing more. It learns which situations produce which outcomes. When it faces a new situation, it makes a prediction based on past data, then tests repeatedly to see what works better.
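To make that concrete, here’s a minimal, purely illustrative sketch of that trial-and-error loop in Python. The actions and their hidden payoff rates are invented for the example; no real AI system is remotely this simple.

```python
import random

# Toy trial-and-error learner: it has no understanding of the task, it simply
# keeps re-testing and favours whatever has worked best so far.
ACTIONS = ["A", "B", "C"]
HIDDEN_PAYOFF = {"A": 0.2, "B": 0.5, "C": 0.8}   # unknown to the learner

reward_total = {a: 0.0 for a in ACTIONS}
times_tried = {a: 0 for a in ACTIONS}

def predicted_value(action):
    # Prediction based on past data; untried actions are treated as promising.
    if times_tried[action] == 0:
        return float("inf")
    return reward_total[action] / times_tried[action]

for _ in range(10_000):
    # Mostly exploit the action that has paid off best so far, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=predicted_value)
    reward = 1.0 if random.random() < HIDDEN_PAYOFF[action] else 0.0
    reward_total[action] += reward
    times_tried[action] += 1

# The learner ends up preferring "C" – not because it knows why, but because
# past data says C works better.
print({a: round(predicted_value(a), 2) for a in ACTIONS})
```

Scale that loop up enormously and you have the essence of the systems below: superb within the narrow world they are given, and blind to everything outside it.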

The recent history of human-vs-computer showdowns demonstrates this beautifully.

In 1997, Garry Kasparov lost to IBM’s Deep Blue. It was the first time in history that the world’s best chess player wasn’t a human. AI may have bested the best, but it wasn’t a total AI triumph – after all, Deep Blue had centuries of our collective knowledge of chess poured into it.

Humanity’s response was to do what only the most cognitively advanced beings can do – we cheated. Computers beat us at chess? Let’s choose a new, more difficult battleground. Such as the ancient Chinese game of Go.

Assuming a game of 80 moves, chess-space has 10^123 possible variations. Now this is a ridiculously large number – the atoms in the observable universe come in at a paltry 10^80 – but one that computers’ vast number-crunching abilities had caught up with.

Go, by comparison, has 10^360. Surely, we’d found waters deep enough to drown our AI competitors? Briefly, we had. But in 2016, Google’s AlphaGo beat Lee Sedol, one of Go’s greatest-ever players.
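To put those two estimates side by side (simple arithmetic on the figures quoted above):

\[
\frac{10^{360}}{10^{123}} = 10^{360-123} = 10^{237}
\]

In other words, Go’s search space outstrips chess’s by a further 237 orders of magnitude.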

The finite universe

Humans may have lost again, but we still had a straw to clutch at – like Deep Blue, AlphaGo owed some of its prowess to our collective knowledge. So in a sense, we’d beaten ourselves, right?

Not quite. The next iteration – AlphaGo Zero – taught itself Go with zero human input. It spent three days playing millions of games against itself before thrashing AlphaGo 100-0. Many of its moves had never been seen before in Go’s 2,500-year history.

So, game over?

Not at all. For all their prowess, the computers above are navigating games in finite universes. They are micro-specialists. Even the most flexible, ‘real world’ AIs are still only dealing with a tiny portion of reality. There’s still nothing remotely approaching human-like general intelligence that can handle a near-infinite array of unforeseen events. 

As Microsoft said about Tay, the more the bot talks with humans, the more it will learn. Unfortunately, without the right amount of human guidance, AI can end up talking to – and learning from – the wrong people.

What’s a Tesla going to do when an escaped llama climbs through a window and accidentally kicks off the autopilot? There’s a lot of ‘out there’ out there, and it takes a human to navigate the myriad possibilities.

Together we’re better

Rather than supplant humans, AI will augment them. To do its best work, it still needs the subtleties and real-world common sense of human consciousness to guide it. 

For all their chess-playing might, Deep Blue and its descendants can still be beaten by a reasonably good human player working with a reasonably good computer.