Artificial Intelligence

Working for a company that employs Artificial Intelligence and data science techniques in everything we do, I find it impossible to avoid buzzwords. In meetings, sentences are often laden with terminology that is assumed to be understood, but never explicitly explained. After saying AI, Data Science, Machine Learning and NLP (Natural Language Processing) in one breath (with a straight face), I thought it was time to check my understanding of these terms. That led me down a rabbit-hole, sense-checking everything I thought I knew about computers and computer science. Here’s what I found.

Where it all started…

Computing has fundamentally transformed every aspect of our lives, far beyond the devices we interact with daily. A future without computing is impossible to imagine, and this isn’t just about smartphones and connected devices. If computers disappeared, the global economy and the very foundations of our daily lives would collapse. We know this, but we rarely think about it – and with the clarity of hindsight, it’s likely we are living through a period that will come to be known as ‘the electronic age’. Technology is growing at an exponential pace that was unfathomable only a decade ago – our greatest invention is still in its infancy.

To try and understand the lofty, but as yet unrealised, ideals of true, autonomous artificial intelligence – a world in which computers have emotional intelligence and the ability to make autonomous decisions – it makes sense to understand the existing rate of change. In 1819, Charles Babbage began work on the ‘Difference Engine’, an automatic mechanical calculator designed to tabulate polynomial functions. The design called for some 25,000 components and a machine weighing around 15 tonnes; after two decades of work it was abandoned, unfinished. But the lessons from it inspired an imagined successor, the ‘Analytical Engine’ – a general-purpose, programmable machine. Ada Lovelace wrote hypothetical programs for this machine, and she’s widely credited as the world’s first computer programmer as a result. Charles Babbage’s role in laying the foundations for the exponential growth that carried us from the first home computers to artificial intelligence and machine learning can’t be overstated. I’ll be first in the queue to see the 15th Terminator sequel, in which a CGI Arnie returns to the 18th century to murder Charles Babbage.

Two external factors drove innovation and capability in computing – the advent of electronic components to replace mechanical and electromechanical parts, and war. When the Harvard Mark 1 (a general-purpose electromechanical computer used in the war effort during the final part of World War II) was completed by IBM in 1944, its scale was unprecedented – it contained five hundred miles of wire connections. The Harvard Mark 1 was power-hungry and unreliable, and represented a huge daily maintenance challenge. All that hardware generated a great deal of heat, which made the machine attractive to insects – particularly moths, drawn to the warm relays. When a calculation went wrong on the Mark II, its successor, Grace Hopper (one of the first programmers of the Mark 1) and her team traced the fault to a moth trapped in a relay, and taped it into the logbook as the ‘first actual case of bug being found’. From then on, when anything went wrong with a computer, it was said to have ‘bugs in it’.

Back then, computers were large, inaccessible and non-portable because their switching components – relays, vacuum tubes and, later, early transistors – were centimetres across. Today’s computers and smartphones use transistors that are smaller than 50 nanometres. To put that into perspective, a single sheet of A4 paper is roughly 100,000 nanometres thick. The development of these transistors was a focal point for innovation in the Santa Clara Valley, California, in the 60s and 70s. The innovators there discovered that silicon was an excellent transistor material, and to this day the re-monikered Silicon Valley remains a bastion of innovation.

A lot of the buzz in Silicon Valley at the moment centres on the development of Artificial Intelligence. AI is surrounded by a quiet mania – it is credited with driving ‘the fourth industrial revolution’ and, according to one study reported by Forbes, could be responsible for 800 million job losses(!). This mania means it can be hard to cut through the hyperbole and get to the facts. The mania isn’t new – Charles Babbage hinted at displacement: “At each increase of knowledge, as well as on the contrivance of every new tool, human labour becomes abridged.” (Note abridged, not replaced, which is a whole other story.)

AI in Insurance

Artificial Intelligence, however, isn’t ‘new’. In fact, it has been used in Insurance since the 1980s in case-based reasoning and rules engines, deployed for underwriting and claims fraud detection. It has been developing rapidly over the last five years due to the convergence of massive computing power and the proliferation of data. AI is really a catch-all term for a host of technologies. There are two broad types – Classic AI and Deep AI. So, what are the use cases typically found in Insurance?

Classic AI: Based on Human Experience. Inputs are human knowledge and expertise and responses are programmed:

  • Rules Engines: Software that applies pre-defined rules, encoding human expertise, to manipulate, store or apply data – an ‘if this, then that’ methodology (a minimal sketch follows this list).
  • Data Mining: A field of computer science that extracts insights from very large sets of data.
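To make the ‘if this, then that’ idea concrete, here’s a minimal sketch of a rules engine in Python. The claim fields, rule names and thresholds are hypothetical, invented purely for illustration – they aren’t taken from any real underwriting or claims system.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Claim:
    # Hypothetical claim attributes, used purely for illustration.
    amount: float
    prior_claims: int
    days_since_policy_start: int

@dataclass
class Rule:
    name: str
    condition: Callable[[Claim], bool]  # "if this..."
    action: str                         # "...then that"

# Hand-written rules encode human expertise; nothing here is learned from data.
RULES: List[Rule] = [
    Rule("high_value", lambda c: c.amount > 10_000, "refer to senior handler"),
    Rule("early_claim", lambda c: c.days_since_policy_start < 30, "flag for fraud review"),
    Rule("repeat_claimant", lambda c: c.prior_claims >= 3, "request extra documentation"),
]

def evaluate(claim: Claim) -> List[str]:
    """Return the actions triggered by every rule whose condition matches."""
    return [rule.action for rule in RULES if rule.condition(claim)]

if __name__ == "__main__":
    claim = Claim(amount=12_500, prior_claims=1, days_since_policy_start=12)
    print(evaluate(claim))  # ['refer to senior handler', 'flag for fraud review']
```

Everything the system ‘knows’ is written down in advance by people – change the business logic and someone has to edit the rules.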

Deep AI: Learns on its own. Inputs are data and guiding algorithms. System learns, identifies patterns and formulates responses:

  • Natural Language Processing: A sub-field of computer science used to program computers to analyse large amounts of text and extract meaning from it. NLP can also be used to understand the spoken word, and to generate text.
  • Machine Learning: A sub-field of computer science that enables computers to learn how to do a task without being explicitly programmed to do so. In the same way your brain uses experience to learn how to perform a task, so can computers. Machines can ingest massive amounts of information and learn from it iteratively (a minimal sketch follows this list).
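By way of contrast with the rules engine above, here’s a minimal sketch of the ‘learns from data’ idea, using scikit-learn (assumed to be installed). It trains a tiny text classifier on a handful of invented claim notes – the sentences, labels and library choice are illustrative assumptions, not real data or a production model, and it uses a simple statistical model rather than a deep neural network – but the principle is the same: no rules are written by hand.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples: free-text claim notes and an illustrative label.
notes = [
    "rear-end collision at low speed, police report attached",
    "phone stolen on holiday, no receipt available",
    "third claim this year for the same water damage",
    "laptop lost within a week of taking out the policy",
]
labels = ["straight_through", "straight_through", "review", "review"]

# The pipeline converts text into numeric features (TF-IDF) and fits a classifier.
# Nobody writes explicit rules; the model infers patterns from the labelled examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, labels)

# Ask the trained model to label a note it has never seen before.
print(model.predict(["another water damage claim, the second this month"]))
```

In practice the training set would run to millions of examples, but the division of labour is the point: people supply data and a guiding algorithm, and the system works out the patterns itself.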

There’s so much here, it’s easy to fall victim to the hyperbole and hype. It’s clear that computers and computer science are advancing at a pace that’s hard to predict. It’s also clear that we’re already working WITH Artificial Intelligence, not against it, and more often than not it’s augmenting how we work, not replacing us. As far back as 1978, Prime Minister James Callaghan enlisted a think tank to tackle automation (from computer chips) at a time when the White House’s first computer ran on 16KB of memory. The reality, then, is somewhere in between: AI is already being used and implemented at a rapid pace, and the future pace is impossible to predict – but the AI apocalypse is currently a hype train we’d all do well to jump off.

Written by Ben Abbott
