Attempts to create Artificial Intelligence (AI) have seen extraordinary progress in recent decades. However, the idea of intelligent machines has been a source of fascination, a quest and a challenge for thinkers and scientists throughout recorded history. The history of AI can be said to begin with Greek myths and philosophers.
Myths and tales from antiquity frequently feature characters that could be described as intelligent robots. For example, Talos was a bronze automaton that protected Crete from invaders. Other myths feature artificial beings, such as Galatea, a carved statue that came to life, and Pandora, a woman fashioned by the gods.
Philosophers in classical Greece began creating frameworks in an attempt to understand and systematize every aspect of life. Mechanical or ‘formal’ reasoning, without which modern machines and software would be impossible, has its roots in Aristotle’s analysis of the syllogism and Euclid’s model of deductive proof in The Elements.
In 1818, Mary Shelley published Frankenstein, the tale of Victor Frankenstein, a scientist who creates an artificial (biological) creature analogous to a human. The story explores the ethics of creating conscious beings.
In 1950, the English mathematician Alan Turing proposed what became known as the Turing test, a measure of whether a machine can exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human being. A human judge interacts with both a human and a machine through a text-only channel; if the judge cannot reliably tell the machine from the human, the machine has passed the test.
In 1997, IBM’s chess computer Deep Blue defeated the world champion Garry Kasparov in a six-game match. Kasparov is widely regarded as one of the greatest chess players of all time. Even though the contest took place within the confines of a chessboard, it represented a highly significant advance for Artificial Intelligence.
In the late 1990s, MIT’s AI Lab demonstrated an ‘Intelligent Room’ that could track occupants as they moved around it and use word spotting to gain some understanding of their speech.
Web crawlers, invented in the 1990s, are programs that browse the internet in a systematic fashion in order to index it. They are used extensively by web search engines such as Google and Bing to make the web searchable. Although crawlers follow links mechanically rather than exercising anything like intuition, turning the resulting index into something that yields intelligible search results was a substantial advance.
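The core of a crawler is a simple traversal: visit a page, record it, follow its links, and never visit the same page twice. The sketch below illustrates that logic in Python; a hypothetical in-memory site (the SITE dictionary) stands in for real HTTP fetching and HTML parsing, which a production crawler would need.

```python
from collections import deque

# Hypothetical in-memory "site": each page maps to the links it contains.
# A real crawler would fetch pages over HTTP and extract links from HTML.
SITE = {
    "/": ["/about", "/news"],
    "/about": ["/"],
    "/news": ["/news/ai", "/"],
    "/news/ai": [],
}

def crawl(start):
    """Breadth-first traversal: visit each page once, follow its links,
    and return the pages in the order they were indexed."""
    seen = {start}          # pages already discovered, to avoid revisiting
    queue = deque([start])  # frontier of pages waiting to be visited
    index = []              # pages in the order the crawler reached them
    while queue:
        page = queue.popleft()
        index.append(page)
        for link in SITE.get(page, []):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return index

print(crawl("/"))  # ['/', '/about', '/news', '/news/ai']
```

The `seen` set is what keeps the crawler from looping forever on cyclic links (note that "/about" links back to "/"); real crawlers add politeness delays, robots.txt handling, and duplicate-content detection on top of this skeleton.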
Watson on Jeopardy!
Watson is a program created by IBM that uses natural language processing and machine learning to answer questions posed in ordinary language, drawing on a vast store of documents. It is effective enough to answer trivia questions with a high degree of accuracy. In 2011, Watson defeated human champions on the well-known television quiz show Jeopardy!
In recent years, Apple’s Siri has become a prominent example of widely available AI that is capable of limited learning. Siri is a voice-controlled assistant on Apple smartphones (‘iPhones’) that users can interact with as if it were a human being.
Siri can understand many simple spoken commands, such as requests for directions, restaurant recommendations and queries about the weather. It remembers preferences and habits to improve future interactions. Since Siri has a limited capacity to learn and relies heavily on pre-programmed responses, it remains some way from ‘genuine’ artificial intelligence. However, because it gives the public a largely frictionless experience evoking a highly effective personal assistant, it can appear to be a major step in that direction.