While the study of intelligence is one of the oldest disciplines, Artificial Intelligence is one of the newest, founded as an academic field in 1956.
Thinking machines came onto the scene in the early 1950s, and in 1955 John McCarthy, an Assistant Professor of Mathematics at Dartmouth College, organised a brainstorming summer workshop, coining the name of the new field of study and stating its aims: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves”.
Many of the attendees became future leaders of AI research and were granted substantial funds to realise their vision of a machine as intelligent as a human being.
The initial optimism soon gave way to disappointment and loss of funding, most notably the reduction of U.S. Government funding for Machine Translation after the 1966 Automatic Language Processing Advisory Committee report concluded that machine translation was expensive and unlikely to reach the quality of a human translator.
Concerns about automatic language translation systems were further discussed at the NATO Summer School, held that same year in Venice, where Bar-Hillel, one of the first academics in the field of Machine Translation, expressed his scepticism: “Though computers have been programmed to do certain things… it would be disastrous to extrapolate from these primitive exhibitions of artificial intelligence to something like translation.”
In the following decades, Artificial Intelligence, largely viewed as synonymous with ‘false promises,’ experienced what came to be known as the ‘AI Winter’. Optimism has gradually returned over the last decade, and today AI and machine learning form the core of the fourth industrial revolution.
Once criticised, Artificial Intelligence is nowadays widely considered the technology with the greatest potential to revolutionise nearly every industry in the next decade and to generate global economic growth of over $15 trillion by 2030.
Machines are becoming smarter and more capable. Three years after the chatbot Eugene Goostman reportedly became the first computer to pass the Turing test, convincing one third of the judges at the Royal Society in London that it was human, some experts believe that machines will reach human-level intelligence within the next couple of decades. Further predictions see a world where deep learning and AI will enable robots to perform most basic human daily tasks within the next 10 years.
Since the key is not to develop a technology that can replace human intelligence but one that can effectively solve problems, a typical Hollywood AI-takeover scenario seems far from plausible. Yet it has its advocates: Stephen Hawking, for example, argued that artificial intelligence will be either the best or the worst thing for humanity and that the development of full artificial intelligence could pose a threat to the existence of the human race.