Deep learning is a branch of machine learning that uses algorithms to, for example, recognize objects and human speech. It is based on the concept of artificial neural networks, defined by Dr. Robert Hecht-Nielsen, the inventor of one of the first neurocomputers, as a “computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external input” – or, in other words, computer systems that imitate the way the human brain functions.
The history of deep learning starts in 1943, when Warren McCulloch and Walter Pitts created a computational model for neural networks called threshold logic, based on mathematics and designed to mimic the way a neuron was thought to work. Their paper, “A Logical Calculus of the Ideas Immanent in Nervous Activity,” is the first written source to introduce the neural network concept.
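The threshold logic idea can be illustrated with a short sketch. This is a common textbook rendering of a McCulloch-Pitts unit, not the notation of the 1943 paper itself: the neuron outputs 1 when the sum of its binary inputs reaches a fixed threshold, and 0 otherwise.

```python
def mcp_neuron(inputs, threshold):
    """McCulloch-Pitts threshold unit: fire (1) if enough inputs are active."""
    return 1 if sum(inputs) >= threshold else 0

# With two binary inputs, a threshold of 2 realizes logical AND
# (both inputs must be active), while a threshold of 1 realizes
# logical OR (any active input suffices).
logical_and = [mcp_neuron([a, b], threshold=2) for a in (0, 1) for b in (0, 1)]
logical_or = [mcp_neuron([a, b], threshold=1) for a in (0, 1) for b in (0, 1)]
```

Despite its simplicity, networks of such units can compute any Boolean function, which is what made the model a plausible abstraction of neural computation.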
A significant advancement came in 1980, when Kunihiko Fukushima developed the Neocognitron, which used a hierarchical, multilayered design allowing computers to “learn” to recognize handwriting. It later inspired convolutional neural networks, which proved successful at identifying objects and powering vision in robots.
In 1985, Terry Sejnowski, a computational neuroscientist, created NETtalk, a program that learned to pronounce English words in much the same way a child does.
Improvements in shape recognition and word prediction followed the development of backpropagation and the introduction of Long Short-Term Memory (LSTM) networks in 1997.
The term deep learning began to gain popularity in the mid-2000s, after a paper by Geoffrey Hinton and Ruslan Salakhutdinov showed how a many-layered neural network could be pre-trained one layer at a time. In 2009, Fei-Fei Li, an AI professor at Stanford, launched ImageNet, a large free database of millions of labeled images needed to train neural networks, stating: “Big Data would change the way machine learning works. Data drives learning.”
Three years later, the Cat Experiment was performed: 10 million unlabeled images, randomly taken from YouTube, were presented to a neural network spread over thousands of computers, which by the end of the learning session had taught itself to recognize and identify cats. While the success rate was only around 15%, the experiment turned the term deep learning into a buzzword.
Two years later, the DeepFace neural network demonstrated the ability to identify faces with 97.35% accuracy, and research in deep learning accelerated rapidly, becoming a growing trend in the business world along with artificial intelligence.
According to Stratistics MRC, the global deep learning market is expected to reach USD 72 billion by 2023, with its main applications in data mining, image and speech recognition, video diagnostics, chatbots, and autonomous driving.