Over the past few years, the term “deep learning” has firmly worked its way into business language when the conversation is about Artificial Intelligence (AI), Big Data and analytics. And with good reason – it is an approach to AI which is showing great promise when it comes to developing the autonomous, self-teaching systems which are revolutionizing many industries.
Deep Learning is used by Google in its voice and image recognition algorithms, by Netflix and Amazon to decide what you want to watch or buy next, and by researchers at MIT to predict the future. The ever-growing industry which has established itself to sell these tools is always keen to talk about how revolutionary this all is. But what exactly is it? And is it just another fad being used to push “old fashioned” AI on us, under a sexy new label?
In my last article I wrote about the difference between AI and Machine Learning (ML). While ML is often described as a sub-discipline of AI, it’s better to think of it as the current state-of-the-art – it’s the field of AI which today is showing the most promise at providing tools that industry and society can use to drive change.
In turn, it’s probably most helpful to think of Deep Learning as the cutting-edge of the cutting-edge. ML takes some of the core ideas of AI and focuses them on solving real-world problems with neural networks designed to mimic our own decision-making. Deep Learning focuses even more narrowly on a subset of ML tools and techniques, and applies them to solving just about any problem which requires “thought” – human or artificial.
How does it work?
Essentially, Deep Learning involves feeding a computer system a lot of data, which it can use to make decisions about other data. This data is fed through neural networks, as is the case in machine learning. These networks are logical constructions which ask a series of binary true/false questions of (or extract a numerical value from) every bit of data that passes through them, and classify each item according to the answers received.
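To make that concrete, here is a minimal sketch of such a network in plain Python. Everything in it – the layer sizes, the weights, the use of a sigmoid "squashing" function – is illustrative, not a description of any real production system: each neuron asks one weighted "question" of its inputs, and the answers flow forward layer by layer until a final true/false classification comes out.

```python
import math

def neuron(inputs, weights, bias):
    """One 'question' in the network: a weighted sum of the inputs,
    squashed to a value between 0 and 1 by a sigmoid function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

def classify(inputs, layers):
    """Feed data through successive layers; each layer's answers
    become the inputs to the next layer's questions."""
    activations = inputs
    for layer in layers:
        activations = [neuron(activations, w, b) for w, b in layer]
    return activations

# Hypothetical 2-input network: one hidden layer of two neurons,
# then a single output neuron. The weights are made up for illustration.
layers = [
    [([2.0, -1.0], 0.0), ([-1.0, 2.0], 0.0)],  # hidden layer
    [([1.0, 1.0], -1.0)],                       # output neuron
]
score = classify([1.0, 0.0], layers)[0]  # a probability-like value in (0, 1)
label = score > 0.5                      # the final true/false answer
```

A "deep" network is simply this idea with many more layers and many more neurons per layer, so the questions asked by later layers can build on increasingly abstract answers from earlier ones.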
Because work in Deep Learning is focused on developing these networks, they become what are known as Deep Neural Networks – logic networks with the complexity needed to classify datasets as large as, say, Google’s image library, or Twitter’s firehose of tweets.
With datasets as comprehensive as these, and logical networks sophisticated enough to handle their classification, it becomes trivial for a computer to take an image and state with a high probability of accuracy what it represents to humans.
Pictures present a great example of how this works, because they contain a lot of different elements and it isn’t easy for us to grasp how a computer, with its one-track, calculation-focused mind, can learn to interpret them in the same way as us. But Deep Learning can be applied to any form of data – machine signals, audio, video, speech, written words – to produce conclusions that seem as if they have been arrived at by humans – very, very fast ones. Let’s look at a practical example.
Take a system designed to automatically record and report how many vehicles of a particular make and model passed along a public road. First, it would be given access to a huge database of car types, including their shape, size and even engine sound. This could be manually compiled or, in more advanced use cases, automatically gathered by the system if it is programmed to search the internet and ingest the data it finds there.
Next it would take the data that needs to be processed – real-world data which contains the insights, in this case captured by roadside cameras and microphones. By comparing the data from its sensors with the data it has “learned”, it can classify, with a certain probability of accuracy, passing vehicles by their make and model.
So far this is all relatively straightforward. Where the “deep” part comes in is that the system, as time goes on and it gains more experience, can increase its probability of a correct classification by “training” itself on the new data it receives. In other words, it can learn from its mistakes – just like us. For example, it may incorrectly decide that a particular vehicle was a certain make and model, based on similar size and engine noise, overlooking another differentiator which it had judged to have a low probability of being important to the decision. By learning that this differentiator is, in fact, vital to telling two vehicles apart, it improves the probability of a correct outcome next time.
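The vehicle example above can be sketched with a simple perceptron-style learning rule – a deliberately simplified stand-in for the training used in real deep networks. The feature names, weights and examples below are entirely hypothetical: two lookalike models share size and engine noise, and only a third feature (say, grille shape) separates them. The system starts out giving that feature zero weight, and each mistake nudges the weights until the overlooked differentiator becomes decisive.

```python
def predict(features, weights):
    """Classify a vehicle: is the weighted evidence for 'model A' positive?"""
    return sum(f * w for f, w in zip(features, weights)) > 0

def train(examples, weights, lr=0.1, epochs=20):
    """Perceptron-style updates: every misclassification shifts weight
    toward the features that would have given the right answer, so a
    feature the system initially ignored can grow in importance."""
    for _ in range(epochs):
        for features, label in examples:
            error = label - predict(features, weights)  # -1, 0 or +1
            if error:
                weights = [w + lr * error * f
                           for w, f in zip(weights, features)]
    return weights

# Hypothetical features: (size, engine_noise, grille_shape) -> is it model A?
examples = [
    ((1.0, 1.0,  1.0), 1),  # model A
    ((1.0, 1.0, -1.0), 0),  # lookalike model B: same size and noise,
]                           # different grille

weights = [0.5, 0.5, 0.0]   # grille shape initially judged unimportant
weights = train(examples, weights)
```

After training, the weight on the differentiating feature has grown while the shared features matter less, so the two lookalikes are now classified correctly – the toy version of "learning from its mistakes".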
So what can Deep Learning do?
Probably the best way to finish this article and give some insight into why this is all so ground breaking is to give some more examples of how Deep Learning is being used today. Some impressive applications which are either deployed or being worked on right now include:
Navigation of self-driving cars – Using sensors and onboard analytics, cars are learning to recognize obstacles and react to them appropriately using Deep Learning.
Recoloring black and white images – by teaching computers to recognize objects and learn what they should look like to humans, color can be returned to black and white pictures and video.
Predicting the outcome of legal proceedings – A system developed by a team of British and American researchers was recently shown to be able to correctly predict a court’s decision when fed the basic facts of the case.
Precision medicine – Deep Learning techniques are being used to develop medicines genetically tailored to an individual’s genome.
Automated analysis and reporting – Systems can analyze data and report insights from it in natural sounding, human language, accompanied with infographics which we can easily digest.
Game playing – Deep Learning systems have been taught to play (and win) games such as the board game Go, and the Atari video game Breakout.
It is easy to get carried away with the hype and hyperbole which is often used when these cutting-edge technologies are discussed (and particularly, sold). But in truth, it’s often deserved. It isn’t uncommon to hear data scientists say they have tools and technology available to them which they did not expect to see this soon – and much of it is thanks to the advances that Machine Learning and Deep Learning have made possible.
Bernard Marr is an internationally best-selling author, popular keynote speaker, futurist, and a strategic business & technology advisor to governments and companies. He helps organisations improve their business performance, use data more intelligently, and understand the implications of new technologies such as artificial intelligence, big data, blockchains, and the Internet of Things.