Bengio, 55, and two other protagonists of that revolution won the highest honor in computer science, the ACM Turing Award, known as the Nobel Prize of computing. The other winners are Google researcher Geoff Hinton, 71, and NYU professor and Facebook chief AI scientist Yann LeCun, 58. We will have a look at their work in this article.

Yoshua Bengio

Yoshua Bengio is a world-renowned Canadian computer scientist, best known for his work on artificial neural networks and deep learning. Journalist Cade Metz has described Bengio, together with Geoffrey Hinton and Yann LeCun, as one of the three people most responsible for the advancement of deep learning during the 1990s and 2000s. Whereas the other two went to work for Google and Facebook respectively, Bengio has stayed in academia. According to MILA, among computer scientists with an h-index of at least 100, Bengio is the one gaining new citations at the fastest rate per day. In October 2016, Bengio co-founded Element AI, a Montreal-based business incubator that seeks to turn artificial intelligence (AI) research into real-world business applications. In May 2017, he announced that he was joining the Montreal-based legal tech startup Botler AI as a strategic adviser.

In the late 1980s, Canadian master’s student Yoshua Bengio became captivated by an unfashionable idea. A handful of artificial intelligence researchers were trying to craft software that loosely mimicked how networks of neurons process data in the brain, despite scant evidence it would work. “I fell in love with the idea that we could both understand the principles of how the brain works and also construct AI,” says Bengio, now a professor at the University of Montreal.

More than 20 years later, the tech industry fell in love with that idea too. Neural networks are behind the recent bloom of progress in AI that has enabled projects such as self-driving cars and phone bots practically indistinguishable from people.

Yann LeCun

Yann LeCun, VP & Chief Scientist at Facebook, made significant contributions to the understanding and development of convolutional neural networks, particularly in the field of image recognition.

He was born near Paris, France, in 1960. He received a Diplôme d’Ingénieur from the École Supérieure d’Ingénieurs en Électrotechnique et Électronique (ESIEE) in Paris in 1983, and a Ph.D. in Computer Science from Université Pierre et Marie Curie in 1987, during which he proposed an early form of the back-propagation learning algorithm for neural networks. He was a postdoctoral research associate in Geoffrey Hinton’s lab at the University of Toronto from 1987 to 1988. In 1988, he joined the Adaptive Systems Research Department at AT&T Bell Laboratories in Holmdel, New Jersey, headed by Lawrence D. Jackel, where he developed a number of new machine learning methods.

He spent much of the late ’80s and the ’90s at AT&T, first as a researcher and eventually as the head of its Image Processing Research Department, where he was one of the main creators of the image compression technology DjVu. He joined NYU as a Professor of Computer Science and Neural Science in 2003 and became the head of Facebook’s Artificial Intelligence laboratory in 2013.

Geoffrey Hinton

Hinton received his Ph.D. in Artificial Intelligence from the University of Edinburgh in 1978 and spent five years as a faculty member in Computer Science at Carnegie Mellon.

Hinton helped work out how to train so-called deep networks. In 1983, he co-invented the Boltzmann machine, one of the first neural networks to use statistical probabilities. The technology has since been improved and is used today by large technology companies such as Facebook and Amazon.

Hinton is one of the earliest researchers in the field of neural networks. While a professor at Carnegie Mellon University, he was, in 1985, among the first researchers to demonstrate the generalized back-propagation algorithm. But due to the lack of computational power at the time, not much could be achieved with the novel algorithm. It was only in 2012 that he used the same algorithm to train deep neural networks, setting a major milestone in image recognition.
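To make the idea concrete, here is a minimal sketch of back-propagation: a tiny two-layer sigmoid network trained by gradient descent on the XOR problem. The architecture, learning rate, and dataset are illustrative choices for this example, not Hinton's original setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the XOR problem, 4 examples with 2 features each.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights for a 2-4-1 network (illustrative sizes).
W1 = rng.normal(size=(2, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for step in range(5000):
    # Forward pass: compute activations layer by layer.
    h = sigmoid(X @ W1)        # hidden layer, shape (4, 4)
    out = sigmoid(h @ W2)      # output layer, shape (4, 1)
    loss = np.mean((out - y) ** 2)
    losses.append(loss)

    # Backward pass: propagate the error gradient from output to input.
    d_out = 2 * (out - y) / len(X) * out * (1 - out)  # dLoss/d(pre-activation)
    grad_W2 = h.T @ d_out
    d_h = (d_out @ W2.T) * h * (1 - h)                # chain rule through W2
    grad_W1 = X.T @ d_h

    # Gradient-descent update.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```

Each iteration is the same two-phase recipe popularized in the 1980s: a forward pass to compute the prediction, then a backward pass applying the chain rule to assign each weight its share of the error; the training loss in `losses` should shrink over the run.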

One crucial moment took place in 2012, when Hinton, then at the University of Toronto, and two grad students surprisingly won an annual contest for software that identifies objects in photos. Their triumph left the field’s favored methods in the dust, correctly sorting more than 100,000 photos into 1,000 categories within five guesses with 85 percent accuracy, more than 10 percentage points better than the runner-up. Google acquired a startup founded by the trio early in 2013, and Hinton has worked for the company ever since.

“You can look back on what happened and think science worked the way it’s meant to work,” Hinton says. That is, “until we could produce results that were clearly better than the current state of the art, people were very skeptical.”

Despite deep learning’s many practical successes, there’s still much it can’t do. Neural networks are brain-inspired but not much like the brain. The intelligence that deep learning gives computers can be exceptional at narrowly defined tasks—play this particular game, recognize these particular sounds—but isn’t adaptable and versatile like human intelligence.

Hinton and LeCun say they would like to end the dependence of today’s systems on explicit and extensive training by people. Deep learning projects depend on an abundant supply of data labeled to explain the task at hand—a major limitation in areas such as medicine. Bengio highlights how, despite successes such as better translation tools, the technology is not able to actually understand language.

None of the trio claims to know how to solve those challenges. They advise anyone hoping to make the next Turing-winning breakthrough in AI to emulate their own willingness to ignore mainstream ideas. “They should not follow the trend—which right now is deep learning,” Bengio says.

Despite optimism about the future of machine learning and neural networks, the scientists remain cautious about some of the practical applications the technology may bring, especially the development of weapons systems.

Le Hoang

