Predicting & discovering linguistic structure with Neural Networks


In natural language processing, recent research has shown that deep neural network models are brittle when confronted with linguistically motivated adversarial inputs. To build NLP systems that generalize and work well in practice, it is important to investigate how well current models capture linguistic phenomena.

In this document, the author presents his latest work on answering the following two questions:

  • Which type of architecture is better at capturing hierarchical structure implicitly? We show empirically that recurrence is important for modeling hierarchical structure (see the first sketch after this list).
  • What syntax does a neural machine translation (NMT) model need to maximize its performance? We introduce a model that simultaneously translates while inducing dependency trees, and we show that the induced trees (1) depend on the language pair and (2) improve translation quality (see the second sketch after this list).
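
How an architecture's inductive bias can be probed for hierarchy is easiest to see in code. Below is a minimal sketch, not the author's implementation: a recurrent (LSTM) classifier for a hierarchy-sensitive task such as predicting whether an upcoming verb is singular or plural from a sentence prefix. PyTorch, the task framing, the class name, and all sizes are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class RecurrentAgreementProbe(nn.Module):
    """Reads a sentence prefix and predicts the number of the upcoming verb."""
    def __init__(self, vocab_size=10000, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 2)  # logits over {singular, plural}

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) word indices of the sentence prefix
        emb = self.embed(token_ids)
        _, (h_n, _) = self.lstm(emb)  # final hidden state summarizes the prefix
        return self.out(h_n[-1])

# Hypothetical usage: a batch of 4 prefixes, 12 tokens each.
probe = RecurrentAgreementProbe()
logits = probe(torch.randint(0, 10000, (4, 12)))
print(logits.shape)  # torch.Size([4, 2])
```

Training such a recurrent probe and a non-recurrent (e.g. fully attentional) counterpart on the same task is one way to test empirically whether recurrence matters for hierarchical structure.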
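
For the second question, the sketch below illustrates one plausible mechanism, not the paper's exact model, for inducing dependency structure inside an NMT encoder: a head-selection attention layer in which each source word produces a distribution over the other words as candidate syntactic heads. The class name and all dimensions are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HeadSelectionAttention(nn.Module):
    """Soft dependency induction: heads[b, i, j] = P(word j is the head of word i)."""
    def __init__(self, dim=256):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, enc):
        # enc: (batch, seq_len, dim) encoder states of the source sentence
        q, k = self.query(enc), self.key(enc)
        scores = torch.matmul(q, k.transpose(1, 2)) * self.scale
        # A word cannot be its own head: mask the diagonal.
        eye = torch.eye(enc.size(1), dtype=torch.bool, device=enc.device)
        scores = scores.masked_fill(eye, float("-inf"))
        heads = F.softmax(scores, dim=-1)   # soft adjacency matrix of a dependency tree
        context = torch.matmul(heads, enc)  # head-aware representations for the decoder
        return context, heads

# Hypothetical usage: 2 sentences of 8 words, 256-dim encoder states.
context, heads = HeadSelectionAttention()(torch.randn(2, 8, 256))
print(heads.shape)  # torch.Size([2, 8, 8]); each row sums to 1
```

Because the head distributions are differentiable, they can be trained jointly with the translation objective, which is consistent with the finding that the induced trees depend on the language pair.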

About the author:

Mr. Tran Manh Ke is a PhD candidate at the Informatics Institute / Information and Language Processing Systems group, University of Amsterdam, where he works on Statistical Machine Translation, advised by Christof Monz and Arianna Bisazza. He is interested in natural language processing, deep learning, and probabilistic programming. Before starting his PhD, Mr. Ke completed the EMLCT master's in computational linguistics at the University of Groningen and Charles University in Prague, where Marco Wiering and Daniel Zeman supervised his master's thesis.

Download HERE

The document was shared at “al+ AI seminar 03”
