The Context-dependent Additive Recurrent Neural Net

Contextual sequence mapping is one of the fundamental problems in Natural Language Processing. Instead of relying solely on the information presented in the text, the learning agents have access to a strong external signal provided to assist the learning process.

This paper proposes a novel family of Recurrent Neural Network units: the Context-dependent Additive Recurrent Neural Network (CARNN), designed specifically to leverage this external signal. Experimental results on public datasets for dialog (bAbI dialog Task 6 and Frames), contextual language modeling (Switchboard and the Penn Discourse Treebank) and question answering (TrecQA) show that our novel CARNN-based architectures outperform previous methods.
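
To make the idea concrete, the sketch below shows one way a recurrent cell can condition an additive, gated state update on an external context vector. It is a minimal PyTorch illustration, not the paper's exact CARNN equations: the class name `ContextGatedAdditiveCell`, the layer shapes, and the single-gate formulation are all assumptions made for this example.

```python
import torch
import torch.nn as nn

class ContextGatedAdditiveCell(nn.Module):
    """Illustrative recurrent cell whose candidate state and gate are both
    conditioned on an external context vector. A sketch in the spirit of a
    context-dependent additive RNN, not the paper's exact formulation."""

    def __init__(self, input_size: int, hidden_size: int, context_size: int):
        super().__init__()
        self.candidate = nn.Linear(input_size + context_size, hidden_size)
        self.gate = nn.Linear(input_size + context_size + hidden_size, hidden_size)

    def forward(self, x_t, h_prev, context):
        # Candidate state mixes the current input with the external context.
        x_tilde = torch.tanh(self.candidate(torch.cat([x_t, context], dim=-1)))
        # Gate decides, per dimension, how much of the candidate to admit.
        z_t = torch.sigmoid(self.gate(torch.cat([x_t, context, h_prev], dim=-1)))
        # Additive update: a convex combination of old state and candidate,
        # with no multiplicative recurrence through a nonlinearity.
        return z_t * x_tilde + (1.0 - z_t) * h_prev


# Usage: one recurrent step over a batch of 4 sequences.
cell = ContextGatedAdditiveCell(input_size=16, hidden_size=32, context_size=8)
x_t = torch.randn(4, 16)
h_prev = torch.zeros(4, 32)
context = torch.randn(4, 8)
h_t = cell(x_t, h_prev, context)  # shape: (4, 32)
```

The additive form keeps the previous hidden state on a direct path to the new one, so the external signal steers the update through the gate rather than being folded into a recurrent nonlinearity.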

Authors:

  • Quan Hung Tran – Monash University, Clayton, Australia; Adobe Research, San Jose, CA
  • Tuan Manh Lai – Adobe Research, San Jose, CA
  • Gholamreza Haffari – Monash University, Clayton, Australia
  • Ingrid Zukerman – Monash University, Clayton, Australia
  • Trung Bui – Adobe Research, San Jose, CA
  • Hung Bui – DeepMind, Mountain View, CA; Monash University, Clayton, Australia
