Seminar on Natural Language Processing and Neural Network Architecture

February 26, 2018

Yangfeng Ji will present a seminar on deep learning, a learning technique used in natural language processing, on Thursday, March 1, at 11am, in Rice Hall 242. His talk, "Bringing Structural Information into Neural Network Design," will look at examples of using structural information from natural language to improve the design of neural network models. Ji is visiting UVA as a candidate for a faculty position in the Department of Computer Science.

Abstract: Deep learning is one of the most popular learning techniques used in natural language processing (NLP). A central question in deep learning for NLP is how to design a neural network that can fully utilize the information from training data and make accurate predictions. A key to solving this problem is designing a better network architecture. In this talk, I will present two examples from my work on how structural information from natural language helps design better neural network models. The first example shows that adding coreference structures of entities not only helps different aspects of text modeling, but also improves the performance of language generation; the second example demonstrates that the structures organizing sentences into coherent texts can help neural networks build better representations for various text classification tasks. Along these lines, I will also propose some ideas for future work and discuss the potential challenges.
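To give a flavor of the first idea, the toy sketch below illustrates one way coreference structure can be injected into a text model: mentions that belong to the same coreference cluster (e.g., "president" and "she") share an entity vector that is added to their token vectors, letting the model tie information across mentions. This is a minimal, hypothetical illustration of the general idea, not the architecture presented in the talk; all embeddings, cluster assignments, and numbers are made up, and in practice these vectors would be learned.

```python
# Toy token embeddings (in a real model these would be learned).
token_emb = {
    "the":       [0.1, 0.0, 0.2, 0.0],
    "president": [0.5, 0.3, 0.1, 0.2],
    "she":       [0.0, 0.4, 0.3, 0.1],
    "said":      [0.2, 0.2, 0.0, 0.5],
}

# Hypothetical coreference structure: both mentions refer to entity 1.
entity_of = {"president": 1, "she": 1}
entity_emb = {1: [1.0, -1.0, 0.5, 0.5]}  # one shared vector per entity

def encode(tokens):
    """Return token vectors, adding the shared entity vector to every
    mention in a coreference cluster so coreferent mentions are linked."""
    reps = []
    for t in tokens:
        vec = list(token_emb[t])  # copy the base token embedding
        eid = entity_of.get(t)
        if eid is not None:
            vec = [v + e for v, e in zip(vec, entity_emb[eid])]
        reps.append(vec)
    return reps
```

Because "president" and "she" receive the same entity vector, their representations are shifted by an identical offset, which is one simple way a network can exploit the coreference structure of a text.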