
NLP

Since 2018, pre-training has without a doubt become one of the hottest research topics in Natural Language Processing (NLP). By leveraging generalized language models such as BERT, GPT, and XLNet, great breakthroughs have been achieved in natural language understanding. However, in sequence-to-sequence language generation tasks, these popular pre-training methods have not achieved significant improvements. Researchers from Microsoft Research Asia have introduced MASS, a new pre-training method that achieves better results than BERT and GPT.

The general building blocks of these models, however, are still found in all current neural language and word embedding models. These are:

  1. Embedding Layer: a layer that generates word embeddings by multiplying an index vector with a word embedding matrix;
  2. Intermediate Layer(s): one or more layers that produce an intermediate representation of the input, e.g. a fully-connected layer that applies a non-linearity to the concatenation of word embeddings of $n$ previous words;
  3. Softmax Layer: the final layer that produces a probability distribution over words in the vocabulary $V$.

The softmax layer is a core part of many current neural network architectures. When the number of output classes is very large, such as in the case of language modelling, computing the softmax becomes very expensive.
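To make this cost concrete, here is a minimal NumPy sketch: producing a single next-word distribution requires a score for every vocabulary word, so the work grows linearly with $|V|$. The sizes below are illustrative assumptions, not values from the text.

```python
import numpy as np

def softmax(scores):
    """Numerically stable softmax over a vector of vocabulary scores."""
    shifted = scores - scores.max()      # subtract the max to avoid overflow in exp
    exp = np.exp(shifted)
    return exp / exp.sum()

# Illustrative sizes (assumptions): a 50k-word vocabulary, 512-dim hidden state.
V, d = 50_000, 512
hidden = np.random.randn(d)              # top-layer representation of the context
output_weights = np.random.randn(V, d)   # one score vector per vocabulary word

scores = output_weights @ hidden         # O(|V| * d) multiply-adds ...
probs = softmax(scores)                  # ... plus |V| exponentials per prediction
assert np.isclose(probs.sum(), 1.0)
```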

Word Embedding and Language Model

In a computer, words are always stored as strings.

Language is made of discrete structures, yet neural networks operate on continuous data: vectors in high-dimensional space. Strings admit no meaningful arithmetic, so a successful language-processing network needs an embedding that maps strings into vectors, translating the symbolic information into some kind of geometric representation. Word embeddings provide two well-known examples of what this geometry can encode: distance corresponds to semantic similarity, while certain directions correspond to polarities (e.g. male vs. female).
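As a toy illustration of that geometry (the 4-dimensional vectors below are invented for the example; real embeddings are learned and much higher-dimensional):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: close to 1 for similar directions, lower otherwise."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Invented 4-d vectors, purely for illustration.
emb = {
    "king":  np.array([0.8, 0.7, 0.1, 0.9]),
    "queen": np.array([0.8, 0.7, 0.9, 0.1]),
    "man":   np.array([0.2, 0.1, 0.1, 0.9]),
    "woman": np.array([0.2, 0.1, 0.9, 0.1]),
}

# Distance encodes similarity: "king" is closer to "queen" than to "woman".
print(cosine(emb["king"], emb["queen"]), cosine(emb["king"], emb["woman"]))

# A direction encodes the male/female polarity: king - man == queen - woman here.
print(np.allclose(emb["king"] - emb["man"], emb["queen"] - emb["woman"]))
```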

Probabilistic Language Model

Language Modeling (LM) estimates the probability of a word given the previous words in a sentence: $P(x_t\mid x_1,\cdots, x_{t-1},\theta)$. Formally, the model takes the inputs $(x_1,\cdots, x_{t-1})$ and predicts the output $x_t$, where $x_t$ is the label predicted from the final (i.e. top-layer) representation of the token $x_{t-1}$.
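Applying this conditional repeatedly (the chain rule) gives the probability of an entire sequence:

$$P(x_1,\cdots,x_T\mid\theta)=\prod_{t=1}^{T}P(x_t\mid x_1,\cdots,x_{t-1},\theta).$$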

Neural Language Model

The Neural Probabilistic Language Model can be summarized as follows:

  1. associate with each word in the vocabulary a distributed word feature vector (a real valued vector in $\mathbb{R}^m$),
  2. express the joint probability function of word sequences in terms of the feature vectors of these words in the sequence, and
  3. learn simultaneously the word feature vectors and the parameters of that probability function.
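A minimal PyTorch sketch of these three steps (the class name `NPLM` and the layer sizes are illustrative assumptions, not values from the text):

```python
import torch
import torch.nn as nn

class NPLM(nn.Module):
    """Sketch of a Bengio-style neural probabilistic language model."""
    def __init__(self, vocab_size=10_000, n_prev=4, emb_dim=100, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)          # step 1: word feature vectors in R^m
        self.hidden = nn.Linear(n_prev * emb_dim, hidden_dim)   # step 2: function of the context words
        self.out = nn.Linear(hidden_dim, vocab_size)            # softmax layer over the vocabulary

    def forward(self, prev_words):                 # prev_words: (batch, n_prev) word indices
        e = self.embed(prev_words).flatten(1)      # concatenate the n previous word embeddings
        h = torch.tanh(self.hidden(e))
        return self.out(h)                         # logits; softmax + cross-entropy during training

model = NPLM()
logits = model(torch.randint(0, 10_000, (2, 4)))   # two contexts of 4 previous words
# Step 3: the embeddings and the other parameters receive gradients from the
# same cross-entropy loss, so they are learned jointly.
print(logits.shape)                                # torch.Size([2, 10000])
```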

When a word $w$ appears in a text, its context is the set of words that appear nearby (within a fixed-size window). We can use the many contexts of $w$ to build up a representation of $w$.
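For example, a small helper (written here just for illustration) that collects the window context of one occurrence of a word:

```python
def contexts(tokens, center, window=2):
    """Return the words within a fixed-size window around position `center`."""
    left = max(0, center - window)
    return [w for i, w in enumerate(tokens[left:center + window + 1], start=left)
            if i != center]

sentence = "government debts and banking crises shook the markets".split()
print(contexts(sentence, sentence.index("banking")))
# ['debts', 'and', 'crises', 'shook']
```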

Attention

Attention mechanisms in neural networks, otherwise known as neural attention or just attention, have recently attracted a lot of attention (pun intended). In this post, I will try to find a common denominator for different mechanisms and use-cases and I will describe (and implement!) two mechanisms of soft visual attention.

An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.
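A NumPy sketch of one common instance, scaled dot-product attention, where the compatibility function is a scaled dot product between query and key (the shapes below are illustrative):

```python
import numpy as np

def attention(Q, K, V):
    """Each output row is a weighted sum of the value rows, with weights given
    by softmax-normalized, scaled query-key dot products."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                # compatibility of each query with each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over the keys
    return weights @ V                             # weighted sum of the values

# Illustrative shapes: 3 queries, 5 key-value pairs, dimension 8.
Q = np.random.randn(3, 8)
K = np.random.randn(5, 8)
V = np.random.randn(5, 8)
print(attention(Q, K, V).shape)                    # (3, 8)
```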

Graph Attention Networks

Transformer

Representation learning forms the foundation of today’s natural language processing systems; Transformer models have been extremely effective at producing word- and sentence-level contextualized representations, achieving state-of-the-art results in many NLP tasks. However, applying these models to produce contextualized representations of entire documents faces challenges, including a lack of inter-document relatedness information, decreased performance in low-resource settings, and computational inefficiency when scaling to long documents. In this talk, I will describe three recent works on developing Transformer-based models that target document-level natural language tasks.

BERT

GPT

DeepSpeed

DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
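A minimal sketch of the typical usage pattern, assuming a PyTorch model and a launch via the `deepspeed` launcher (e.g. `deepspeed train.py`); the toy model and the config values below are illustrative assumptions, not recommendations:

```python
import torch
import deepspeed

# Toy model; real workloads wrap a full network here.
model = torch.nn.Linear(1024, 1024)
ds_config = {
    "train_batch_size": 32,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 1},   # partition optimizer states across GPUs
}

# deepspeed.initialize wraps the model in an engine that handles distributed
# data parallelism, the optimizer, and (optionally) mixed precision.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

x = torch.randn(4, 1024).to(engine.device)
loss = engine(x).pow(2).mean()
engine.backward(loss)   # replaces loss.backward(); handles scaling / allreduce
engine.step()           # replaces optimizer.step(); includes ZeRO bookkeeping
```

The key design point is that `engine.backward` and `engine.step` stand in for the usual PyTorch calls, which lets DeepSpeed insert its communication and memory optimizations without changes to the model code.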