GloVe (Global Vectors) is a model for learning word vectors, created by Stanford researchers Jeffrey Pennington, Richard Socher, and Christopher Manning. Rather than a neural network, it is a log-bilinear regression model: it extracts vector representations for words from the statistics of a large corpus of text, and these representations can be used in natural language processing applications like machine translation and text understanding.

The model is an unsupervised learning algorithm: words are mapped into a meaningful vector space, where the distance between words reflects their semantic similarity. Because the vectors are fit to aggregated global co-occurrence statistics from the corpus, they also display interesting linear substructures; the classic example is that the vector for "king" minus "man" plus "woman" lands close to "queen".
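
To make this concrete, here is a minimal sketch of exploring pretrained GloVe vectors through gensim's downloader (assuming gensim is installed; the model name and the example words are our own choices, and the first call downloads roughly 130 MB of vectors):

```python
# Minimal sketch: exploring pretrained GloVe vectors with gensim.
# Assumes `pip install gensim`; api.load() fetches the vectors on first use.
import gensim.downloader as api

glove = api.load("glove-wiki-gigaword-100")  # 100-dimensional GloVe vectors

# Distance in the vector space reflects semantic similarity.
print(glove.similarity("ice", "steam"))    # high score: related concepts
print(glove.most_similar("frog", topn=3))  # nearest neighbours of "frog"

# Linear substructure: king - man + woman lands near queen.
print(glove.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```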

The key practical advantage of GloVe over window-scanning models such as word2vec is how it consumes the training data: rather than making a pass over every context window in the corpus, it first accumulates a global co-occurrence matrix and then fits the vectors to it, so training cost scales with the number of nonzero co-occurrence entries rather than with the raw corpus size. This makes it possible to train GloVe on very large corpora (the released vectors include a model trained on 840 billion tokens of Common Crawl) without requiring prohibitive amounts of computation time or memory.
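
To show the mechanics, here is a minimal NumPy sketch of the weighted least-squares objective from the paper, which fits word-vector dot products to the logarithm of the co-occurrence counts; the toy count matrix and the array names are illustrative only:

```python
# Minimal sketch of the GloVe objective: a weighted least-squares fit of
# word-vector dot products to log co-occurrence counts. The count matrix
# here is a random toy example, not real corpus statistics.
import numpy as np

rng = np.random.default_rng(0)
V, d = 5, 8                      # toy vocabulary size and vector dimension
X = rng.integers(0, 50, (V, V))  # toy co-occurrence counts X_ij

W = 0.1 * rng.standard_normal((V, d))   # "center" word vectors w_i
Wt = 0.1 * rng.standard_normal((V, d))  # context vectors w~_j
b = np.zeros(V)                         # center biases b_i
bt = np.zeros(V)                        # context biases b~_j

def weight(x, x_max=100.0, alpha=0.75):
    # f(x) caps the influence of very frequent pairs and zeroes out
    # pairs that never co-occur.
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0) * (x > 0)

def glove_loss(X, W, Wt, b, bt):
    # J = sum_ij f(X_ij) * (w_i . w~_j + b_i + b~_j - log X_ij)^2
    log_X = np.log(np.maximum(X, 1))  # safe log; zero counts are masked by f
    err = W @ Wt.T + b[:, None] + bt[None, :] - log_X
    return np.sum(weight(X) * err ** 2)

print(glove_loss(X, W, Wt, b, bt))
```

In the full algorithm, W, Wt, b, and bt are optimized against this loss (the paper uses AdaGrad), and the final vector for each word is typically taken as the sum of its center and context vectors.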

Further reading: "GloVe: Global Vectors for Word Representation" (Pennington, Socher, and Manning, EMNLP 2014), https://nlp.stanford.edu/projects/glove/
