πŸ“• Node [[embeddings]]
πŸ“„ Embeddings.md by @KGBicheno

embeddings

Go back to the [[AI Glossary]]

A categorical feature represented as a continuous-valued feature. Typically, an embedding is a translation of a high-dimensional vector into a low-dimensional space. For example, you can represent the words in an English sentence in either of the following two ways:

- As a million-element (high-dimensional) sparse vector in which all elements are integers. Each cell in the vector represents a separate English word; the value in a cell represents the number of times that word appears in the sentence. Since a single English sentence is unlikely to contain more than 50 words, nearly every cell in the vector will contain a 0. The few cells that aren't 0 will contain a low integer (usually 1) representing the number of times that word appeared in the sentence.
- As a several-hundred-element (low-dimensional) dense vector in which each element holds a floating-point value between 0 and 1. This is an embedding.
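The contrast between the two representations can be sketched in a few lines. This is an illustrative toy, not real data: the six-word vocabulary and the two-dimensional embedding values are made up, standing in for a million-word vocabulary and a several-hundred-dimensional embedding.

```python
# Toy vocabulary standing in for a ~1M-word English vocabulary.
vocab = ["the", "cat", "sat", "on", "mat", "dog"]
sentence = "the cat sat on the mat".split()

# Sparse representation: one integer count per vocabulary word.
# With a real vocabulary, nearly every entry would be 0.
sparse = [sentence.count(word) for word in vocab]

# Dense representation: each word maps to a short float vector
# (values here are invented). One common way to summarize a
# sentence is to average its word embeddings.
embedding_table = {
    "the": [0.1, 0.3], "cat": [0.9, 0.2], "sat": [0.4, 0.7],
    "on":  [0.2, 0.1], "mat": [0.8, 0.3], "dog": [0.85, 0.25],
}
dims = 2
dense = [sum(embedding_table[w][d] for w in sentence) / len(sentence)
         for d in range(dims)]
```

The sparse vector grows with the vocabulary; the dense one stays a fixed, small size regardless of how many words the vocabulary contains.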

In TensorFlow, embeddings are trained by backpropagating loss just like any other parameter in a neural network.
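The point that an embedding is "just another parameter" can be shown without any framework. The sketch below uses a hypothetical 2-d embedding row, a made-up target vector, and plain gradient descent on a squared-error loss; in TensorFlow the same idea is handled for you when a trainable embedding table sits inside a model being fit.

```python
# Minimal sketch (not the TensorFlow API): an embedding row is a trainable
# parameter, so the loss gradient updates it like any other weight.

embedding = {"cat": [0.5, -0.2]}   # one 2-d embedding row (made-up values)
target = [0.9, 0.1]                # hypothetical training target for "cat"
lr = 0.1                           # learning rate

# Squared-error loss L = sum((e - t)^2); gradient dL/de = 2 * (e - t).
for step in range(100):
    e = embedding["cat"]
    grad = [2 * (e[d] - target[d]) for d in range(2)]
    embedding["cat"] = [e[d] - lr * grad[d] for d in range(2)]
```

Each step moves the row a little toward whatever reduces the loss; after many steps the embedding for "cat" has been learned from the gradient signal alone.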

πŸ“„ Embeddings.md by @an_agora@twitter.com
πŸ“„ Embeddings.md by @kausch@twitter.com

RT @heikopaulheim: There’s #KnowledgeGraph #Embeddings for #LinkPrediction (Trans* etc.) and KGE for Data Mining (RDF2vec etc.) - are they…

