keras.layers.embeddings.Embedding(input_dim, output_dim, init='uniform', weights=None, W_regularizer=None, W_constraint=None)
Turn positive integers (indexes) into dense vectors of fixed size,
e.g. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]
Input shape: 2D tensor with shape: (nb_samples, maxlen).
Output shape: 3D tensor with shape: (nb_samples, maxlen, output_dim).
Arguments:
weights: list of numpy arrays to set as initial weights. The list should have 1 element, of shape (input_dim, output_dim).
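For illustration, a minimal usage sketch (assumptions not stated above: the Sequential model API from the same Keras version, and the arbitrary vocabulary size of 1000 and embedding dimension of 64):

    from keras.models import Sequential
    from keras.layers.embeddings import Embedding

    # Map each of 1000 possible token indices to a 64-dimensional dense vector.
    # Input: integer matrix of shape (nb_samples, maxlen)
    # Output: float tensor of shape (nb_samples, maxlen, 64)
    model = Sequential()
    model.add(Embedding(1000, 64, init='uniform'))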
keras.layers.embeddings.WordContextProduct(input_dim, proj_dim=128, init='uniform', activation='sigmoid', weights=None)
This layer turns a pair of words (a pivot word + a context word, i.e. a word from the same context as the pivot, or a random, out-of-context word), identified by their indices in a vocabulary, into two dense representations (word representation and context representation).
Then it returns activation(dot(pivot_embedding, context_embedding)), which can be trained to encode the probability of finding the context word in the context of the pivot word (or reciprocally, depending on your training procedure).
For more context, see Mikolov et al.: Efficient Estimation of Word Representations in Vector Space.
Input shape: 2D tensor with shape: (nb_samples, 2).
Output shape: 2D tensor with shape: (nb_samples, 1).
Arguments:
weights: list of numpy arrays to set as initial weights. The list should have 2 elements, both of shape (input_dim, proj_dim). The first element is the word embedding weights, the second one is the context embedding weights.
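As a rough training sketch (assumptions beyond the text above: the Sequential model API, the mse loss and rmsprop optimizer as placeholder choices, and hand-written example couples/labels standing in for a skipgram-style sampler that labels real in-context pairs 1 and random out-of-context pairs 0):

    import numpy as np
    from keras.models import Sequential
    from keras.layers.embeddings import WordContextProduct

    vocab_size = 1000  # hypothetical vocabulary size
    model = Sequential()
    model.add(WordContextProduct(vocab_size, proj_dim=128,
                                 init='uniform', activation='sigmoid'))
    model.compile(loss='mse', optimizer='rmsprop')

    # Each row is a (pivot_index, context_index) couple; label 1 for a real
    # in-context pair, 0 for a randomly sampled out-of-context pair.
    couples = np.array([[12, 47], [12, 803]])
    labels = np.array([[1], [0]])
    model.fit(couples, labels, nb_epoch=1)

After training, the word embedding weights (the first element returned by the layer's weights) can be extracted and used as word vectors.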