## Layer

```python
keras.layers.core.Layer()
```

Methods:

connect(previous_layer)
Connect the input of the current layer to the output of the argument layer.
Return: None.
Arguments:
previous_layer: Layer object.

output(train)
Get the output of the layer.
Return: Theano tensor.
Arguments:
train: Boolean. Specifies whether the output is computed in training mode or in testing mode, which can change the logic, for instance if there are any Dropout layers in the network.

get_input(train)
Get the input of the layer.
Return: Theano tensor.
Arguments:
train: Boolean. Specifies whether the input is computed in training mode or in testing mode, which can change the logic, for instance if there are any Dropout layers in the network.

get_weights()
Get the weights of the parameters of the layer.
Return: list of numpy arrays (one per layer parameter).

set_weights(weights)
Set the weights of the parameters of the layer.
Arguments:
weights: list of numpy arrays. The list should match the structure of what get_weights(self) returns.
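To make the weight-handling methods concrete, here is a minimal sketch, not taken from the original docs, that copies the parameters of one Dense layer into another layer of identical shape; it assumes the Dense layer described below and the list-of-numpy-arrays convention of get_weights()/set_weights().

```python
from keras.layers.core import Dense

layer_a = Dense(10, 5)            # parameters are created at construction time
layer_b = Dense(10, 5)            # a second layer with identically shaped parameters

weights = layer_a.get_weights()   # list of numpy arrays (W and b for a Dense layer)
layer_b.set_weights(weights)      # the list must match what get_weights() returns
```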
## Dense

```python
keras.layers.core.Dense(input_dim, output_dim, init='glorot_uniform', activation='linear', weights=None, \
    W_regularizer=None, b_regularizer=None, W_constraint=None, b_constraint=None)
```
Standard 1D fully-connected layer.

Input shape: 2D tensor with shape: (nb_samples, input_dim).

Output shape: 2D tensor with shape: (nb_samples, output_dim).

Arguments:
init: name of the initialization function for the weights of the layer. Only relevant if you don't pass a weights argument.
weights: list of numpy arrays to set as initial weights; the main weight matrix has shape (input_dim, output_dim).
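As a usage sketch (the layer sizes below are arbitrary and not from the original docs), Dense layers are typically stacked inside a Sequential model:

```python
from keras.models import Sequential
from keras.layers.core import Dense

model = Sequential()
# input shape: (nb_samples, 784)
model.add(Dense(784, 128, activation='relu'))    # output shape: (nb_samples, 128)
model.add(Dense(128, 10, activation='softmax'))  # output shape: (nb_samples, 10)
```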
## TimeDistributedDense

```python
keras.layers.core.TimeDistributedDense(input_dim, output_dim, init='glorot_uniform', activation='linear', weights=None, \
    W_regularizer=None, b_regularizer=None, W_constraint=None, b_constraint=None)
```
Fully-connected layer distributed over the time dimension. Useful after a recurrent network set to return_sequences=True.

Input shape: 3D tensor with shape: (nb_samples, nb_timesteps, input_dim).

Output shape: 3D tensor with shape: (nb_samples, nb_timesteps, output_dim).

Arguments:
init: name of the initialization function for the weights of the layer. Only relevant if you don't pass a weights argument.
weights: list of numpy arrays to set as initial weights; the main weight matrix has shape (input_dim, output_dim).
Example:
```python
# input shape: (nb_samples, nb_timesteps, 10)
model.add(LSTM(10, 5, return_sequences=True)) # output shape: (nb_samples, nb_timesteps, 5)
model.add(TimeDistributedDense(5, 10)) # output shape: (nb_samples, nb_timesteps, 10)
```
## AutoEncoder

```python
keras.layers.core.AutoEncoder(encoder, decoder, output_reconstruction=True, tie_weights=False, weights=None)
```
A customizable autoencoder model. If output_reconstruction=True then dim(input) == dim(output); otherwise dim(output) == dim(hidden).
Input shape: The layer shape is defined by the encoder definitions
Output shape: The layer shape is defined by the decoder definitions
Arguments:
encoder: A layer or layer container.
decoder: A layer or layer container.
output_reconstruction: If this is False, then when .predict() is called the output is the deepest hidden layer's activation. Otherwise, the output of the final decoder layer is presented. Be sure your validation data conforms to this logic if you decide to use any.
tie_weights: If True then the encoder bias is tied to the decoder bias. Note: this requires the encoder layer corresponding to each decoder layer to be of the same type, e.g. Dense:Dense.
weights: list of numpy arrays to set as initial weights. The list should have 1 element, of shape (input_dim, output_dim).
Example:
```python
from keras.models import Sequential
from keras.layers import containers
from keras.layers.core import Dense, AutoEncoder

# input shape: (nb_samples, 32)
encoder = containers.Sequential([Dense(32, 16), Dense(16, 8)])
decoder = containers.Sequential([Dense(8, 16), Dense(16, 32)])

autoencoder = Sequential()
autoencoder.add(AutoEncoder(encoder=encoder, decoder=decoder, output_reconstruction=False, tie_weights=True))
```
---
## DenoisingAutoEncoder
```python
keras.layers.core.DenoisingAutoEncoder(encoder, decoder, output_reconstruction=True, tie_weights=False, weights=None, corruption_level=0.3)
```
A denoising autoencoder model that inherits the base features from autoencoder. Since this layer uses similar logic to Dropout it cannot be the first layer in a pipeline.
Input shape: The layer shape is defined by the encoder definitions
Output shape: The layer shape is defined by the decoder definitions
Arguments:
encoder: A layer or layer container.
decoder: A layer or layer container.
output_reconstruction: If this is False, then when .predict() is called the output is the deepest hidden layer's activation. Otherwise, the output of the final decoder layer is presented. Be sure your validation data conforms to this logic if you decide to use any.
tie_weights: If True then the encoder bias is tied to the decoder bias. Note: this requires the encoder layer corresponding to each decoder layer to be of the same type, e.g. Dense:Dense.
weights: list of numpy arrays to set as initial weights. The list should have 1 element, of shape (input_dim, output_dim).
corruption_level: the amount of binomial noise added to the input layer of the model.
Example:
```python
autoencoder = Sequential()
# input shape: (nb_samples, 32)
autoencoder.add(Dense(32, 32))
autoencoder.add(DenoisingAutoEncoder(encoder=Dense(32, 16),
                                     decoder=Dense(16, 32),
                                     output_reconstruction=False, tie_weights=True,
                                     corruption_level=0.3))
```
## Activation

```python
keras.layers.core.Activation(activation)
```

Apply an activation function to the input.

Input shape: This layer does not assume a specific input shape. As a result, it cannot be used as the first layer in a model.

Output shape: Same as input.

Arguments:
activation: name of the activation function to use (see: activations), or alternatively, an elementwise Theano function.
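For illustration, a minimal sketch (layer sizes are arbitrary) of applying a named activation to the output of a linear Dense layer:

```python
from keras.models import Sequential
from keras.layers.core import Dense, Activation

model = Sequential()
model.add(Dense(20, 64))         # linear output, shape (nb_samples, 64)
model.add(Activation('tanh'))    # elementwise tanh, same shape as its input
```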
## Dropout

```python
keras.layers.core.Dropout(p)
```

Apply dropout to the input. Dropout consists in randomly setting a fraction p of the input units to 0 at each update during training time, which helps prevent overfitting. Reference: Dropout: A Simple Way to Prevent Neural Networks from Overfitting.

Input shape: This layer does not assume a specific input shape.

Output shape: Same as input.

Arguments:
p: float between 0 and 1. Fraction of the input units to drop.
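A short sketch (sizes arbitrary, not from the original docs) of inserting Dropout between two Dense layers; the fraction p only takes effect at training time:

```python
from keras.models import Sequential
from keras.layers.core import Dense, Dropout

model = Sequential()
model.add(Dense(784, 128, activation='relu'))
model.add(Dropout(0.5))                          # randomly zero half of the 128 units during training
model.add(Dense(128, 10, activation='softmax'))
```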
## Reshape

```python
keras.layers.core.Reshape(*dims)
```

Reshape the input to a new shape containing the same number of units.

Input shape: This layer does not assume a specific input shape.

Output shape: (nb_samples, *dims).

Arguments:
*dims: integers defining the target shape (not including the samples dimension).
Example:
```python
# input shape: (nb_samples, 10)
model.add(Dense(10, 100)) # output shape: (nb_samples, 100)
model.add(Reshape(10, 10)) # output shape: (nb_samples, 10, 10)
```
## Flatten

```python
keras.layers.core.Flatten()
```

Convert an nD input to 1D.

Input shape: (nb_samples, *). This layer cannot be used as the first layer in a model.

Output shape: (nb_samples, nb_input_units).
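As a sketch that reuses the Reshape example above (sizes arbitrary), Flatten collapses everything after the samples dimension back into a single axis:

```python
from keras.models import Sequential
from keras.layers.core import Dense, Reshape, Flatten

model = Sequential()
# input shape: (nb_samples, 10)
model.add(Dense(10, 100))    # output shape: (nb_samples, 100)
model.add(Reshape(10, 10))   # output shape: (nb_samples, 10, 10)
model.add(Flatten())         # output shape: (nb_samples, 100)
```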
## RepeatVector

```python
keras.layers.core.RepeatVector(n)
```

Repeat the 1D input n times. Dimensions of the input are assumed to be (nb_samples, dim). The output will have the shape (nb_samples, n, dim).

Input shape: This layer does not assume a specific input shape. This layer cannot be used as the first layer in a model.

Output shape: (nb_samples, n, input_dims).

Arguments:
n: integer, repetition factor.

Example:
```python
# input shape: (nb_samples, 10)
model.add(RepeatVector(2)) # output shape: (nb_samples, 2, 10)
```
## MaxoutDense

```python
keras.layers.core.MaxoutDense(input_dim, output_dim, nb_feature=4, init='glorot_uniform', weights=None, \
    W_regularizer=None, b_regularizer=None, W_constraint=None, b_constraint=None)
```
A dense maxout layer. A MaxoutDense layer takes the element-wise maximum of nb_feature Dense(input_dim, output_dim) linear layers. This allows the layer to learn a convex, piecewise linear activation function over the inputs. See the Maxout Networks paper (Goodfellow et al.) for more details. Note that this is a linear layer -- if you wish to apply an activation function (you shouldn't need to -- maxout layers are universal function approximators), an Activation layer must be added after it.

Input shape: 2D tensor with shape: (nb_samples, input_dim).

Output shape: 2D tensor with shape: (nb_samples, output_dim).

Arguments:
init: name of the initialization function for the weights of the layer. Only relevant if you don't pass a weights argument.
weights: list of numpy arrays to set as initial weights; each per-feature weight matrix has shape (input_dim, output_dim).
Example:
```python
# input shape: (nb_samples, 10)
model.add(Dense(10, 100)) # output shape: (nb_samples, 100)
model.add(MaxoutDense(100, 100, nb_feature=10)) # output shape: (nb_samples, 100)
```
## Merge

```python
keras.layers.core.Merge(models, mode='sum')
```

Merge the output of a list of models into a single tensor, following one of two modes: sum or concat.

Arguments:
models: list of Sequential models.
mode: one of {'sum', 'concat'}. sum will simply sum the outputs of the models (therefore all models should have an output with the same shape). concat will concatenate the outputs along the last dimension (therefore all models should have outputs that only differ along the last dimension).

Example:
```python
left = Sequential()
left.add(Dense(784, 50))
left.add(Activation('relu'))
right = Sequential()
right.add(Dense(784, 50))
right.add(Activation('relu'))
model = Sequential()
model.add(Merge([left, right], mode='sum'))
model.add(Dense(50, 10))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')
model.fit([X_train, X_train], Y_train, batch_size=128, nb_epoch=20, validation_data=([X_test, X_test], Y_test))
```
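For comparison, a sketch of the same two-branch model using mode='concat' instead of sum (assuming left and right are defined as in the example above); the concatenated output has 50 + 50 = 100 units, so the next Dense layer must take 100 inputs:

```python
model = Sequential()
model.add(Merge([left, right], mode='concat'))  # output shape: (nb_samples, 100)
model.add(Dense(100, 10))
model.add(Activation('softmax'))
```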