Base class

keras.layers.core.Layer()

Methods:

connect(previous_layer)

Connect the input of the current layer to the output of the argument layer.

  • Return: None.

  • Arguments:

    • previous_layer: Layer object.
output(train)

Get the output of the layer.

  • Return: Theano tensor.

  • Arguments:

    • train: Boolean. Specifies whether the output is computed in training mode or in testing mode, which can change the logic, for instance if there are any Dropout layers in the network.
get_input(train)

Get the input of the layer.

  • Return: Theano tensor.

  • Arguments:

    • train: Boolean. Specifies whether the input is computed in training mode or in testing mode, which can change the logic, for instance if there are any Dropout layers in the network.
get_weights()

Get the weights of the parameters of the layer.

  • Return: List of numpy arrays (one per layer parameter).
set_weights(weights)

Set the weights of the parameters of the layer.

  • Arguments:
    • weights: List of numpy arrays (one per layer parameter). Should be in the same order as what get_weights(self) returns.
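
  • Example (a minimal sketch of copying parameters between two layers; the variables layer_a and layer_b are hypothetical and assumed to hold identically-shaped parameters):

    weights = layer_a.get_weights()  # list of numpy arrays, one per layer parameter
    layer_b.set_weights(weights)     # same order and shapes as returned by get_weights()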

Dense

keras.layers.core.Dense(input_dim, output_dim, init='glorot_uniform', activation='linear', weights=None, \
    W_regularizer=None, b_regularizer=None, W_constraint=None, b_constraint=None)

Standard 1D fully-connected layer.

  • Input shape: 2D tensor with shape: (nb_samples, input_dim).

  • Output shape: 2D tensor with shape: (nb_samples, output_dim).

  • Arguments:

    • input_dim: int >= 0.
    • output_dim: int >= 0.
    • init: name of initialization function for the weights of the layer (see: initializations), or alternatively, Theano function to use for weights initialization. This parameter is only relevant if you don't pass a weights argument.
    • activation: name of activation function to use (see: activations), or alternatively, elementwise Theano function. If you don't specify anything, no activation is applied (ie. "linear" activation: a(x) = x).
    • weights: list of numpy arrays to set as initial weights. The list should have 2 elements, of shapes (input_dim, output_dim) and (output_dim,) for the weights and biases respectively.
    • W_regularizer: instance of the regularizers module (eg. L1 or L2 regularization), applied to the main weights matrix.
    • b_regularizer: instance of the regularizers module, applied to the bias.
    • W_constraint: instance of the constraints module (eg. maxnorm, nonneg), applied to the main weights matrix.
    • b_constraint: instance of the constraints module, applied to the bias.
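
  • Example (a minimal sketch, assuming a Sequential model named model):

    # input shape: (nb_samples, 20)
    model.add(Dense(20, 64, activation='relu')) # output shape: (nb_samples, 64)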

TimeDistributedDense

keras.layers.core.TimeDistributedDense(input_dim, output_dim, init='glorot_uniform', activation='linear', weights=None, \
    W_regularizer=None, b_regularizer=None, W_constraint=None, b_constraint=None)

Fully-connected layer distributed over the time dimension (the same weights are applied at every timestep). Useful after a recurrent layer with return_sequences=True.

  • Input shape: 3D tensor with shape: (nb_samples, nb_timesteps, input_dim).

  • Output shape: 3D tensor with shape: (nb_samples, nb_timesteps, output_dim).

  • Arguments:

    • input_dim: int >= 0.
    • output_dim: int >= 0.
    • init: name of initialization function for the weights of the layer (see: initializations), or alternatively, Theano function to use for weights initialization. This parameter is only relevant if you don't pass a weights argument.
    • activation: name of activation function to use (see: activations), or alternatively, elementwise Theano function. If you don't specify anything, no activation is applied (ie. "linear" activation: a(x) = x).
    • weights: list of numpy arrays to set as initial weights. The list should have 2 elements, of shapes (input_dim, output_dim) and (output_dim,) for the weights and biases respectively.
    • W_regularizer: instance of the regularizers module (eg. L1 or L2 regularization), applied to the main weights matrix.
    • b_regularizer: instance of the regularizers module, applied to the bias.
    • W_constraint: instance of the constraints module (eg. maxnorm, nonneg), applied to the main weights matrix.
    • b_constraint: instance of the constraints module, applied to the bias.
  • Example:

    # input shape: (nb_samples, nb_timesteps, 10)
    model.add(LSTM(10, 5, return_sequences=True)) # output shape: (nb_samples, nb_timesteps, 5)
    model.add(TimeDistributedDense(5, 10)) # output shape: (nb_samples, nb_timesteps, 10)
    

AutoEncoder

keras.layers.core.AutoEncoder(encoder, decoder, output_reconstruction=True, tie_weights=False, weights=None)

A customizable autoencoder model. If output_reconstruction=True then dim(input) = dim(output); otherwise dim(output) = dim(hidden).

  • Input shape: The layer shape is defined by the encoder definitions

  • Output shape: The layer shape is defined by the decoder definitions

  • Arguments:

    • encoder: A layer or layer container.

    • decoder: A layer or layer container.

    • output_reconstruction: If this is False then when .predict() is called the output is the deepest hidden layer's activation. Otherwise the output of the final decoder layer is presented. Be sure your validation data conforms to this logic if you decide to use any.

    • tie_weights: If True then the encoder bias is tied to the decoder bias. Note: this requires the encoder layer corresponding to each decoder layer to be of the same type, e.g. Dense:Dense.

    • weights: list of numpy arrays to set as initial weights. The list should have 1 element, of shape (input_dim, output_dim).

  • Example:

    from keras.layers import containers

    # input shape: (nb_samples, 32)
    encoder = containers.Sequential([Dense(32, 16), Dense(16, 8)])
    decoder = containers.Sequential([Dense(8, 16), Dense(16, 32)])
    autoencoder.add(AutoEncoder(encoder=encoder, decoder=decoder, output_reconstruction=False, tie_weights=True))


DenoisingAutoEncoder

keras.layers.core.DenoisingAutoEncoder(encoder, decoder, output_reconstruction=True, tie_weights=False, weights=None, corruption_level=0.3)

A denoising autoencoder model that inherits the base features from autoencoder. Since this layer uses similar logic to Dropout it cannot be the first layer in a pipeline.

  • Input shape: The layer shape is defined by the encoder definitions

  • Output shape: The layer shape is defined by the decoder definitions

  • Arguments:

    • encoder: A layer or layer container.

    • decoder: A layer or layer container.

    • output_reconstruction: If this is False then when .predict() is called the output is the deepest hidden layer's activation. Otherwise the output of the final decoder layer is presented. Be sure your validation data conforms to this logic if you decide to use any.

    • tie_weights: If True then the encoder bias is tied to the decoder bias. Note: this requires the encoder layer corresponding to each decoder layer to be of the same type, e.g. Dense:Dense.

    • weights: list of numpy arrays to set as initial weights. The list should have 1 element, of shape (input_dim, output_dim).

    • corruption_level: the amount of binomial noise added to the input layer of the model.

  • Example:

    # input shape: (nb_samples, 32)
    autoencoder.add(Dense(32, 32))
    autoencoder.add(DenoisingAutoEncoder(encoder=Dense(32, 16),
                                         decoder=Dense(16, 32),
                                         output_reconstruction=False, tie_weights=True,
                                         corruption_level=0.3))
    

Activation

keras.layers.core.Activation(activation)

Apply an activation function to the input.

  • Input shape: This layer does not assume a specific input shape. As a result, it cannot be used as the first layer in a model.

  • Output shape: Same as input.

  • Arguments:

    • activation: name of activation function to use (see: activations), or alternatively, elementwise Theano function.
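
  • Example (a minimal sketch, assuming a Sequential model named model):

    model.add(Dense(20, 64))       # linear output, shape (nb_samples, 64)
    model.add(Activation('tanh'))  # elementwise tanh; output shape unchanged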

Dropout

keras.layers.core.Dropout(p)

Apply dropout to the input. Dropout consists in randomly setting a fraction p of input units to 0 at each update during training time, which helps prevent overfitting. Reference: Dropout: A Simple Way to Prevent Neural Networks from Overfitting

  • Input shape: This layer does not assume a specific input shape.

  • Output shape: Same as input.

  • Arguments:

    • p: float (0 <= p < 1). Fraction of the input that gets dropped out at training time.
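
  • Example (a minimal sketch, assuming a Sequential model named model):

    model.add(Dense(20, 64, activation='relu'))
    model.add(Dropout(0.5))  # at training time, each of the 64 units is dropped with probability 0.5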

Reshape

keras.layers.core.Reshape(*dims)

Reshape the input to a new shape containing the same number of units.

  • Input shape: This layer does not assume a specific input shape.

  • Output shape: (nb_samples, *dims).

  • Arguments:

    • *dims: integers. Dimensions of the new shape.
  • Example:

    # input shape: (nb_samples, 10)
    model.add(Dense(10, 100)) # output shape: (nb_samples, 100)
    model.add(Reshape(10, 10))  # output shape: (nb_samples, 10, 10)
    

Flatten

keras.layers.core.Flatten()

Convert an nD input to 1D.

  • Input shape: (nb_samples, *). This layer cannot be used as the first layer in a model.

  • Output shape: (nb_samples, nb_input_units).
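
  • Example (a minimal sketch, assuming a Sequential model named model whose previous layer outputs shape (nb_samples, 10, 10)):

    model.add(Flatten())  # output shape: (nb_samples, 100)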


RepeatVector

keras.layers.core.RepeatVector(n)

Repeat the 1D input n times. Dimensions of input are assumed to be (nb_samples, dim). Output will have the shape (nb_samples, n, dim).

  • Input shape: This layer does not assume a specific input shape. This layer cannot be used as the first layer in a model.

  • Output shape: (nb_samples, n, input_dims).

  • Arguments:

    • n: int.
  • Example:

    # input shape: (nb_samples, 10)
    model.add(RepeatVector(2))  # output shape: (nb_samples, 2, 10)

MaxoutDense

keras.layers.core.MaxoutDense(input_dim, output_dim, nb_feature=4, init='glorot_uniform', weights=None, \
        W_regularizer=None, b_regularizer=None, W_constraint=None, b_constraint=None)

A dense maxout layer. A MaxoutDense layer takes the element-wise maximum of nb_feature Dense(input_dim, output_dim) linear layers. This allows the layer to learn a convex, piecewise linear activation function over the inputs. See the Maxout Networks paper (Goodfellow et al., 2013) for more details. Note that this is a linear layer; if you wish to apply an activation function (you shouldn't need to, since maxout units are universal function approximators), an Activation layer must be added after.
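
The computation can be sketched in plain numpy as follows (stacking the nb_feature weight matrices along a leading axis is an assumption made for illustration, not necessarily the layer's internal layout):

    import numpy as np

    # X: (nb_samples, input_dim), W: (nb_feature, input_dim, output_dim), b: (nb_feature, output_dim)
    def maxout_dense(X, W, b):
        # one linear projection per feature, then the element-wise maximum across the nb_feature projections
        projections = np.einsum('ni,kio->kno', X, W) + b[:, None, :]  # (nb_feature, nb_samples, output_dim)
        return projections.max(axis=0)                                # (nb_samples, output_dim)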

  • Input shape: 2D tensor with shape: (nb_samples, input_dim).

  • Output shape: 2D tensor with shape: (nb_samples, output_dim).

  • Arguments:

    • input_dim: int >= 0.
    • output_dim: int >= 0.
    • nb_feature: int >= 0. the number of features to create for the maxout. This is equivalent to the number of piecewise elements to be allowed for the activation function.
    • init: name of initialization function for the weights of the layer (see: initializations), or alternatively, Theano function to use for weights initialization. This parameter is only relevant if you don't pass a weights argument.
    • weights: list of numpy arrays to set as initial weights. The list should have 2 elements, of shapes (nb_feature, input_dim, output_dim) and (nb_feature, output_dim) for the weights and biases respectively.
    • W_regularizer: instance of the regularizers module (eg. L1 or L2 regularization), applied to the main weights matrix.
    • b_regularizer: instance of the regularizers module, applied to the bias.
    • W_constraint: instance of the constraints module (eg. maxnorm, nonneg), applied to the main weights matrix.
    • b_constraint: instance of the constraints module, applied to the bias.
  • Example:

    # input shape: (nb_samples, 10)
    model.add(Dense(10, 100)) # output shape: (nb_samples, 100)
    model.add(MaxoutDense(100, 100, nb_feature=10)) # output shape: (nb_samples, 100)

Merge

keras.layers.core.Merge(models, mode='sum')

Merge the output of a list of models into a single tensor, following one of two modes: sum or concat.

  • Arguments:

    • models: List of Sequential models.
    • mode: String, one of {'sum', 'concat'}. sum will simply sum the outputs of the models (therefore all models should have an output with the same shape). concat will concatenate the outputs along the last dimension (therefore all models should have outputs that differ only along the last dimension).
  • Example:

left = Sequential()
left.add(Dense(784, 50))
left.add(Activation('relu'))

right = Sequential()
right.add(Dense(784, 50))
right.add(Activation('relu'))

model = Sequential()
model.add(Merge([left, right], mode='sum'))

model.add(Dense(50, 10))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

model.fit([X_train, X_train], Y_train, batch_size=128, nb_epoch=20, validation_data=([X_test, X_test], Y_test))
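
  • Example with mode='concat' (a minimal sketch; it reuses the left and right models defined above, each outputting 50 units):

    model = Sequential()
    model.add(Merge([left, right], mode='concat'))  # output shape: (nb_samples, 100)
    model.add(Dense(100, 10))
    model.add(Activation('softmax'))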