keras.layers.advanced_activations.LeakyReLU(alpha=0.3)
Special version of a Rectified Linear Unit that allows a small gradient when the unit is not active: f(x) = alpha * x for x < 0, f(x) = x for x >= 0.
Input shape: This layer does not assume a specific input shape. As a result, it cannot be used as the first layer in a model.
Output shape: Same as input.
Arguments:
alpha: float >= 0. Negative slope coefficient.
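A minimal usage sketch, assuming the Sequential API contemporary with this documentation; the Dense(input_dim, output_dim) layer and the 784/64 sizes are illustrative assumptions, not part of the LeakyReLU API:

    from keras.models import Sequential
    from keras.layers.core import Dense
    from keras.layers.advanced_activations import LeakyReLU

    model = Sequential()
    # The Dense layer keeps its default linear activation, so the
    # LeakyReLU layer below supplies the non-linearity.
    model.add(Dense(784, 64))  # assumed old-style Dense(input_dim, output_dim); sizes are illustrative
    # f(x) = x for x >= 0, f(x) = alpha * x for x < 0
    model.add(LeakyReLU(alpha=0.3))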
keras.layers.advanced_activations.PReLU(input_shape)
Parametric Rectified Linear Unit. Similar to a LeakyReLU, but each input unit has its own alpha coefficient, and these coefficients are learned during training.
Input shape: The shape given by the input_shape argument. This layer cannot be used as the first layer in a model.
Output shape: Same as input.
Arguments:
input_shape: tuple of integers. Shape of the input, excluding the batch dimension; one alpha coefficient is learned per input unit.
References:
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (He et al., 2015)
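A minimal usage sketch in the same vein; the surrounding Dense layer and the (64,) shape are illustrative assumptions, with input_shape chosen to match the output shape of the preceding layer:

    from keras.models import Sequential
    from keras.layers.core import Dense
    from keras.layers.advanced_activations import PReLU

    model = Sequential()
    model.add(Dense(784, 64))  # assumed old-style Dense(input_dim, output_dim); sizes are illustrative
    # One learnable alpha per input unit; (64,) matches the 64 outputs
    # of the Dense layer above.
    model.add(PReLU(input_shape=(64,)))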