# Layer weight regularizers

Regularizers allow you to apply penalties on layer parameters or layer activity during optimization. These penalties are summed into the loss function that the network optimizes.
Regularization penalties are applied on a per-layer basis. The exact API will depend on the layer, but many layers (e.g. `Dense`, `Conv1D`, `Conv2D` and `Conv3D`) have a unified API.
These layers expose 3 keyword arguments:
- `kernel_regularizer`: Regularizer to apply a penalty on the layer's kernel
- `bias_regularizer`: Regularizer to apply a penalty on the layer's bias
- `activity_regularizer`: Regularizer to apply a penalty on the layer's output
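For example, a minimal sketch (the layer width and penalty strengths below are illustrative choices, not defaults):

```python
from tensorflow.keras import layers
from tensorflow.keras import regularizers

layer = layers.Dense(
    units=64,
    kernel_regularizer=regularizers.L1L2(l1=1e-5, l2=1e-4),
    bias_regularizer=regularizers.L2(1e-4),
    activity_regularizer=regularizers.L2(1e-5),
)
```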
The value returned by the `activity_regularizer` object gets divided by the input batch size so that the relative weighting between the weight regularizers and the activity regularizers does not change with the batch size.
You can access a layer's regularization penalties by calling `layer.losses` after calling the layer on inputs:
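A sketch with constants chosen so the penalty values are easy to check by hand (assuming the default zero bias initializer):

```python
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import regularizers

layer = layers.Dense(
    units=5,
    kernel_initializer='ones',
    kernel_regularizer=regularizers.L1(0.01),
    activity_regularizer=regularizers.L2(0.01),
)

tensor = tf.ones(shape=(5, 5)) * 2.0
out = layer(tensor)  # every output entry is 10.0

# Kernel penalty: 0.01 * sum(|w|) = 0.01 * 25 = 0.25
# Activity penalty: 0.01 * sum(out**2) / batch_size = 25.0 / 5 = 5.0
print(layer.losses)  # [0.25, 5.0]
```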
## Available regularizers

The following built-in regularizers are available as part of the `keras.regularizers` module:
{{autogenerated}}
## Creating custom regularizers

### Simple callables
A weight regularizer can be any callable that takes as input a weight tensor (e.g. the kernel of a `Conv2D` layer), and returns a scalar loss. Like this:
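For instance, a sketch of an L2-style penalty (the name `my_regularizer` and the `1e-3` strength are illustrative):

```python
import tensorflow as tf

def my_regularizer(x):
    # Scalar penalty proportional to the sum of squared weights.
    return 1e-3 * tf.reduce_sum(tf.square(x))
```

It can then be passed like any built-in regularizer, e.g. `layers.Dense(64, kernel_regularizer=my_regularizer)`.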
### Regularizer subclasses
If you need to configure your regularizer via various arguments (e.g. `l1` and `l2` arguments in `l1_l2`), you should implement it as a subclass of `keras.regularizers.Regularizer`.
Here's a simple example:
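(The class name and `strength` argument below are illustrative.)

```python
import tensorflow as tf
from tensorflow.keras import regularizers

class MyRegularizer(regularizers.Regularizer):

    def __init__(self, strength):
        self.strength = strength

    def __call__(self, x):
        # Return a scalar penalty computed from the weight tensor.
        return self.strength * tf.reduce_sum(tf.square(x))
```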
Optionally, you can also implement the method `get_config` and the class method `from_config` in order to support serialization -- just like with any Keras object. Example:
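Extending the sketch above: the base class's default `from_config` rebuilds the instance as `cls(**config)`, so returning the constructor arguments from `get_config` is enough here.

```python
import tensorflow as tf
from tensorflow.keras import regularizers

class MyRegularizer(regularizers.Regularizer):

    def __init__(self, strength):
        self.strength = strength

    def __call__(self, x):
        return self.strength * tf.reduce_sum(tf.square(x))

    def get_config(self):
        # Return the constructor arguments so that the default
        # from_config, which calls cls(**config), can rebuild the object.
        return {'strength': self.strength}
```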