minerva.models.nets.mlp
=======================

.. py:module:: minerva.models.nets.mlp


Classes
-------

.. autoapisummary::

   minerva.models.nets.mlp.MLP


Module Contents
---------------

.. py:class:: MLP(layer_sizes, activation_cls = nn.ReLU, *args, **kwargs)

   Bases: :py:obj:`torch.nn.Sequential`


   A multilayer perceptron (MLP) implemented as a subclass of nn.Sequential.

   The MLP is composed of a sequence of linear layers interleaved with
   activation functions (``nn.ReLU`` by default), except for the final
   layer, which remains purely linear.

   Example
   -------
   >>> mlp = MLP([10, 20, 30, 40])
   >>> print(mlp)
   MLP(
     (0): Linear(in_features=10, out_features=20, bias=True)
     (1): ReLU()
     (2): Linear(in_features=20, out_features=30, bias=True)
     (3): ReLU()
     (4): Linear(in_features=30, out_features=40, bias=True)
   )

   Initializes the MLP with the specified layer sizes.

   Parameters
   ----------
   layer_sizes : Sequence[int]
       A sequence of positive integers indicating the size of each layer.
       At least two integers are required, representing the input and
       output layers.
   activation_cls : type
       The class of the activation function to use between layers.
       Default is nn.ReLU.
   *args
       Additional arguments passed to the activation function.
   **kwargs
       Additional keyword arguments passed to the activation function.

   Raises
   ------
   AssertionError
       If fewer than two layer sizes are provided or if any layer size is
       not a positive integer.
   AssertionError
       If activation_cls does not inherit from torch.nn.Module.
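
   A minimal usage sketch, assuming a standard PyTorch installation and the
   module path shown above; the layer sizes and the ``nn.Tanh`` activation
   below are illustrative choices, not defaults of the class.

   .. code-block:: python

      import torch
      from torch import nn

      from minerva.models.nets.mlp import MLP

      # Build an MLP with a custom activation between layers.
      # Sizes [8, 16, 4] and nn.Tanh are illustrative, not required.
      mlp = MLP([8, 16, 4], activation_cls=nn.Tanh)

      # MLP subclasses nn.Sequential, so calling it runs a forward pass
      # through each linear layer and activation in order.
      x = torch.randn(32, 8)  # batch of 32 samples, 8 features each
      y = mlp(x)
      print(y.shape)  # torch.Size([32, 4])

   Per the documented signature, extra positional and keyword arguments are
   forwarded to the activation constructor, e.g.
   ``MLP([8, 16, 4], nn.LeakyReLU, negative_slope=0.1)`` configures the
   activation without subclassing.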