minerva.models.nets.time_series.cnns
=====================================

.. py:module:: minerva.models.nets.time_series.cnns


Classes
-------

.. autoapisummary::

   minerva.models.nets.time_series.cnns.CNN_HaEtAl_1D
   minerva.models.nets.time_series.cnns.CNN_HaEtAl_1D_Backbone
   minerva.models.nets.time_series.cnns.CNN_HaEtAl_2D
   minerva.models.nets.time_series.cnns.CNN_HaEtAl_2D_Backbone
   minerva.models.nets.time_series.cnns.CNN_PFF_2D
   minerva.models.nets.time_series.cnns.CNN_PF_2D
   minerva.models.nets.time_series.cnns.CNN_PF_Backbone
   minerva.models.nets.time_series.cnns.ZeroPadder2D


Module Contents
---------------

.. py:class:: CNN_HaEtAl_1D(input_shape = (6, 60), num_classes = 6, learning_rate = 0.001, *args, **kwargs)

   Bases: :py:obj:`minerva.models.nets.base.SimpleSupervisedModel`

   Simple pipeline for supervised models.

   This class implements a very common deep learning pipeline, composed of
   the following steps:

   1. Make a forward pass with the input data on the backbone model;
   2. Make a forward pass with the backbone's output on the fc model;
   3. Compute the loss between the output and the label data;
   4. Optimize the model (backbone and fc) parameters with respect to the
      loss.

   This reduces code duplication and makes it easier to implement new models,
   since usually only the backbone needs to change. More complex models that
   do not follow this pipeline should not inherit from this class.

   Note that, for this class, the input data is a tuple of tensors, where the
   first tensor is the input data and the second is the mask or label.

   Initialize the model with the backbone, fc, loss function and metrics.
   Metrics are used to evaluate the model during training, validation, testing
   or prediction. They are logged with the Lightning logger at the end of each
   epoch. Metrics should implement the `torchmetrics.Metric` interface.

   Parameters
   ----------
   backbone : torch.nn.Module
       The backbone model. Usually the encoder/decoder part of the model.
   fc : torch.nn.Module
       The fully connected model, usually used for classification tasks.
       Use `torch.nn.Identity()` if no fc model is needed.
   loss_fn : torch.nn.Module
       The function used to compute the loss.
   learning_rate : float, optional
       The learning rate for the Adam optimizer, by default 1e-3.
   flatten : bool, optional
       If `True`, the input data is flattened before being passed to the fc
       model, by default True.
   train_metrics : Dict[str, Metric], optional
       The metrics to be used during training, by default None.
   val_metrics : Dict[str, Metric], optional
       The metrics to be used during validation, by default None.
   test_metrics : Dict[str, Metric], optional
       The metrics to be used during testing, by default None.
   predict_metrics : Dict[str, Metric], optional
       The metrics to be used during prediction, by default None.

   .. py:method:: _calculate_fc_input_features(backbone, input_shape)

      Run a single forward pass with a random input to get the number of
      features after the convolutional layers.

      Parameters
      ----------
      backbone : torch.nn.Module
          The backbone of the network.
      input_shape : Tuple[int, int, int]
          The input shape of the network.

      Returns
      -------
      int
          The number of features after the convolutional layers.

   .. py:method:: _create_fc(input_features, num_classes)

   .. py:attribute:: fc_input_channels

   .. py:attribute:: input_shape
      :value: (6, 60)

   .. py:attribute:: num_classes
      :value: 6
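   A minimal usage sketch with the documented defaults. The constructor
   arguments come from the signature above; the ``(batch, *input_shape)``
   input layout is an assumption, not confirmed by this reference::

      import torch
      from minerva.models.nets.time_series.cnns import CNN_HaEtAl_1D

      # Documented defaults: 6 input channels, 60 time steps, 6 classes.
      model = CNN_HaEtAl_1D(input_shape=(6, 60), num_classes=6)

      # Assumed layout: one (6, 60) sample per batch entry.
      x = torch.randn(8, 6, 60)
      logits = model(x)  # expected: (8, num_classes)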
.. py:class:: CNN_HaEtAl_1D_Backbone(in_channels = 1)

   Bases: :py:obj:`torch.nn.Module`

   Base class for all neural network modules.

   Your models should also subclass this class.

   Modules can also contain other Modules, allowing them to be nested in a
   tree structure. You can assign the submodules as regular attributes::

       import torch.nn as nn
       import torch.nn.functional as F


       class Model(nn.Module):
           def __init__(self) -> None:
               super().__init__()
               self.conv1 = nn.Conv2d(1, 20, 5)
               self.conv2 = nn.Conv2d(20, 20, 5)

           def forward(self, x):
               x = F.relu(self.conv1(x))
               return F.relu(self.conv2(x))

   Submodules assigned in this way will be registered, and will also have
   their parameters converted when you call :meth:`to`, etc.

   .. note::
      As per the example above, an ``__init__()`` call to the parent class
      must be made before assignment on the child.

   :ivar training: Boolean representing whether this module is in training
                   or evaluation mode.
   :vartype training: bool

   Initialize internal Module state, shared by both nn.Module and ScriptModule.

   .. py:attribute:: backbone

   .. py:method:: forward(x)
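The ``_calculate_fc_input_features`` helpers documented in this module all
follow the same dummy-forward-pass idea. A standalone sketch of that idea
(the helper below is a hypothetical re-implementation, not minerva's code)::

    import torch
    import torch.nn as nn

    def fc_input_features(backbone: nn.Module, input_shape: tuple) -> int:
        """Size the fc head by running one dummy forward pass."""
        with torch.no_grad():
            out = backbone(torch.zeros(1, *input_shape))
        return int(out.view(1, -1).size(1))

    # Toy backbone: (1, 1, 6, 60) -> conv -> (1, 8, 4, 58) -> pool -> (1, 8, 2, 29).
    toy = nn.Sequential(nn.Conv2d(1, 8, 3), nn.ReLU(), nn.MaxPool2d(2))
    print(fc_input_features(toy, (1, 6, 60)))  # 8 * 2 * 29 = 464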
.. py:class:: CNN_HaEtAl_2D(pad_at = (3,), input_shape = (6, 60), num_classes = 6, learning_rate = 0.001, *args, **kwargs)

   Bases: :py:obj:`minerva.models.nets.base.SimpleSupervisedModel`

   Simple pipeline for supervised models.

   This class implements a very common deep learning pipeline, composed of
   the following steps:

   1. Make a forward pass with the input data on the backbone model;
   2. Make a forward pass with the backbone's output on the fc model;
   3. Compute the loss between the output and the label data;
   4. Optimize the model (backbone and fc) parameters with respect to the
      loss.

   This reduces code duplication and makes it easier to implement new models,
   since usually only the backbone needs to change. More complex models that
   do not follow this pipeline should not inherit from this class.

   Note that, for this class, the input data is a tuple of tensors, where the
   first tensor is the input data and the second is the mask or label.

   Initialize the model with the backbone, fc, loss function and metrics.
   Metrics are used to evaluate the model during training, validation, testing
   or prediction. They are logged with the Lightning logger at the end of each
   epoch. Metrics should implement the `torchmetrics.Metric` interface.

   Parameters
   ----------
   backbone : torch.nn.Module
       The backbone model. Usually the encoder/decoder part of the model.
   fc : torch.nn.Module
       The fully connected model, usually used for classification tasks.
       Use `torch.nn.Identity()` if no fc model is needed.
   loss_fn : torch.nn.Module
       The function used to compute the loss.
   learning_rate : float, optional
       The learning rate for the Adam optimizer, by default 1e-3.
   flatten : bool, optional
       If `True`, the input data is flattened before being passed to the fc
       model, by default True.
   train_metrics : Dict[str, Metric], optional
       The metrics to be used during training, by default None.
   val_metrics : Dict[str, Metric], optional
       The metrics to be used during validation, by default None.
   test_metrics : Dict[str, Metric], optional
       The metrics to be used during testing, by default None.
   predict_metrics : Dict[str, Metric], optional
       The metrics to be used during prediction, by default None.

   .. py:method:: _calculate_fc_input_features(backbone, input_shape)

      Run a single forward pass with a random input to get the number of
      features after the convolutional layers.

      Parameters
      ----------
      backbone : torch.nn.Module
          The backbone of the network.
      input_shape : Tuple[int, int, int]
          The input shape of the network.

      Returns
      -------
      int
          The number of features after the convolutional layers.

   .. py:method:: _create_fc(input_features, num_classes)

   .. py:attribute:: fc_input_channels

   .. py:attribute:: input_shape
      :value: (6, 60)

   .. py:attribute:: num_classes
      :value: 6

   .. py:attribute:: pad_at
      :value: (3,)

.. py:class:: CNN_HaEtAl_2D_Backbone(input_shape, pad_at = 3, first_kernel_size = 4)

   Bases: :py:obj:`torch.nn.Module`

   Base class for all neural network modules.

   Your models should also subclass this class.

   Modules can also contain other Modules, allowing them to be nested in a
   tree structure. You can assign the submodules as regular attributes::

       import torch.nn as nn
       import torch.nn.functional as F


       class Model(nn.Module):
           def __init__(self) -> None:
               super().__init__()
               self.conv1 = nn.Conv2d(1, 20, 5)
               self.conv2 = nn.Conv2d(20, 20, 5)

           def forward(self, x):
               x = F.relu(self.conv1(x))
               return F.relu(self.conv2(x))

   Submodules assigned in this way will be registered, and will also have
   their parameters converted when you call :meth:`to`, etc.

   .. note::
      As per the example above, an ``__init__()`` call to the parent class
      must be made before assignment on the child.

   :ivar training: Boolean representing whether this module is in training
                   or evaluation mode.
   :vartype training: bool

   Initialize internal Module state, shared by both nn.Module and ScriptModule.

   .. py:attribute:: backbone

   .. py:method:: forward(x)

.. py:class:: CNN_PFF_2D(*args, **kwargs)

   Bases: :py:obj:`CNN_PF_2D`

   Simple pipeline for supervised models.

   This class implements a very common deep learning pipeline, composed of
   the following steps:

   1. Make a forward pass with the input data on the backbone model;
   2. Make a forward pass with the backbone's output on the fc model;
   3. Compute the loss between the output and the label data;
   4. Optimize the model (backbone and fc) parameters with respect to the
      loss.

   This reduces code duplication and makes it easier to implement new models,
   since usually only the backbone needs to change. More complex models that
   do not follow this pipeline should not inherit from this class.

   Note that, for this class, the input data is a tuple of tensors, where the
   first tensor is the input data and the second is the mask or label.

   Initialize the model with the backbone, fc, loss function and metrics.
   Metrics are used to evaluate the model during training, validation, testing
   or prediction. They are logged with the Lightning logger at the end of each
   epoch. Metrics should implement the `torchmetrics.Metric` interface.

   Parameters
   ----------
   backbone : torch.nn.Module
       The backbone model. Usually the encoder/decoder part of the model.
   fc : torch.nn.Module
       The fully connected model, usually used for classification tasks.
       Use `torch.nn.Identity()` if no fc model is needed.
   loss_fn : torch.nn.Module
       The function used to compute the loss.
   learning_rate : float, optional
       The learning rate for the Adam optimizer, by default 1e-3.
   flatten : bool, optional
       If `True`, the input data is flattened before being passed to the fc
       model, by default True.
   train_metrics : Dict[str, Metric], optional
       The metrics to be used during training, by default None.
   val_metrics : Dict[str, Metric], optional
       The metrics to be used during validation, by default None.
   test_metrics : Dict[str, Metric], optional
       The metrics to be used during testing, by default None.
   predict_metrics : Dict[str, Metric], optional
       The metrics to be used during prediction, by default None.
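   A minimal sketch, assuming ``CNN_PFF_2D`` forwards its keyword arguments
   to ``CNN_PF_2D`` (the signature above only shows ``*args, **kwargs``)::

      import torch
      from minerva.models.nets.time_series.cnns import CNN_PFF_2D

      model = CNN_PFF_2D(input_shape=(1, 6, 60), num_classes=6)
      x = torch.randn(8, 1, 6, 60)  # layout taken from CNN_PF_2D's default input_shape
      logits = model(x)             # expected: (8, num_classes)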
.. py:class:: CNN_PF_2D(pad_at = 3, input_shape = (1, 6, 60), out_channels = 16, num_classes = 6, learning_rate = 0.001, include_middle = False, *args, **kwargs)

   Bases: :py:obj:`minerva.models.nets.base.SimpleSupervisedModel`

   Simple pipeline for supervised models.

   This class implements a very common deep learning pipeline, composed of
   the following steps:

   1. Make a forward pass with the input data on the backbone model;
   2. Make a forward pass with the backbone's output on the fc model;
   3. Compute the loss between the output and the label data;
   4. Optimize the model (backbone and fc) parameters with respect to the
      loss.

   This reduces code duplication and makes it easier to implement new models,
   since usually only the backbone needs to change. More complex models that
   do not follow this pipeline should not inherit from this class.

   Note that, for this class, the input data is a tuple of tensors, where the
   first tensor is the input data and the second is the mask or label.

   Initialize the model with the backbone, fc, loss function and metrics.
   Metrics are used to evaluate the model during training, validation, testing
   or prediction. They are logged with the Lightning logger at the end of each
   epoch. Metrics should implement the `torchmetrics.Metric` interface.

   Parameters
   ----------
   backbone : torch.nn.Module
       The backbone model. Usually the encoder/decoder part of the model.
   fc : torch.nn.Module
       The fully connected model, usually used for classification tasks.
       Use `torch.nn.Identity()` if no fc model is needed.
   loss_fn : torch.nn.Module
       The function used to compute the loss.
   learning_rate : float, optional
       The learning rate for the Adam optimizer, by default 1e-3.
   flatten : bool, optional
       If `True`, the input data is flattened before being passed to the fc
       model, by default True.
   train_metrics : Dict[str, Metric], optional
       The metrics to be used during training, by default None.
   val_metrics : Dict[str, Metric], optional
       The metrics to be used during validation, by default None.
   test_metrics : Dict[str, Metric], optional
       The metrics to be used during testing, by default None.
   predict_metrics : Dict[str, Metric], optional
       The metrics to be used during prediction, by default None.

   .. py:method:: _calculate_fc_input_features(backbone, input_shape)

      Run a single forward pass with a random input to get the number of
      features after the convolutional layers.

      Parameters
      ----------
      backbone : torch.nn.Module
          The backbone of the network.
      input_shape : Tuple[int, int, int]
          The input shape of the network.

      Returns
      -------
      int
          The number of features after the convolutional layers.

   .. py:method:: _create_fc(input_features, num_classes)

   .. py:attribute:: fc_input_channels

   .. py:attribute:: input_shape
      :value: (1, 6, 60)

   .. py:attribute:: num_classes
      :value: 6

   .. py:attribute:: out_channels
      :value: 16

   .. py:attribute:: pad_at
      :value: 3
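   A minimal usage sketch with the documented defaults; the input layout
   ``(batch, *input_shape)`` and the reading of ``pad_at`` are assumptions::

      import torch
      from minerva.models.nets.time_series.cnns import CNN_PF_2D

      # pad_at=3 presumably controls where ZeroPadder2D inserts zero rows
      # (interpretation assumed; only the default value is documented).
      model = CNN_PF_2D(pad_at=3, input_shape=(1, 6, 60), num_classes=6)
      x = torch.randn(8, 1, 6, 60)  # one (1, 6, 60) sample per batch entry
      logits = model(x)             # expected: (8, num_classes)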
.. py:class:: CNN_PF_Backbone(in_channels = 1, pad_at = 3, out_channels = 16, include_middle = False, permute = False, flatten = False)

   Bases: :py:obj:`torch.nn.Module`

   Base class for all neural network modules.

   Your models should also subclass this class.

   Modules can also contain other Modules, allowing them to be nested in a
   tree structure. You can assign the submodules as regular attributes::

       import torch.nn as nn
       import torch.nn.functional as F


       class Model(nn.Module):
           def __init__(self) -> None:
               super().__init__()
               self.conv1 = nn.Conv2d(1, 20, 5)
               self.conv2 = nn.Conv2d(20, 20, 5)

           def forward(self, x):
               x = F.relu(self.conv1(x))
               return F.relu(self.conv2(x))

   Submodules assigned in this way will be registered, and will also have
   their parameters converted when you call :meth:`to`, etc.

   .. note::
      As per the example above, an ``__init__()`` call to the parent class
      must be made before assignment on the child.

   :ivar training: Boolean representing whether this module is in training
                   or evaluation mode.
   :vartype training: bool

   Initialize internal Module state, shared by both nn.Module and ScriptModule.

   .. py:attribute:: first_pad_size
      :value: 2

   .. py:attribute:: first_padder

   .. py:attribute:: flatten
      :value: False

   .. py:method:: forward(x)

   .. py:attribute:: in_channels
      :value: 1

   .. py:attribute:: include_middle
      :value: False

   .. py:attribute:: lower_part

   .. py:attribute:: out_channels
      :value: 16

   .. py:attribute:: pad_at
      :value: 3

   .. py:attribute:: permute
      :value: False

   .. py:attribute:: shared_part

   .. py:attribute:: upper_part

.. py:class:: ZeroPadder2D(pad_at, padding_size)

   Bases: :py:obj:`torch.nn.Module`

   Base class for all neural network modules.

   Your models should also subclass this class.

   Modules can also contain other Modules, allowing them to be nested in a
   tree structure. You can assign the submodules as regular attributes::

       import torch.nn as nn
       import torch.nn.functional as F


       class Model(nn.Module):
           def __init__(self) -> None:
               super().__init__()
               self.conv1 = nn.Conv2d(1, 20, 5)
               self.conv2 = nn.Conv2d(20, 20, 5)

           def forward(self, x):
               x = F.relu(self.conv1(x))
               return F.relu(self.conv2(x))

   Submodules assigned in this way will be registered, and will also have
   their parameters converted when you call :meth:`to`, etc.

   .. note::
      As per the example above, an ``__init__()`` call to the parent class
      must be made before assignment on the child.

   :ivar training: Boolean representing whether this module is in training
                   or evaluation mode.
   :vartype training: bool

   Initialize internal Module state, shared by both nn.Module and ScriptModule.

   .. py:method:: __repr__()

   .. py:method:: __str__()

   .. py:method:: forward(x)

   .. py:attribute:: pad_at

   .. py:attribute:: padding_size
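   A hedged usage sketch; which axis grows and where the zeros land depend on
   the implementation, so the shapes below are assumptions::

      import torch
      from minerva.models.nets.time_series.cnns import ZeroPadder2D

      # ``pad_at=(3,)`` mirrors CNN_HaEtAl_2D's default; assumed to insert
      # ``padding_size`` zero rows at row index 3 of the sensor axis.
      padder = ZeroPadder2D(pad_at=(3,), padding_size=2)
      x = torch.randn(8, 1, 6, 60)
      y = padder(x)
      print(y.shape)  # e.g. (8, 1, 8, 60) under the assumption above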