minerva.models.nets.time_series.cnns

Classes

CNN_HaEtAl_1D: A modular Lightning model wrapper for supervised learning tasks.

CNN_HaEtAl_1D_Backbone: Convolutional backbone (torch.nn.Module) for CNN_HaEtAl_1D.

CNN_HaEtAl_2D: A modular Lightning model wrapper for supervised learning tasks.

CNN_HaEtAl_2D_Backbone: Convolutional backbone (torch.nn.Module) for CNN_HaEtAl_2D.

CNN_PFF_2D: A modular Lightning model wrapper for supervised learning tasks.

CNN_PF_2D: A modular Lightning model wrapper for supervised learning tasks.

CNN_PF_Backbone: Convolutional backbone (torch.nn.Module) for CNN_PF_2D and CNN_PFF_2D.

ZeroPadder2D: Zero-padding utility module (torch.nn.Module).

Module Contents

class minerva.models.nets.time_series.cnns.CNN_HaEtAl_1D(input_shape=(6, 60), num_classes=6, learning_rate=0.001, *args, **kwargs)[source]

Bases: minerva.models.nets.base.SimpleSupervisedModel

A modular Lightning model wrapper for supervised learning tasks.

This class enables the construction of supervised models by combining a backbone (feature extractor), an optional adapter, and a fully connected (FC) head. It provides a clean interface for setting up custom training, validation, and testing pipelines with pluggable loss functions, metrics, optimizers, and learning rate schedulers.

The architecture is structured as follows:

    Backbone Model
          |
          v
    Adapter (Optional)
          |
          |  (Flatten, if needed)
          v
    Fully Connected Head
          |
          v
    Loss Function

Training and validation steps proceed as follows (a minimal sketch is shown after the notes below):

  1. Forward pass input through the backbone.

  2. Pass through adapter (if provided).

  3. Flatten the output (if flatten is True) before the FC head.

  4. Forward through the FC head.

  5. Compute loss with respect to targets.

  6. Backpropagate and update parameters.

  7. Compute metrics and log them.

  8. Return loss. train_loss, val_loss, and test_loss are always logged, along with any additional metrics specified in the train_metrics, val_metrics, and test_metrics dictionaries.

This wrapper makes it quick to set up supervised models for tasks such as image classification, object detection, and segmentation. It is designed to be flexible and extensible: the backbone, adapter, and FC head can each be swapped out as needed. The model is built for use with PyTorch Lightning and is compatible with its training loop.

Note: More complex architectures that do not follow the structure above should not inherit from this class.

Note: Input batches must be tuples (input_tensor, target_tensor).
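Conceptually, the per-step pipeline above can be sketched as follows. This is an illustration only, not the actual SimpleSupervisedModel code; the attribute names (backbone, adapter, flatten, fc, loss_fn) are assumed to mirror the constructor parameters documented below.

def supervised_step_sketch(model, batch):
    # Illustrative sketch of the documented pipeline; the real logic lives in
    # minerva.models.nets.base.SimpleSupervisedModel.
    x, y = batch                                  # (input_tensor, target_tensor)
    features = model.backbone(x)                  # 1. forward through the backbone
    if model.adapter is not None:                 # 2. optional adapter
        features = model.adapter(features)
    if model.flatten:                             # 3. flatten before the FC head
        features = features.flatten(start_dim=1)
    logits = model.fc(features)                   # 4. fully connected head
    loss = model.loss_fn(logits, y)               # 5. loss w.r.t. targets
    return loss                                   # 6-8. backprop and metric logging are
                                                  #      handled by Lightning / the wrapper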

Initializes the supervised model with training components and configs.

Parameters

backbone : torch.nn.Module or LoadableModule

The backbone (feature extractor) model.

fc : torch.nn.Module or LoadableModule

The fully connected head. Use nn.Identity() if not required.

loss_fn : torch.nn.Module

Loss function to optimize during training.

adapter : Callable, optional

Function to transform backbone outputs before feeding into fc.

learning_rate : float, default=1e-3

Learning rate used for optimization.

flatten : bool, default=True

If True, flattens backbone outputs before fc.

train_metrics : dict, optional

TorchMetrics dictionary for training evaluation.

val_metrics : dict, optional

TorchMetrics dictionary for validation evaluation.

test_metrics : dict, optional

TorchMetrics dictionary for test evaluation.

freeze_backbone : bool, default=False

If True, backbone parameters are frozen during training.

optimizer : type

Optimizer class to be instantiated. By default, it is set to torch.optim.Adam. Should be a subclass of torch.optim.Optimizer (e.g., torch.optim.SGD).

optimizer_kwargs : dict, optional

Additional kwargs passed to the optimizer constructor.

lr_scheduler : type, optional

Learning rate scheduler class to be instantiated. By default, it is set to None, which means no scheduler will be used. Should be a subclass of torch.optim.lr_scheduler.LRScheduler (e.g., torch.optim.lr_scheduler.StepLR).

lr_scheduler_kwargs : dict, optional

Additional kwargs passed to the scheduler constructor.
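For orientation, a minimal usage sketch for this class follows. The Lightning import alias and the train_loader variable are placeholders (any DataLoader yielding (input_tensor, target_tensor) batches shaped to match input_shape will do); this is not an excerpt from the library.

import lightning as L  # or: import pytorch_lightning as L, depending on your setup
from minerva.models.nets.time_series.cnns import CNN_HaEtAl_1D

# Defaults made explicit: 6 input channels, 60 time steps, 6 output classes.
model = CNN_HaEtAl_1D(input_shape=(6, 60), num_classes=6, learning_rate=1e-3)

# `train_loader` is a placeholder for a DataLoader that yields
# (input_tensor, target_tensor) tuples matching `input_shape`.
trainer = L.Trainer(max_epochs=10)
trainer.fit(model, train_dataloaders=train_loader)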

_calculate_fc_input_features(backbone, input_shape)[source]

Run a single forward pass with a random input to get the number of features after the convolutional layers.

Parameters

backbone : torch.nn.Module

The backbone of the network.

input_shape : Tuple[int, int, int]

The input shape of the network.

Returns

int

The number of features after the convolutional layers.

Parameters:
  • backbone (torch.nn.Module)

  • input_shape (Tuple[int, int, int])

Return type:

int
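This helper relies on a standard trick: run a throwaway tensor through the feature extractor once and count the flattened output features. A self-contained sketch of that idea (not the library's implementation) is shown below.

import torch

def count_fc_input_features(backbone: torch.nn.Module, input_shape) -> int:
    # Push a random batch of size 1 through the backbone and count how many
    # features remain after flattening; the result sizes the first FC layer.
    with torch.no_grad():
        dummy = torch.zeros(1, *input_shape)
        out = backbone(dummy)
    return out.reshape(1, -1).shape[1]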

_create_fc(input_features, num_classes)[source]
Parameters:
  • input_features (int)

  • num_classes (int)

Return type:

torch.nn.Module

fc_input_channels
input_shape = (6, 60)
num_classes = 6
Parameters:
  • input_shape (Union[Tuple[int, int], Tuple[int, int, int]])

  • num_classes (int)

  • learning_rate (float)

class minerva.models.nets.time_series.cnns.CNN_HaEtAl_1D_Backbone(in_channels=1)[source]

Bases: torch.nn.Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will also have their parameters converted when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Whether this module is in training or evaluation mode.

Parameters:

in_channels (int)

Initialize internal Module state, shared by both nn.Module and ScriptModule.

backbone
forward(x)[source]
class minerva.models.nets.time_series.cnns.CNN_HaEtAl_2D(pad_at=(3,), input_shape=(6, 60), num_classes=6, learning_rate=0.001, *args, **kwargs)[source]

Bases: minerva.models.nets.base.SimpleSupervisedModel

A modular Lightning model wrapper for supervised learning tasks.

This class follows the SimpleSupervisedModel structure described under CNN_HaEtAl_1D above: a backbone (feature extractor), an optional adapter, a fully connected head, and a loss function, trained with PyTorch Lightning on batches of (input_tensor, target_tensor) tuples. The constructor parameters inherited from SimpleSupervisedModel (backbone, fc, loss_fn, adapter, learning_rate, flatten, train_metrics, val_metrics, test_metrics, freeze_backbone, optimizer, optimizer_kwargs, lr_scheduler, lr_scheduler_kwargs) are documented there.

_calculate_fc_input_features(backbone, input_shape)[source]

Run a single forward pass with a random input to get the number of features after the convolutional layers.

Parameters

backbone : torch.nn.Module

The backbone of the network.

input_shape : Tuple[int, int, int]

The input shape of the network.

Returns

int

The number of features after the convolutional layers.

Parameters:
  • backbone (torch.nn.Module)

  • input_shape (Tuple[int, int, int])

Return type:

int

_create_fc(input_features, num_classes)[source]
Parameters:
  • input_features (int)

  • num_classes (int)

Return type:

torch.nn.Module

fc_input_channels
input_shape = (6, 60)
num_classes = 6
pad_at = (3,)
Parameters:
  • pad_at (Union[int, Tuple[int]])

  • input_shape (Union[Tuple[int, int], Tuple[int, int, int]])

  • num_classes (int)

  • learning_rate (float)
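A minimal construction sketch for this class, with the defaults made explicit. Treating pad_at as controlling where the backbone inserts zero padding is an assumption based on the ZeroPadder2D entry below, not a statement from the library.

from minerva.models.nets.time_series.cnns import CNN_HaEtAl_2D

# `pad_at` is forwarded to the 2D backbone; it presumably controls where zero
# padding is inserted (cf. ZeroPadder2D below).
model = CNN_HaEtAl_2D(pad_at=(3,), input_shape=(6, 60), num_classes=6, learning_rate=1e-3)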

class minerva.models.nets.time_series.cnns.CNN_HaEtAl_2D_Backbone(input_shape, pad_at=3, first_kernel_size=4)[source]

Bases: torch.nn.Module

Convolutional backbone for CNN_HaEtAl_2D. This is a standard torch.nn.Module; see CNN_HaEtAl_1D_Backbone above for the generic nn.Module notes (submodule registration, the training attribute, and the requirement to call the parent __init__() before assigning submodules).

Parameters:
  • input_shape (Tuple[int, int, int])

  • pad_at (int)

  • first_kernel_size (int)

Initialize internal Module state, shared by both nn.Module and ScriptModule.

backbone
forward(x)[source]
class minerva.models.nets.time_series.cnns.CNN_PFF_2D(*args, **kwargs)[source]

Bases: CNN_PF_2D

A modular Lightning model wrapper for supervised learning tasks.

This class subclasses CNN_PF_2D (documented below) and forwards its arguments to it. The underlying SimpleSupervisedModel pipeline and inherited constructor parameters are described under CNN_HaEtAl_1D above.

class minerva.models.nets.time_series.cnns.CNN_PF_2D(pad_at=3, input_shape=(1, 6, 60), out_channels=16, num_classes=6, learning_rate=0.001, include_middle=False, *args, **kwargs)[source]

Bases: minerva.models.nets.base.SimpleSupervisedModel

A modular Lightning model wrapper for supervised learning tasks.

Like the other wrappers in this module, this class follows the SimpleSupervisedModel structure described under CNN_HaEtAl_1D above (backbone, optional adapter, fully connected head, loss function), and the constructor parameters inherited from SimpleSupervisedModel are documented there.

_calculate_fc_input_features(backbone, input_shape)[source]

Run a single forward pass with a random input to get the number of features after the convolutional layers.

Parameters

backbone : torch.nn.Module

The backbone of the network.

input_shape : Tuple[int, int, int]

The input shape of the network.

Returns

int

The number of features after the convolutional layers.

Parameters:
  • backbone (torch.nn.Module)

  • input_shape (Tuple[int, int, int])

Return type:

int

_create_fc(input_features, num_classes)[source]
Parameters:
  • input_features (int)

  • num_classes (int)

Return type:

torch.nn.Module

fc_input_channels
input_shape = (1, 6, 60)
num_classes = 6
out_channels = 16
pad_at = 3
Parameters:
  • pad_at (int)

  • input_shape (Tuple[int, int, int])

  • out_channels (int)

  • num_classes (int)

  • learning_rate (float)

  • include_middle (bool)
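A minimal construction sketch with the defaults made explicit. Reading include_middle as toggling an extra middle branch in the backbone is an assumption based on the CNN_PF_Backbone attributes below.

from minerva.models.nets.time_series.cnns import CNN_PF_2D

model = CNN_PF_2D(
    pad_at=3,
    input_shape=(1, 6, 60),
    out_channels=16,
    num_classes=6,
    include_middle=False,  # assumed to toggle an extra middle branch in CNN_PF_Backbone
)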

class minerva.models.nets.time_series.cnns.CNN_PF_Backbone(in_channels=1, pad_at=3, out_channels=16, include_middle=False, permute=False, flatten=False)[source]

Bases: torch.nn.Module

Convolutional backbone for CNN_PF_2D. This is a standard torch.nn.Module; see CNN_HaEtAl_1D_Backbone above for the generic nn.Module notes.

Parameters:
  • in_channels (int)

  • pad_at (int)

  • out_channels (int)

  • include_middle (bool)

  • permute (bool)

  • flatten (bool)

Initialize internal Module state, shared by both nn.Module and ScriptModule.

first_pad_size = 2
first_padder
flatten = False
forward(x)[source]
in_channels = 1
include_middle = False
lower_part
out_channels = 16
pad_at = 3
permute = False
shared_part
upper_part
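The attribute listing above (first_padder, upper_part, shared_part, lower_part) suggests a multi-branch design. A small sketch that builds the backbone with its defaults and lists the registered submodules:

from minerva.models.nets.time_series.cnns import CNN_PF_Backbone

backbone = CNN_PF_Backbone(
    in_channels=1, pad_at=3, out_channels=16, include_middle=False
)

# Registered submodules can be inspected like any nn.Module.
for name, child in backbone.named_children():
    print(name, type(child).__name__)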
class minerva.models.nets.time_series.cnns.ZeroPadder2D(pad_at, padding_size)[source]

Bases: torch.nn.Module

Zero-padding utility module. This is a standard torch.nn.Module; see CNN_HaEtAl_1D_Backbone above for the generic nn.Module notes.

Parameters:
  • pad_at (Tuple[int])

  • padding_size (int)

Initialize internal Module state, shared by both nn.Module and ScriptModule.

__repr__()[source]
Return type:

str

__str__()[source]
Return type:

str

forward(x)[source]
pad_at
padding_size
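A minimal construction sketch; the argument values mirror those used by CNN_PF_Backbone above (pad_at=3, first_pad_size=2), and the exact padding behavior is defined in forward().

from minerva.models.nets.time_series.cnns import ZeroPadder2D

padder = ZeroPadder2D(pad_at=(3,), padding_size=2)
print(padder)  # __repr__ is overridden, so this prints the padder's own description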