minerva.models.nets.resnet_1d

Classes

ConvolutionalBlock

ResNet1DBase

Simple pipeline for supervised models.

ResNet1D_8

Simple pipeline for supervised models.

ResNetBlock

ResNetSE1D_5

Simple pipeline for supervised models.

ResNetSE1D_8

Simple pipeline for supervised models.

ResNetSEBlock

SqueezeAndExcitation1D

_ResNet1D

Module Contents

class minerva.models.nets.resnet_1d.ConvolutionalBlock(in_channels, activation_cls=None)

Bases: torch.nn.Module

Parameters:
  • in_channels (int)

  • activation_cls (torch.nn.Module)

forward(x)

class minerva.models.nets.resnet_1d.ResNet1DBase(resnet_block_cls=ResNetBlock, activation_cls=torch.nn.ReLU, input_shape=(6, 60), num_classes=6, num_residual_blocks=5, reduction_ratio=2, learning_rate=0.001)

Bases: minerva.models.nets.base.SimpleSupervisedModel

Simple pipeline for supervised models.

This class implements a very common deep learning pipeline, composed of the following steps:

  1. Run a forward pass of the input data through the backbone model;

  2. Run a forward pass of the backbone output through the fc model;

  3. Compute the loss between the model output and the labels;

  4. Optimize the model parameters (backbone and fc) with respect to the loss.

This reduces code duplication for autoencoder models and makes it easier to implement new models by changing only the backbone model. More complex models that do not follow this pipeline should not inherit from this class. Note that for this class the input data is a tuple of tensors, where the first tensor is the input data and the second is the mask or label.

Initialize the model with the backbone, fc, loss function, and metrics. Metrics are used to evaluate the model during training, validation, testing, or prediction, and are logged with the Lightning logger at the end of each epoch. Metrics should implement the torchmetrics.Metric interface. A minimal usage sketch is given after the parameter listings below.

Parameters

backbone : torch.nn.Module
The backbone model, usually the encoder/decoder part of the model.

fc : torch.nn.Module
The fully connected model, usually used for classification tasks. Use torch.nn.Identity() if no fc model is needed.

loss_fn : torch.nn.Module
The function used to compute the loss.

learning_rate : float, optional
The learning rate for the Adam optimizer, by default 1e-3.

flatten : bool, optional
If True, the input data is flattened before being passed to the fc model, by default True.

train_metrics : Dict[str, Metric], optional
The metrics to be used during training, by default None.

val_metrics : Dict[str, Metric], optional
The metrics to be used during validation, by default None.

test_metrics : Dict[str, Metric], optional
The metrics to be used during testing, by default None.

predict_metrics : Dict[str, Metric], optional
The metrics to be used during prediction, by default None.

_calculate_fc_input_features(backbone, input_shape)

Run a single forward pass with a random input to get the number of features after the convolutional layers.

Parameters

backbone : torch.nn.Module
The backbone of the network.

input_shape : Tuple[int, int, int]
The input shape of the network.

Returns

int

The number of features after the convolutional layers.

Parameters:
  • backbone (torch.nn.Module)

  • input_shape (Tuple[int, int, int])

Return type:

int
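
For illustration, the random-forward-pass trick this method describes can be sketched as below. This is a standalone example of the approach, not the method's actual implementation; the helper name, the prepended batch dimension of one, and the toy backbone are assumptions for the example.

    import torch
    import torch.nn as nn

    def count_fc_input_features(backbone: nn.Module, input_shape) -> int:
        """Run one forward pass with a random input and count the flattened features."""
        backbone.eval()
        with torch.no_grad():
            dummy = torch.randn(1, *input_shape)   # a single random sample
            out = backbone(dummy)
        return out.reshape(1, -1).shape[1]         # features after the convolutional layers

    # e.g. a toy 1-D backbone with input_shape = (6, 60)
    toy = nn.Sequential(nn.Conv1d(6, 16, kernel_size=3), nn.ReLU(), nn.AdaptiveAvgPool1d(8))
    print(count_fc_input_features(toy, (6, 60)))   # -> 16 * 8 = 128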

Parameters:
  • resnet_block_cls (type)

  • activation_cls (type)

  • input_shape (Tuple[int, int])

  • num_classes (int)

  • num_residual_blocks (int)

  • reduction_ratio (int)

  • learning_rate (float)
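
As a rough, end-to-end illustration of the pipeline described above, the sketch below trains one of the predefined variants with a Lightning Trainer. It assumes the lightning >= 2.0 import style and that ResNet1D_8 forwards its keyword arguments to ResNet1DBase; the toy random dataset exists only for the example.

    import torch
    import lightning as L
    from torch.utils.data import DataLoader, TensorDataset

    from minerva.models.nets.resnet_1d import ResNet1D_8

    # Predefined variant; input_shape is (channels, time steps), matching the defaults above.
    model = ResNet1D_8(input_shape=(6, 60), num_classes=6, learning_rate=1e-3)

    # Each batch is an (input, label) tuple, as noted in the class description.
    dataset = TensorDataset(torch.randn(64, 6, 60), torch.randint(0, 6, (64,)))
    loader = DataLoader(dataset, batch_size=8)

    # Steps 1-4 of the pipeline (backbone forward, fc forward, loss, optimization)
    # run inside the LightningModule's training loop.
    trainer = L.Trainer(max_epochs=1, accelerator="cpu", logger=False)
    trainer.fit(model, loader)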

class minerva.models.nets.resnet_1d.ResNet1D_8(*args, **kwargs)

Bases: ResNet1DBase

Simple pipeline for supervised models.

This class reuses the supervised pipeline described in ResNet1DBase; see that class for the pipeline steps, constructor parameters, and the usage sketch.

class minerva.models.nets.resnet_1d.ResNetBlock(in_channels=64, activation_cls=torch.nn.ReLU)

Bases: torch.nn.Module

Parameters:
  • in_channels (int)

  • activation_cls (torch.nn.Module)

forward(x)

class minerva.models.nets.resnet_1d.ResNetSE1D_5(*args, **kwargs)

Bases: ResNet1DBase

Simple pipeline for supervised models.

This class reuses the supervised pipeline described in ResNet1DBase; see that class for the pipeline steps, constructor parameters, and the usage sketch.

class minerva.models.nets.resnet_1d.ResNetSE1D_8(*args, **kwargs)

Bases: ResNet1DBase

Simple pipeline for supervised models.

This class reuses the supervised pipeline described in ResNet1DBase; see that class for the pipeline steps, constructor parameters, and the usage sketch.

class minerva.models.nets.resnet_1d.ResNetSEBlock(*args, **kwargs)

Bases: ResNetBlock

class minerva.models.nets.resnet_1d.SqueezeAndExcitation1D(in_channels, reduction_ratio=2)

Bases: torch.nn.Module

Parameters:
  • in_channels (int)

  • reduction_ratio (int)

forward(input_tensor)
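
No description is given for this block, but its constructor arguments match the standard squeeze-and-excitation pattern. As a hedged sketch only, a typical 1-D SE block with these parameters looks like the following; minerva's actual implementation may differ in its details.

    import torch
    import torch.nn as nn

    class SE1DSketch(nn.Module):
        """Illustrative 1-D squeeze-and-excitation block (not minerva's implementation)."""

        def __init__(self, in_channels: int, reduction_ratio: int = 2):
            super().__init__()
            hidden = in_channels // reduction_ratio
            self.fc = nn.Sequential(
                nn.Linear(in_channels, hidden),
                nn.ReLU(),
                nn.Linear(hidden, in_channels),
                nn.Sigmoid(),
            )

        def forward(self, input_tensor: torch.Tensor) -> torch.Tensor:
            # Squeeze: global average pool over the temporal axis -> (batch, channels)
            weights = self.fc(input_tensor.mean(dim=-1))
            # Excite: rescale each channel of the input
            return input_tensor * weights.unsqueeze(-1)
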
class minerva.models.nets.resnet_1d._ResNet1D(input_shape, residual_block_cls=ResNetBlock, activation_cls=torch.nn.ReLU, num_residual_blocks=5, reduction_ratio=2)

Bases: torch.nn.Module

Parameters:
  • input_shape (Tuple[int, int])

  • residual_block_cls (type)

  • activation_cls (torch.nn.Module)

  • num_residual_blocks (int)

  • reduction_ratio (int)

forward(x)