minerva.models.nets.image.setr

Classes

SETR_PUP

SETR model with PUP head for image segmentation.

_SETRMLAHead

Multi-level feature aggregation head of SETR (as in https://arxiv.org/pdf/2012.15840.pdf).

_SETRUPHead

Naive upsampling head and Progressive upsampling head of SETR

_SetR_PUP

Base class for all neural network modules.

Module Contents

class minerva.models.nets.image.setr.SETR_PUP(image_size=512, patch_size=16, num_layers=24, num_heads=16, hidden_dim=1024, mlp_dim=4096, encoder_dropout=0.1, num_classes=1000, norm_layer=None, decoder_channels=256, num_convs=4, up_scale=2, kernel_size=3, align_corners=False, decoder_dropout=0.1, conv_norm=None, conv_act=None, interpolate_mode='bilinear', loss_fn=None, optimizer_type=None, optimizer_params=None, train_metrics=None, val_metrics=None, test_metrics=None, aux_output=True, aux_output_layers=None, aux_weights=None, load_backbone_path=None, freeze_backbone_on_load=True, learning_rate=0.001, loss_weights=None, original_resolution=None, head_lr_factor=1.0, test_engine=None)[source]

Bases: lightning.pytorch.LightningModule

SETR model with PUP head for image segmentation.

Methods

forward(x: torch.Tensor) -> torch.Tensor

Forward pass of the model.

_compute_metrics(y_hat: torch.Tensor, y: torch.Tensor, step_name: str)

Compute metrics for the given step.

_loss_func(y_hat: Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]], y: torch.Tensor) -> torch.Tensor

Calculate the loss between the output and the input data.

_single_step(batch: torch.Tensor, batch_idx: int, step_name: str)

Perform a single step of the training/validation loop.

training_step(batch: torch.Tensor, batch_idx: int)

Perform a single training step.

validation_step(batch: torch.Tensor, batch_idx: int)

Perform a single validation step.

test_step(batch: torch.Tensor, batch_idx: int)

Perform a single test step.

predict_step(batch: torch.Tensor, batch_idx: int, dataloader_idx: Optional[int] = None)

Perform a single prediction step.

load_backbone(path: str, freeze: bool = False)

Load a pre-trained backbone.

configure_optimizers()

Configure the optimizer for the model.

create_from_dict(config: Dict) -> "SETR_PUP"

Create an instance of SETR_PUP from a configuration dictionary.

Initialize the SETR model with Progressive Upsampling Head.

Parameters

image_size : Union[int, Tuple[int, int]], optional

Size of the input image, by default 512.

patch_size : int, optional

Size of the patches to be extracted from the input image, by default 16.

num_layers : int, optional

Number of transformer layers, by default 24.

num_heads : int, optional

Number of attention heads, by default 16.

hidden_dim : int, optional

Dimension of the hidden layer, by default 1024.

mlp_dim : int, optional

Dimension of the MLP layer, by default 4096.

encoder_dropout : float, optional

Dropout rate for the encoder, by default 0.1.

num_classes : int, optional

Number of output classes, by default 1000.

norm_layer : Optional[nn.Module], optional

Normalization layer, by default None.

decoder_channels : int, optional

Number of channels in the decoder, by default 256.

num_convs : int, optional

Number of convolutional layers in the decoder, by default 4.

up_scale : int, optional

Upscaling factor for the decoder, by default 2.

kernel_size : int, optional

Kernel size for the convolutional layers, by default 3.

align_corners : bool, optional

Whether to align corners when interpolating, by default False.

decoder_dropout : float, optional

Dropout rate for the decoder, by default 0.1.

conv_norm : Optional[nn.Module], optional

Normalization layer for the convolutional layers, by default None.

conv_act : Optional[nn.Module], optional

Activation function for the convolutional layers, by default None.

interpolate_mode : str, optional

Interpolation mode, by default "bilinear".

loss_fn : Optional[nn.Module], optional

Loss function; when None, defaults to nn.CrossEntropyLoss. By default None.

optimizer_type : Optional[type], optional

Type of optimizer, by default None.

optimizer_params : Optional[Dict], optional

Parameters for the optimizer, by default None.

train_metrics : Optional[Dict[str, Metric]], optional

Metrics for training, by default None.

val_metrics : Optional[Dict[str, Metric]], optional

Metrics for validation, by default None.

test_metrics : Optional[Dict[str, Metric]], optional

Metrics for testing, by default None.

aux_output : bool, optional

Whether to use auxiliary outputs, by default True.

aux_output_layers : list[int], optional

Layers for auxiliary outputs; when None, defaults to [9, 14, 19].

aux_weights : list[float], optional

Weights for auxiliary outputs; when None, defaults to [0.3, 0.3, 0.3].

load_backbone_path : Optional[str], optional

Path to load the backbone model, by default None.

freeze_backbone_on_load : bool, optional

Whether to freeze the backbone model on load, by default True.

learning_rate : float, optional

Learning rate, by default 1e-3.

loss_weights : Optional[list[float]], optional

Weights for the loss function, by default None.

original_resolution : Optional[Tuple[int, int]], optional

The original resolution of the input image in the pre-training weights. When None, positional embeddings will not be interpolated. Defaults to None.

head_lr_factor : float, optional

Learning rate factor for the head; useful if you need different learning rates for the backbone and the prediction head. By default 1.0.

test_engine : Optional[_Engine], optional

Engine used for the test and validation steps. When None, training, validation, and test steps all share the same behavior. By default None.
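
A minimal usage sketch (hedged: the keyword values below are illustrative, and the import path follows this module's name):

import torch

from minerva.models.nets.image.setr import SETR_PUP

# Illustrative configuration; all other arguments keep the defaults above.
model = SETR_PUP(
    image_size=512,
    patch_size=16,
    num_classes=6,
    learning_rate=1e-3,
)

# Dummy forward pass on a batch of RGB images (B, C, H, W).
x = torch.randn(2, 3, 512, 512)
with torch.no_grad():
    y_hat = model(x)  # expected: segmentation logits, e.g. shape (2, 6, 512, 512)

Since SETR_PUP is a LightningModule, it can then be trained with a standard lightning.pytorch.Trainer via trainer.fit(model, datamodule).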

_compute_metrics(y_hat, y, step_name)[source]
Parameters:
  • y_hat (torch.Tensor)

  • y (torch.Tensor)

  • step_name (str)

_loss_func(y_hat, y)[source]

Calculate the loss between the output and the input data.

Parameters

y_hat : torch.Tensor

The output data from the forward pass.

y : torch.Tensor

The input data/label.

Returns

torch.Tensor

The loss value.

Parameters:
  • y_hat (Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]])

  • y (torch.Tensor)

Return type:

torch.Tensor
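
When aux_output is enabled, y_hat is a 4-tuple of the main and auxiliary predictions. A minimal sketch of the weighted combination this method computes, assuming the documented defaults (nn.CrossEntropyLoss and aux_weights = [0.3, 0.3, 0.3]); not the library's exact code:

import torch.nn as nn

def weighted_loss_sketch(y_hat, y, loss_fn=None, aux_weights=(0.3, 0.3, 0.3)):
    # Sketch of _loss_func: main-prediction loss plus weighted auxiliary losses.
    loss_fn = loss_fn or nn.CrossEntropyLoss()
    if isinstance(y_hat, tuple):
        main, *aux = y_hat  # (main, aux1, aux2, aux3)
        loss = loss_fn(main, y)
        for w, a in zip(aux_weights, aux):
            loss = loss + w * loss_fn(a, y)
        return loss
    return loss_fn(y_hat, y)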

_single_step(batch, batch_idx, step_name)[source]

Perform a single step of the training/validation loop.

Parameters

batch : torch.Tensor

The input data.

batch_idx : int

The index of the batch.

step_name : str

The name of the step, either "train" or "val".

Returns

torch.Tensor

The loss value.

Parameters:
  • batch (torch.Tensor)

  • batch_idx (int)

  • step_name (str)

aux_weights = None
configure_optimizers()[source]

Choose what optimizers and learning-rate schedulers to use in your optimization. Normally you’d need one. But in the case of GANs or similar you might have multiple. Optimization with multiple optimizers only works in the manual optimization mode.

Return:

Any of these five options.

  • Single optimizer.

  • List or Tuple of optimizers.

  • Two lists - The first list has multiple optimizers, and the second has multiple LR schedulers (or multiple lr_scheduler_config).

  • Dictionary, with an "optimizer" key, and (optionally) a "lr_scheduler" key whose value is a single LR scheduler or lr_scheduler_config.

  • None - Fit will run without any optimizer.

The lr_scheduler_config is a dictionary which contains the scheduler and its associated configuration. The default configuration is shown below.

lr_scheduler_config = {
    # REQUIRED: The scheduler instance
    "scheduler": lr_scheduler,
    # The unit of the scheduler's step size, could also be 'step'.
    # 'epoch' updates the scheduler on epoch end whereas 'step'
    # updates it after an optimizer update.
    "interval": "epoch",
    # How many epochs/steps should pass between calls to
    # `scheduler.step()`. 1 corresponds to updating the learning
    # rate after every epoch/step.
    "frequency": 1,
    # Metric to monitor for schedulers like `ReduceLROnPlateau`
    "monitor": "val_loss",
    # If set to `True`, will enforce that the value specified in 'monitor'
    # is available when the scheduler is updated, thus stopping
    # training if not found. If set to `False`, it will only produce a warning.
    "strict": True,
    # If using the `LearningRateMonitor` callback to monitor the
    # learning rate progress, this keyword can be used to specify
    # a custom logged name
    "name": None,
}

When there are schedulers in which the .step() method is conditioned on a value, such as the torch.optim.lr_scheduler.ReduceLROnPlateau scheduler, Lightning requires that the lr_scheduler_config contains the keyword "monitor" set to the metric name that the scheduler should be conditioned on.

Metrics can be made available to monitor by simply logging them using self.log('metric_to_track', metric_val) in your LightningModule.

Note:

Some things to know:

  • Lightning calls .backward() and .step() automatically in case of automatic optimization.

  • If a learning rate scheduler is specified in configure_optimizers() with key "interval" (default “epoch”) in the scheduler configuration, Lightning will call the scheduler’s .step() method automatically in case of automatic optimization.

  • If you use 16-bit precision (precision=16), Lightning will automatically handle the optimizer.

  • If you use torch.optim.LBFGS, Lightning handles the closure function automatically for you.

  • If you use multiple optimizers, you will have to switch to ‘manual optimization’ mode and step them yourself.

  • If you need to control how often the optimizer steps, override the optimizer_step() hook.
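
For reference, a typical implementation of this hook returning the dictionary form described above (a generic Lightning pattern, not necessarily this class's exact implementation):

import torch

def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode="min")
    return {
        "optimizer": optimizer,
        "lr_scheduler": {
            "scheduler": scheduler,
            # ReduceLROnPlateau steps on a monitored metric.
            "monitor": "val_loss",
            "interval": "epoch",
            "frequency": 1,
        },
    }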

static create_from_dict(config)[source]
Parameters:

config (Dict)

Return type:

SETR_PUP
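
A hedged usage sketch; the accepted keys are assumed to mirror the constructor parameters documented above:

config = {
    "image_size": 512,
    "patch_size": 16,
    "num_classes": 6,
    "learning_rate": 1e-3,
}
model = SETR_PUP.create_from_dict(config)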

forward(x)[source]

Same as torch.nn.Module.forward().

Args:
  • *args: Whatever you decide to pass into the forward method.

  • **kwargs: Keyword arguments are also possible.

Return:

Your model’s output

Parameters:

x (torch.Tensor)

Return type:

torch.Tensor

head_lr_factor = 1.0
learning_rate = 0.001
load_backbone(path, freeze=False)[source]
Parameters:
  • path (str)

  • freeze (bool)
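
A hedged sketch of loading and freezing pre-trained encoder weights (the checkpoint path is a placeholder):

model = SETR_PUP(num_classes=6)
model.load_backbone("path/to/pretrained_backbone.ckpt", freeze=True)

The constructor's load_backbone_path and freeze_backbone_on_load arguments provide the same behavior at initialization time.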

loss_fn = None
metrics
model
num_classes = 1000
optimizer_type = None
predict_step(batch, batch_idx, dataloader_idx=None)[source]

Step function called during predict(). By default, it calls forward(). Override to add any processing logic.

predict_step() is used to scale inference across multiple devices.

To prevent an OOM error, it is possible to use BasePredictionWriter callback to write the predictions to disk or database after each batch or on epoch end.

The BasePredictionWriter should be used while using a spawn based accelerator. This happens for Trainer(strategy="ddp_spawn") or training on 8 TPU cores with Trainer(accelerator="tpu", devices=8) as predictions won’t be returned.

Args:
  • batch: The output of your data iterable, normally a DataLoader.

  • batch_idx: The index of this batch.

  • dataloader_idx: The index of the dataloader that produced this batch (only if multiple dataloaders are used).

Return:

Predicted output (optional).

Example

class MyModel(LightningModule):

    def predict_step(self, batch, batch_idx, dataloader_idx=0):
        return self(batch)

dm = ...
model = MyModel()
trainer = Trainer(accelerator="gpu", devices=2)
predictions = trainer.predict(model, dm)
Parameters:
  • batch (torch.Tensor)

  • batch_idx (int)

  • dataloader_idx (Optional[int])

test_engine = None
test_step(batch, batch_idx)[source]

Operates on a single batch of data from the test set. In this step you’d normally generate examples or calculate anything of interest such as accuracy.

Args:
  • batch: The output of your data iterable, normally a DataLoader.

  • batch_idx: The index of this batch.

  • dataloader_idx: The index of the dataloader that produced this batch (only if multiple dataloaders are used).

Return:
  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'.

  • None - Skip to the next batch.

# if you have one test dataloader:
def test_step(self, batch, batch_idx): ...


# if you have multiple test dataloaders:
def test_step(self, batch, batch_idx, dataloader_idx=0): ...

Examples:

# CASE 1: A single test dataset
def test_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    test_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'test_loss': loss, 'test_acc': test_acc})

If you pass in multiple test dataloaders, test_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple test dataloaders
def test_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...
Note:

If you don’t need to test you don’t need to implement this method.

Note:

When the test_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of the test epoch, the model goes back to training mode and gradients are enabled.

Parameters:
  • batch (torch.Tensor)

  • batch_idx (int)

training_step(batch, batch_idx)[source]

Here you compute and return the training loss and some additional metrics, e.g. for the progress bar or logger.

Args:
  • batch: The output of your data iterable, normally a DataLoader.

  • batch_idx: The index of this batch.

  • dataloader_idx: The index of the dataloader that produced this batch (only if multiple dataloaders are used).

Return:
  • Tensor - The loss tensor

  • dict - A dictionary which can include any keys, but must include the key 'loss' in the case of automatic optimization.

  • None - In automatic optimization, this will skip to the next batch (but is not supported for multi-GPU, TPU, or DeepSpeed). For manual optimization, this has no special meaning, as returning the loss is not required.

In this step you’d normally do the forward pass and calculate the loss for a batch. You can also do fancier things like multiple forward passes or something model specific.

Example:

def training_step(self, batch, batch_idx):
    x, y, z = batch
    out = self.encoder(x)
    loss = self.loss(out, x)
    return loss

To use multiple optimizers, you can switch to ‘manual optimization’ and control their stepping:

def __init__(self):
    super().__init__()
    self.automatic_optimization = False


# Multiple optimizers (e.g.: GANs)
def training_step(self, batch, batch_idx):
    opt1, opt2 = self.optimizers()

    # do training_step with encoder
    ...
    opt1.step()
    # do training_step with decoder
    ...
    opt2.step()
Note:

When accumulate_grad_batches > 1, the loss returned here will be automatically normalized by accumulate_grad_batches internally.

Parameters:
  • batch (torch.Tensor)

  • batch_idx (int)

validation_step(batch, batch_idx)[source]

Operates on a single batch of data from the validation set. In this step you might generate examples or calculate anything of interest like accuracy.

Args:
  • batch: The output of your data iterable, normally a DataLoader.

  • batch_idx: The index of this batch.

  • dataloader_idx: The index of the dataloader that produced this batch (only if multiple dataloaders are used).

Return:
  • Tensor - The loss tensor

  • dict - A dictionary. Can include any keys, but must include the key 'loss'.

  • None - Skip to the next batch.

# if you have one val dataloader:
def validation_step(self, batch, batch_idx): ...


# if you have multiple val dataloaders:
def validation_step(self, batch, batch_idx, dataloader_idx=0): ...

Examples:

# CASE 1: A single validation dataset
def validation_step(self, batch, batch_idx):
    x, y = batch

    # implement your own
    out = self(x)
    loss = self.loss(out, y)

    # log 6 example images
    # or generated text... or whatever
    sample_imgs = x[:6]
    grid = torchvision.utils.make_grid(sample_imgs)
    self.logger.experiment.add_image('example_images', grid, 0)

    # calculate acc
    labels_hat = torch.argmax(out, dim=1)
    val_acc = torch.sum(y == labels_hat).item() / (len(y) * 1.0)

    # log the outputs!
    self.log_dict({'val_loss': loss, 'val_acc': val_acc})

If you pass in multiple val dataloaders, validation_step() will have an additional argument. We recommend setting the default value of 0 so that you can quickly switch between single and multiple dataloaders.

# CASE 2: multiple validation dataloaders
def validation_step(self, batch, batch_idx, dataloader_idx=0):
    # dataloader_idx tells you which dataset this is.
    ...
Note:

If you don’t need to validate you don’t need to implement this method.

Note:

When the validation_step() is called, the model has been put in eval mode and PyTorch gradients have been disabled. At the end of validation, the model goes back to training mode and gradients are enabled.

Parameters:
  • batch (torch.Tensor)

  • batch_idx (int)

Parameters:
  • image_size (Union[int, Tuple[int, int]])

  • patch_size (int)

  • num_layers (int)

  • num_heads (int)

  • hidden_dim (int)

  • mlp_dim (int)

  • encoder_dropout (float)

  • num_classes (int)

  • norm_layer (Optional[torch.nn.Module])

  • decoder_channels (int)

  • num_convs (int)

  • up_scale (int)

  • kernel_size (int)

  • align_corners (bool)

  • decoder_dropout (float)

  • conv_norm (Optional[torch.nn.Module])

  • conv_act (Optional[torch.nn.Module])

  • interpolate_mode (str)

  • loss_fn (Optional[torch.nn.Module])

  • optimizer_type (Optional[type])

  • optimizer_params (Optional[Dict])

  • train_metrics (Optional[Dict[str, torchmetrics.Metric]])

  • val_metrics (Optional[Dict[str, torchmetrics.Metric]])

  • test_metrics (Optional[Dict[str, torchmetrics.Metric]])

  • aux_output (bool)

  • aux_output_layers (Optional[list[int]])

  • aux_weights (Optional[list[float]])

  • load_backbone_path (Optional[str])

  • freeze_backbone_on_load (bool)

  • learning_rate (float)

  • loss_weights (Optional[list[float]])

  • original_resolution (Optional[Tuple[int, int]])

  • head_lr_factor (float)

  • test_engine (Optional[minerva.engines.engine._Engine])

class minerva.models.nets.image.setr._SETRMLAHead(channels, conv_norm, conv_act, in_channels, out_channels, num_classes, mla_channels=128, up_scale=4, kernel_size=3, align_corners=True, dropout=0.1, threshold=None)[source]

Bases: torch.nn.Module

Multi-level feature aggregation head of SETR (as in https://arxiv.org/pdf/2012.15840.pdf)

Note: This has not been tested yet!

Initialize internal Module state, shared by both nn.Module and ScriptModule.

Parameters:
  • channels (int)

  • conv_norm (Optional[torch.nn.Module])

  • conv_act (Optional[torch.nn.Module])

  • in_channels (List[int])

  • out_channels (int)

  • num_classes (int)

  • mla_channels (int)

  • up_scale (int)

  • kernel_size (int)

  • align_corners (bool)

  • dropout (float)

  • threshold (Optional[float])

cls_seg
dropout
forward(x)[source]
num_classes
out_channels
threshold = None
up_convs
class minerva.models.nets.image.setr._SETRUPHead(channels, in_channels, num_classes, norm_layer, conv_norm, conv_act, num_convs, up_scale, kernel_size, align_corners, dropout, interpolate_mode)[source]

Bases: torch.nn.Module

Naive upsampling head and Progressive upsampling head of SETR (as in https://arxiv.org/pdf/2012.15840.pdf).

The SETR PUP Head.

Parameters

channels : int

Number of output channels.

in_channels : int

Number of input channels.

num_classes : int

Number of output classes.

norm_layer : nn.Module

Normalization layer.

conv_norm : nn.Module

Convolutional normalization layer.

conv_act : nn.Module

Convolutional activation layer.

num_convs : int

Number of convolutional layers.

up_scale : int

Upsampling scale factor.

kernel_size : int

Kernel size for convolutional layers.

align_corners : bool

Whether to align corners during upsampling.

dropout : float

Dropout rate.

interpolate_mode : str

Interpolation mode for upsampling.

Raises

AssertionError

If kernel_size is not 1 or 3.
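
A minimal sketch of the conv-then-upsample stage this head stacks num_convs times (an illustration of the progressive-upsampling idea from the paper, not the module's exact code):

import torch
import torch.nn as nn
import torch.nn.functional as F

class PUPStageSketch(nn.Module):
    # One progressive-upsampling stage: conv -> norm -> act -> up_scale x interpolate.
    def __init__(self, in_channels, out_channels, kernel_size=3, up_scale=2):
        super().__init__()
        assert kernel_size in (1, 3), "kernel_size must be 1 or 3"
        self.conv = nn.Conv2d(
            in_channels, out_channels, kernel_size, padding=kernel_size // 2
        )
        self.norm = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)
        self.up_scale = up_scale

    def forward(self, x):
        x = self.act(self.norm(self.conv(x)))
        return F.interpolate(
            x, scale_factor=self.up_scale, mode="bilinear", align_corners=False
        )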

cls_seg
dropout
forward(x)[source]
norm
num_classes
out_channels
up_convs
Parameters:
  • channels (int)

  • in_channels (int)

  • num_classes (int)

  • norm_layer (torch.nn.Module)

  • conv_norm (torch.nn.Module)

  • conv_act (torch.nn.Module)

  • num_convs (int)

  • up_scale (int)

  • kernel_size (int)

  • align_corners (bool)

  • dropout (float)

  • interpolate_mode (str)

class minerva.models.nets.image.setr._SetR_PUP(image_size, patch_size, num_layers, num_heads, hidden_dim, mlp_dim, num_convs, num_classes, decoder_channels, up_scale, encoder_dropout, kernel_size, decoder_dropout, norm_layer, interpolate_mode, conv_norm, conv_act, align_corners, aux_output, aux_output_layers, original_resolution)[source]

Bases: torch.nn.Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will also have their parameters converted when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Boolean represents whether this module is in training or evaluation mode.

Parameters:
  • image_size (Union[int, Tuple[int, int]])

  • patch_size (int)

  • num_layers (int)

  • num_heads (int)

  • hidden_dim (int)

  • mlp_dim (int)

  • num_convs (int)

  • num_classes (int)

  • decoder_channels (int)

  • up_scale (int)

  • encoder_dropout (float)

  • kernel_size (int)

  • decoder_dropout (float)

  • norm_layer (torch.nn.Module)

  • interpolate_mode (str)

  • conv_norm (torch.nn.Module)

  • conv_act (torch.nn.Module)

  • align_corners (bool)

  • aux_output (bool)

  • aux_output_layers (Optional[List[int]])

  • original_resolution (Optional[Tuple[int, int]])

Initializes the SETR model with the PUP head.

Parameters

image_size : int or Tuple[int, int]

The size of the input image.

patch_size : int

The size of each patch in the input image.

num_layers : int

The number of layers in the transformer encoder.

num_heads : int

The number of attention heads in the transformer encoder.

hidden_dim : int

The hidden dimension of the transformer encoder.

mlp_dim : int

The dimension of the feed-forward network in the transformer encoder.

num_convs : int

The number of convolutional layers in the decoder.

num_classes : int

The number of output classes.

decoder_channels : int

The number of channels in the decoder.

up_scale : int

The scale factor for upsampling in the decoder.

encoder_dropout : float

The dropout rate for the transformer encoder.

kernel_size : int

The kernel size for the convolutional layers in the decoder.

decoder_dropout : float

The dropout rate for the decoder.

norm_layer : nn.Module

The normalization layer to be used.

interpolate_mode : str

The mode for interpolation during upsampling.

conv_norm : nn.Module

The normalization layer to be used in the decoder convolutional layers.

conv_act : nn.Module

The activation function to be used in the decoder convolutional layers.

align_corners : bool

Whether to align corners during upsampling.

aux_output : bool

Whether to use auxiliary outputs. If True, aux_output_layers must be provided.

aux_output_layers : List[int], optional

The layers to use for auxiliary outputs. Must have exactly 3 values.

original_resolution : Tuple[int, int], optional

The original resolution of the input image in the pre-training weights. When None, positional embeddings will not be interpolated.
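
A hedged sketch of the auxiliary-output flow implied by these parameters (names are illustrative): when aux_output is True, features from the three encoder layers in aux_output_layers are decoded by separate heads and returned alongside the main prediction.

def forward_sketch(encoder, decoder, aux_heads, x, aux_output=True):
    # Hypothetical flow: the encoder also exposes intermediate features
    # at the three layers listed in aux_output_layers.
    feats, aux_feats = encoder(x)
    out = decoder(feats)
    if aux_output:
        return (out, *(head(f) for head, f in zip(aux_heads, aux_feats)))
    return out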

aux_head1
aux_head2
aux_head3
aux_output
aux_output_layers
decoder
encoder
forward(x)[source]
Parameters:

x (torch.Tensor)

load_backbone(path, freeze=False)[source]
Parameters:
  • path (str)

  • freeze (bool)