minerva.models.nets.conv_autoencoders_encoders

Classes

ConvTAEDecoder

A decoder for a simple convolutional autoencoder.

ConvTAEEncoder

An encoder for a simple convolutional autoencoder.

Module Contents

class minerva.models.nets.conv_autoencoders_encoders.ConvTAEDecoder(target_channels=6, target_time_steps=60, encoding_size=256, fc_num_layers=3, conv_num_layers=3, conv_mid_channels=12, conv_kernel=5, conv_padding=0)[source]

Bases: torch.nn.Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will also have their parameters converted when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Whether this module is in training or evaluation mode.

Parameters:
  • target_channels (int)

  • target_time_steps (int)

  • encoding_size (int)

  • fc_num_layers (int)

  • conv_num_layers (int)

  • conv_mid_channels (int)

  • conv_kernel (int)

  • conv_padding (int)

A decoder for a simple convolutional autoencoder.

Parameters

target_channelsint, optional

Number of channels in the output that the model should produce, by default 6

target_time_stepsint, optional

Number of time steps in the output that the model should produce, by default 60

encoding_sizeint, optional

Size of the data representation received by the model, by default 256

fc_num_layersint, optional

Number of fully connected layers, by default 3

conv_num_layersint, optional

Number of convolutional layers, by default 3

conv_mid_channelsint, optional

Number of channels used as both in_channels and out_channels in the convolutional layers (except the last), by default 12

conv_kernelint, optional

Size of the convolutional kernel, by default 5

conv_paddingint, optional

Padding used in the convolutional layers, by default 0

conv_kernel = 5
conv_mid_channels = 12
conv_num_layers = 3
conv_padding = 0
encoding_size = 256
fc_num_layers = 3
forward(x)[source]
model
target_channels = 6
target_time_steps = 60
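Assuming the decoder takes a flat encoding of size encoding_size and produces a (target_channels, target_time_steps) signal (the exact input/output shapes are not stated above and are an assumption here), a minimal usage sketch might look like:

```python
import torch

from minerva.models.nets.conv_autoencoders_encoders import ConvTAEDecoder

# Decoder with the documented defaults: a 256-dim encoding in,
# a (6 channels x 60 time steps) signal out.
decoder = ConvTAEDecoder(
    target_channels=6,
    target_time_steps=60,
    encoding_size=256,
)

# Assumed input shape: (batch, encoding_size).
z = torch.randn(8, 256)
reconstruction = decoder(z)

# Expected output shape (assumption): (batch, target_channels, target_time_steps),
# i.e. (8, 6, 60) for this configuration.
print(reconstruction.shape)
```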
class minerva.models.nets.conv_autoencoders_encoders.ConvTAEEncoder(in_channels=6, time_steps=60, encoding_size=256, fc_num_layers=3, conv_num_layers=3, conv_mid_channels=12, conv_kernel=5, conv_padding=0)[source]

Bases: torch.nn.Module


Variables:

training (bool) – Whether this module is in training or evaluation mode.

Parameters:
  • in_channels (int)

  • time_steps (int)

  • encoding_size (int)

  • fc_num_layers (int)

  • conv_num_layers (int)

  • conv_mid_channels (int)

  • conv_kernel (int)

  • conv_padding (int)

An encoder for a simple convolutional autoencoder.

Parameters

in_channelsint, optional

Number of channels of the input that the model receives, by default 6

time_stepsint, optional

Number of time steps of the input that the model receives, by default 60

encoding_sizeint, optional

Size of the data representation generated by the model, by default 256

fc_num_layersint, optional

Number of fully connected layers, by default 3

conv_num_layersint, optional

Number of convolutional layers, by default 3

conv_mid_channelsint, optional

Number of channels used as both in_channels and out_channels in the convolutional layers (except the first), by default 12

conv_kernelint, optional

Size of the convolutional kernel, by default 5

conv_paddingint, optional

Padding used in the convolutional layers, by default 0

conv_kernel = 5
conv_mid_channels = 12
conv_num_layers = 3
conv_padding = 0
encoding_size = 256
fc_num_layers = 3
forward(x)[source]
in_channels = 6
model
time_steps = 60
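Pairing the encoder with the decoder gives the usual autoencoder round trip. The constructor arguments below come from the signatures documented above; the input shape (batch, in_channels, time_steps) and the intermediate encoding shape are assumptions, not something the reference states:

```python
import torch
import torch.nn.functional as F

from minerva.models.nets.conv_autoencoders_encoders import (
    ConvTAEDecoder,
    ConvTAEEncoder,
)

# Encoder and decoder with matching documented defaults: a
# (6 channels x 60 time steps) signal compressed to a 256-dim encoding.
encoder = ConvTAEEncoder(in_channels=6, time_steps=60, encoding_size=256)
decoder = ConvTAEDecoder(target_channels=6, target_time_steps=60, encoding_size=256)

# Assumed input shape: (batch, in_channels, time_steps).
x = torch.randn(8, 6, 60)
z = encoder(x)        # encoding; assumed shape (batch, encoding_size)
x_hat = decoder(z)    # reconstruction; assumed to match the input shape

# A typical reconstruction objective for training the pair end to end.
loss = F.mse_loss(x_hat, x)
```

Matching encoding_size between the two modules is what makes the pair composable; the other defaults (channels, time steps, layer counts) only need to agree if the reconstruction should mirror the input exactly.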