minerva.models.adapters
Classes
- MaxPoolingTransposingSqueezingAdapter — This class takes a 3D tensor and performs max pooling along the time dimension.
Module Contents
- class minerva.models.adapters.MaxPoolingTransposingSqueezingAdapter(kernel_size=128)[source]
This class takes a 3D tensor and performs max pooling along the time dimension. The tensor is first transposed, then max pooling is applied, and finally the tensor is transposed back and squeezed to remove the singleton dimension. This operation reduces the dimensionality of the tensor while retaining the most significant features. It originates from the REBAR repository (https://github.com/maxxu05/rebar) and is described in the accompanying paper (https://arxiv.org/pdf/2311.00519): “At the end of the encoder, we utilize a global max pooling layer to pool over time.”
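The transpose → max-pool → transpose → squeeze pipeline described above can be sketched with plain PyTorch operations (a minimal illustration of the documented behavior, not the actual minerva source):

```python
import torch
import torch.nn as nn

# Hypothetical re-implementation of the described pipeline.
x = torch.randn(10, 128, 64)          # (batch_size, time_steps, features)
pool = nn.MaxPool1d(kernel_size=128)  # pooling window spans the time dimension

y = x.transpose(1, 2)                 # -> (batch, features, time)
y = pool(y)                           # -> (batch, features, 1)
y = y.transpose(1, 2)                 # -> (batch, 1, features)
y = y.squeeze(1)                      # -> (batch, features)
print(y.shape)                        # torch.Size([10, 64])
```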
Parameters
- kernel_size : int, optional (default=128)
The size of the window over which the max pooling operation is applied.
Examples
>>> import torch
>>> from minerva.models.adapters import MaxPoolingTransposingSqueezingAdapter
>>> tensor = torch.randn(10, 128, 64)  # Example input tensor with shape (batch_size, time_steps, features)
>>> adapter = MaxPoolingTransposingSqueezingAdapter(kernel_size=128)
>>> result = adapter(tensor)
>>> print(result.shape)
torch.Size([10, 64])
Notes
This class is designed to be used as an adapter in deep learning models where dimensionality reduction is required. It is particularly useful in scenarios involving time-series data or sequential data processing.
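Since the default kernel_size matches the full time dimension in the example above, the adapter acts as a global max over time. A quick sanity check (a sketch assuming the behavior documented here) is that the transpose/pool/squeeze route agrees with a direct `torch.amax` over the time dimension:

```python
import torch
import torch.nn as nn

x = torch.randn(10, 128, 64)  # (batch, time, features)

# Transpose -> pool -> transpose -> squeeze, as described above
pooled = nn.MaxPool1d(kernel_size=128)(x.transpose(1, 2)).transpose(1, 2).squeeze(1)

# Direct global max over the time dimension
direct = x.amax(dim=1)

print(torch.equal(pooled, direct))  # True
```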
- kernel_size = 128
- max_pooling_adapter(tensor)[source]
Applies transposing, max pooling, and squeezing to the input tensor.
Parameters
- tensor : torch.Tensor
The input tensor to be processed. The expected shape of the tensor is (batch_size, time_steps, features).
Returns
- torch.Tensor
The processed tensor after applying max pooling. The shape of the tensor will be (batch_size, features).