dasf.datasets

Init module for Dataset objects.

Classes

Dataset

Class representing a generic dataset based on a TargeteredTransform object.

DatasetArray

Class representing a dataset which is defined as an array of a defined shape.

DatasetZarr

Class representing a dataset which is defined as a Zarr array of a defined shape.

DatasetHDF5

Class representing a dataset which is defined as an HDF5 dataset of a defined shape.

DatasetXarray

Class representing a dataset which is defined as an Xarray dataset of a defined shape.

DatasetLabeled

A class representing a labeled dataset, where each item is a 2-element tuple of data and its respective label.

DatasetDataFrame

Class representing a dataset which is defined as a dataframe.

DatasetParquet

Class representing a dataset which is defined as a Parquet file.

make_blobs

Generate isotropic Gaussian blobs for clustering.

make_classification

Generate a random n-class classification problem.

Package Contents

class dasf.datasets.Dataset(name, download=False, root=None, *args, **kwargs)[source]

Bases: dasf.transforms.base.TargeteredTransform

Class representing a generic dataset based on a TargeteredTransform object.

Parameters

name : str

Symbolic name of the dataset.

download : bool

If the dataset must be downloaded (the default is False).

root : str

Root download directory (the default is None).

*args : type

Additional arguments without keys.

**kwargs : type

Additional keyword arguments.

Constructor of the object Dataset.

__set_dataset_cache_dir()

Generate a cache directory in $HOME to store dataset(s).

download()[source]

Skeleton of the download method.

__len__()[source]

Return internal data length.

Return type:

int

__getitem__(idx)[source]

Generic __getitem__() function based on internal data.

Parameters

idx : Any

Key of the fetched data. It can be an integer or a tuple.

Parameters:
  • name (str)

  • download (bool)

  • root (str)
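
As a rough illustration of the interface described above, a minimal Dataset-like class might implement download(), __len__(), and __getitem__() as follows. This is a simplified stand-in, not DASF's actual implementation, and the sample data is hypothetical:

```python
class MiniDataset:
    """Simplified stand-in for the Dataset interface sketched above."""

    def __init__(self, name, download=False, root=None):
        self._name = name
        self._root = root
        self._data = []
        if download:
            self.download()

    def download(self):
        # Skeleton of the download method: here it just fakes some data.
        self._data = [1, 2, 3]

    def __len__(self):
        # Return internal data length.
        return len(self._data)

    def __getitem__(self, idx):
        # Generic __getitem__ based on internal data.
        return self._data[idx]

ds = MiniDataset("toy", download=True)
len(ds)  # -> 3
ds[0]    # -> 1
```

Subclasses in the real package override download() and the loading hooks while inheriting this indexing protocol.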

class dasf.datasets.DatasetArray(name, download=False, root=None, chunks='auto')[source]

Bases: Dataset

Class representing a dataset which is defined as an array of a defined shape.

Parameters

name : str

Symbolic name of the dataset.

download : bool

If the dataset must be downloaded (the default is False).

root : str

Root download directory (the default is None).

chunks : Any

Number of blocks of the array (the default is “auto”).

Constructor of the object DatasetArray.

__operator_check__(other)[source]

Check which type of data is being handled.

Examples:

DatasetArray with array-like; or DatasetArray with DatasetArray

Parameters

other : Any

Array-like or DatasetArray for the operation.

Returns

data : Any

Data representing the internal array or the class itself.

__repr__()[source]

Return a class representation based on internal array.

__array__(dtype=None)[source]

Array interface is required to support most of the array functions.

Parameters

dtype : Any

Type of the internal array, default=None (not used).

Returns

data : Any

Data representing the internal array or the class itself.
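
The __array__ hook is what lets NumPy functions consume the dataset directly. A minimal stand-in of the protocol (assuming numpy is available; the extra copy argument is accepted only for NumPy 2 compatibility and is not part of the signature documented above):

```python
import numpy as np

class ArrayView:
    """Simplified stand-in for DatasetArray's __array__ protocol."""

    def __init__(self, data):
        self._data = np.asarray(data)

    def __array__(self, dtype=None, copy=None):
        # NumPy calls this hook to obtain the underlying ndarray.
        return self._data if dtype is None else self._data.astype(dtype)

view = ArrayView([1, 2, 3])
np.sum(view)  # NumPy converts the wrapper via __array__ before summing
```

Any function that calls np.asarray() internally works the same way, which is why "most of the array functions" are supported.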

__array_ufunc__(ufunc, method, *inputs, **kwargs)[source]
__check_op_input(in_data)

Return the proper type of data for the operation.

>>> Result = DatasetArray + Numpy; or
>>> Result = DatasetArray + DatasetArray

Parameters

in_data : Any

Input data to be analyzed.

Returns

data : Any

Data representing the internal array or the class itself.

__add__(other)[source]

Internal function for adding two array datasets.

Parameters

other : Any

Data representing an array or a DatasetArray.

Returns

DatasetArray

The sum of the two arrays.

__sub__(other)[source]

Internal function for subtracting two array datasets.

Parameters

other : Any

Data representing an array or a DatasetArray.

Returns

DatasetArray

The subtraction of the two arrays.

__mul__(other)[source]

Internal function for multiplying two array datasets.

Parameters

other : Any

Data representing an array or a DatasetArray.

Returns

DatasetArray

The multiplication of the two arrays.

__div__(other)[source]

Internal function for dividing two array datasets.

Parameters

other : Any

Data representing an array or a DatasetArray.

Returns

DatasetArray

The division of the two arrays.
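
The arithmetic operators above all follow the same delegation pattern: unwrap the other operand if it is also a wrapped dataset, apply the operation to the internal data, and return a new wrapped result. A simplified stand-in of that pattern (plain Python lists instead of real arrays; not DASF's actual code):

```python
class WrappedArray:
    """Simplified stand-in for DatasetArray's operator delegation."""

    def __init__(self, data):
        self._data = list(data)

    def _check_op_input(self, other):
        # Unwrap another WrappedArray; pass plain sequences through.
        return other._data if isinstance(other, WrappedArray) else other

    def __add__(self, other):
        rhs = self._check_op_input(other)
        return WrappedArray(a + b for a, b in zip(self._data, rhs))

    def __sub__(self, other):
        rhs = self._check_op_input(other)
        return WrappedArray(a - b for a, b in zip(self._data, rhs))

a = WrappedArray([1, 2, 3])
b = WrappedArray([10, 20, 30])
(a + b)._data          # -> [11, 22, 33]
(a + [1, 1, 1])._data  # -> [2, 3, 4]
```

Because the result is wrapped again, operations chain naturally and metadata can be carried over to the new object.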

__copy_attrs_from_data()

Extend metadata to the new transformed object (after operations).

__npy_header()

Read an array header from a file-like object.

_lazy_load(xp, **kwargs)[source]

Lazy load the dataset using a CPU Dask container.

Parameters

xp : type

Library used to load the file. It must follow the numpy API.

**kwargs : type

Additional keyword arguments for the load.

Returns

Any

The data (or a Future load object, for _lazy operations).

_load(xp, **kwargs)[source]

Load data using a CPU container.

Parameters

xp : Module

A module that loads data (implements a load function).

**kwargs : type

Additional kwargs for the xp.load function.

_load_meta()[source]

Load metadata to inspect.

Returns

dict

A dictionary with metadata information.

Return type:

dict

_lazy_load_gpu()[source]

Load data with a GPU container + Dask. (It does not load immediately.)

_lazy_load_cpu()[source]

Load data with a CPU container + Dask. (It does not load immediately.)

_load_gpu()[source]

Load data with GPU container (e.g. cupy).

_load_cpu()[source]

Load data with CPU container (e.g. numpy).

from_array(array)[source]

Load data from an existing array.

Parameters

array : array-like

Input data to be initialized.

load()[source]

Placeholder for load function.

property shape: tuple

Returns the shape of an array.

Returns

tuple

A tuple with the shape.

Return type:

tuple

inspect_metadata()[source]

Return a dictionary with all metadata information from data.

Returns

dict

A dictionary with metadata information.

Return type:

dict

Parameters:
  • name (str)

  • download (bool)

  • root (str)

class dasf.datasets.DatasetZarr(name, download=False, root=None, backend=None, chunks=None)[source]

Bases: Dataset

Class representing a dataset which is defined as a Zarr array of a defined shape.

Parameters

name : str

Symbolic name of the dataset.

download : bool

If the dataset must be downloaded (the default is False).

root : str

Root download directory (the default is None).

chunks : Any

Number of blocks of the array (the default is “auto”).

Constructor of the object DatasetZarr.

_lazy_load(xp, **kwargs)[source]

Lazy load the dataset using a CPU Dask container.

Parameters

xp : type

Library used to load the file. It must follow the numpy API.

**kwargs : type

Additional keyword arguments for the load.

Returns

Any

The data (or a Future load object, for _lazy operations).

_load(xp, **kwargs)[source]

Load data using a CPU container.

Parameters

xp : Module

A module that loads data (implements a load function).

**kwargs : type

Additional kwargs for the xp.load function.

_lazy_load_cpu()[source]

Load data with a CPU container + Dask. (It does not load immediately.)

_lazy_load_gpu()[source]

Load data with a GPU container + Dask. (It does not load immediately.)

_load_cpu()[source]

Load data with CPU container (e.g. numpy).

_load_gpu()[source]

Load data with GPU container (e.g. cupy).

load()[source]

Placeholder for load function.

_load_meta()[source]

Load metadata to inspect.

Returns

dict

A dictionary with metadata information.

Return type:

dict

__read_zarray(key)

Return the value of a given key from the .zarray JSON metadata.
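
Zarr v2 stores a .zarray JSON file alongside the chunk files; reading one key from it can be sketched with the standard library alone. The metadata content below is hypothetical and the helper is a simplified stand-in for the private __read_zarray method:

```python
import json

# Hypothetical .zarray metadata, as Zarr v2 stores it next to the chunks.
zarray_text = '{"shape": [100, 100], "chunks": [10, 10], "dtype": "<f8"}'

def read_zarray(text, key):
    """Simplified stand-in for DatasetZarr.__read_zarray: fetch one key
    from the .zarray JSON metadata (None if the key is absent)."""
    return json.loads(text).get(key)

read_zarray(zarray_text, "shape")   # -> [100, 100]
read_zarray(zarray_text, "chunks")  # -> [10, 10]
```

This is how the shape and chunksize properties below can be answered without loading any array data.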

property shape: tuple

Returns the shape of an array.

Returns

tuple

A tuple with the shape.

Return type:

tuple

property chunksize

Returns the chunksize of an array.

Returns

tuple

A tuple with the chunksize.

inspect_metadata()[source]

Return a dictionary with all metadata information from data.

Returns

dict

A dictionary with metadata information.

Return type:

dict

__repr__()[source]

Return a class representation based on internal array.

__check_op_input(in_data)

Return the proper type of data for the operation.

>>> Result = DatasetZarr + Numpy; or
>>> Result = DatasetZarr + DatasetZarr

Parameters

in_data : Any

Input data to be analyzed.

Returns

data : Any

Data representing the internal array or the class itself.

__add__(other)[source]

Internal function for adding two array datasets.

Parameters

other : Any

Data representing an array or a DatasetArray.

Returns

DatasetArray

The sum of the two arrays.

__sub__(other)[source]

Internal function for subtracting two array datasets.

Parameters

other : Any

Data representing an array or a DatasetArray.

Returns

DatasetArray

The subtraction of the two arrays.

__mul__(other)[source]

Internal function for multiplying two array datasets.

Parameters

other : Any

Data representing an array or a DatasetArray.

Returns

DatasetArray

The multiplication of the two arrays.

__div__(other)[source]

Internal function for dividing two array datasets.

Parameters

other : Any

Data representing an array or a DatasetArray.

Returns

DatasetArray

The division of the two arrays.

__copy_attrs_from_data()

Extend metadata to the new transformed object (after operations).

Parameters:
  • name (str)

  • download (bool)

  • root (str)

  • backend (str)

class dasf.datasets.DatasetHDF5(name, download=False, root=None, chunks='auto', dataset_path=None)[source]

Bases: Dataset

Class representing a dataset which is defined as an HDF5 dataset of a defined shape.

Parameters

name : str

Symbolic name of the dataset.

download : bool

If the dataset must be downloaded (the default is False).

root : str

Root download directory (the default is None).

chunks : Any

Number of blocks of the array (the default is “auto”).

dataset_path : str

Relative path of the internal HDF5 dataset (the default is None).

Constructor of the object DatasetHDF5.

_lazy_load(xp, **kwargs)[source]

Lazy load the dataset using a CPU Dask container.

Parameters

xp : type

Library used to load the file. It must follow the numpy API.

**kwargs : type

Additional keyword arguments for the load.

Returns

Any

The data (or a Future load object, for _lazy operations).

_load(xp=None, **kwargs)[source]

Load data using a CPU container.

Parameters

xp : Module

A module that loads data (implements a load function) (placeholder).

**kwargs : type

Additional kwargs for the xp.load function.

_lazy_load_cpu()[source]

Load data with a CPU container + Dask. (It does not load immediately.)

_lazy_load_gpu()[source]

Load data with a GPU container + Dask. (It does not load immediately.)

_load_cpu()[source]

Load data with CPU container (e.g. numpy).

_load_gpu()[source]

Load data with GPU container (e.g. cupy).

load()[source]

Placeholder for load function.

_load_meta()[source]

Load metadata to inspect.

Returns

dict

A dictionary with metadata information.

Return type:

dict

inspect_metadata()[source]

Return a dictionary with all metadata information from data.

Returns

dict

A dictionary with metadata information.

Return type:

dict

Parameters:
  • name (str)

  • download (bool)

  • root (str)

  • dataset_path (str)

class dasf.datasets.DatasetXarray(name, download=False, root=None, chunks=None, data_var=None)[source]

Bases: Dataset

Class representing a dataset which is defined as an Xarray dataset of a defined shape.

Parameters

name : str

Symbolic name of the dataset.

download : bool

If the dataset must be downloaded (the default is False).

root : str

Root download directory (the default is None).

chunks : Any

Number of blocks of the array (the default is “auto”).

data_var : Any

Key (or index) of the internal Xarray dataset (the default is None).

Constructor of the object DatasetXarray.

_lazy_load_cpu()[source]

Load data with a CPU container + Dask. (It does not load immediately.)

_lazy_load_gpu()[source]

Load data with a GPU container + Dask. (It does not load immediately.)

_load_cpu()[source]

Load data with CPU container (e.g. numpy).

_load_gpu()[source]

Load data with GPU container (e.g. cupy).

load()[source]

Placeholder for load function.

_load_meta()[source]

Load metadata to inspect.

Returns

dict

A dictionary with metadata information.

Return type:

dict

inspect_metadata()[source]

Return a dictionary with all metadata information from data.

Returns

dict

A dictionary with metadata information.

Return type:

dict

__len__()[source]

Return internal data length.

Return type:

int

__getitem__(idx)[source]

A __getitem__() function based on internal Xarray data.

Parameters

idx : Any

Key of the fetched data. It can be an integer or a tuple.

Parameters:
  • name (str)

  • download (bool)

  • root (str)

class dasf.datasets.DatasetLabeled(name, download=False, root=None, chunks='auto')[source]

Bases: Dataset

A class representing a labeled dataset. Each item is a 2-element tuple, where the first element is an array of data and the second element is the respective label. The items can be accessed from dataset[x].

Parameters

name : str

Symbolic name of the dataset.

download : bool

If the dataset must be downloaded (the default is False).

root : str

Root download directory (the default is None).

chunks : Any

Number of blocks of the array (the default is “auto”).

Attributes

__chunks : type

Description of attribute __chunks.

Constructor of the object DatasetLabeled.

download()[source]

Download the dataset.

inspect_metadata()[source]

Return a dictionary with all metadata information from data (train and labels).

Returns

dict

A dictionary with metadata information.

Return type:

dict

_lazy_load(xp, **kwargs)[source]

Lazy load the dataset using a CPU Dask container.

Parameters

xp : type

Library used to load the file. It must follow the numpy API.

**kwargs : type

Additional keyword arguments for the load.

Returns

Tuple

A Future object that will return a tuple: (data, label).

Return type:

tuple

_load(xp, **kwargs)[source]

Load data using a CPU container.

Parameters

xp : Module

A module that loads data (implements a load function).

**kwargs : type

Additional kwargs for the xp.load function.

Returns

Tuple

A 2-element tuple: (data, label)

Return type:

tuple

_load_meta()[source]

Load metadata to inspect.

Returns

dict

A dictionary with metadata information.

Return type:

dict

_lazy_load_gpu()[source]

Load data with a GPU container + Dask. (It does not load immediately.)

_lazy_load_cpu()[source]

Load data with a CPU container + Dask. (It does not load immediately.)

_load_gpu()[source]

Load data with GPU container (e.g. cupy).

_load_cpu()[source]

Load data with CPU container (e.g. numpy).

load()[source]

Placeholder for load function.

__getitem__(idx)[source]

A __getitem__() function for data and labeled data together.

Parameters

idx : Any

Key of the fetched data. It can be an integer or a tuple.

Parameters:
  • name (str)

  • download (bool)

  • root (str)
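
The (data, label) pairing that DatasetLabeled's __getitem__ provides can be sketched in a few lines. This is a simplified stand-in with hypothetical in-memory data, not DASF's implementation:

```python
class LabeledPairs:
    """Simplified stand-in for DatasetLabeled's (data, label) indexing."""

    def __init__(self, data, labels):
        assert len(data) == len(labels), "data and labels must align"
        self._data = data
        self._labels = labels

    def __getitem__(self, idx):
        # Fetch data and its respective label together, as a 2-element tuple.
        return (self._data[idx], self._labels[idx])

    def __len__(self):
        return len(self._data)

pairs = LabeledPairs([10, 20, 30], [0, 1, 1])
pairs[1]  # -> (20, 1)
```

Iterating over such a dataset therefore yields one (sample, label) tuple per index, which is the shape most training loops expect.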

class dasf.datasets.DatasetDataFrame(name, download=True, root=None, chunks='auto')[source]

Bases: Dataset

Class representing a dataset which is defined as a dataframe.

Parameters

name : str

Symbolic name of the dataset.

download : bool

If the dataset must be downloaded (the default is True).

root : str

Root download directory (the default is None).

chunks : Any

Number of blocks of the array (the default is “auto”).

Constructor of the object DatasetDataFrame.

_load_meta()[source]

Load metadata to inspect.

Returns

dict

A dictionary with metadata information.

Return type:

dict

inspect_metadata()[source]

Return a dictionary with all metadata information from data.

Returns

dict

A dictionary with metadata information.

Return type:

dict

_lazy_load_gpu()[source]

Load data with a GPU container + Dask. (It does not load immediately.)

_lazy_load_cpu()[source]

Load data with a CPU container + Dask. (It does not load immediately.)

_load_gpu()[source]

Load data with GPU container (e.g. CuDF).

_load_cpu()[source]

Load data with CPU container (e.g. pandas).

load()[source]

Placeholder for load function.

property shape: tuple

Returns the shape of an array.

Returns

tuple

A tuple with the shape.

Return type:

tuple

__len__()[source]

Return internal data length.

Return type:

int

__getitem__(idx)[source]

A __getitem__() function based on internal dataframe.

Parameters

idx : Any

Key of the fetched data. It can be an integer or a tuple.

Parameters:
  • name (str)

  • download (bool)

  • root (str)

class dasf.datasets.DatasetParquet(name, download=True, root=None, chunks='auto')[source]

Bases: DatasetDataFrame

Class representing a dataset which is defined as a Parquet file.

Parameters

name : str

Symbolic name of the dataset.

download : bool

If the dataset must be downloaded (the default is True).

root : str

Root download directory (the default is None).

chunks : Any

Number of blocks of the array (the default is “auto”).

Constructor of the object DatasetParquet.

_lazy_load_gpu()[source]

Load data with a GPU container + Dask. (It does not load immediately.)

_lazy_load_cpu()[source]

Load data with a CPU container + Dask. (It does not load immediately.)

_load_gpu()[source]

Load data with GPU container (e.g. CuDF).

_load_cpu()[source]

Load data with CPU container (e.g. pandas).

Parameters:
  • name (str)

  • download (bool)

  • root (str)

class dasf.datasets.make_blobs[source]

Generate isotropic Gaussian blobs for clustering.

For an example of usage, see the plot_random_dataset.py example in the gallery.

Read more in the User Guide.

Parameters

n_samples : int or array-like, default=100

If int, it is the total number of points equally divided among clusters. If array-like, each element of the sequence indicates the number of samples per cluster.

Changed in version v0.20: one can now pass an array-like to the n_samples parameter.

n_features : int, default=2

The number of features for each sample.

centers : int or array-like of shape (n_centers, n_features), default=None

The number of centers to generate, or the fixed center locations. If n_samples is an int and centers is None, 3 centers are generated. If n_samples is array-like, centers must be either None or an array of length equal to the length of n_samples.

cluster_std : float or array-like of float, default=1.0

The standard deviation of the clusters.

center_box : tuple of float (min, max), default=(-10.0, 10.0)

The bounding box for each cluster center when centers are generated at random.

shuffle : bool, default=True

Shuffle the samples.

random_state : int, RandomState instance or None, default=None

Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary.

return_centers : bool, default=False

If True, then return the centers of each cluster.

Added in version 0.23.

Returns

X : ndarray of shape (n_samples, n_features)

The generated samples.

y : ndarray of shape (n_samples,)

The integer labels for cluster membership of each sample.

centers : ndarray of shape (n_centers, n_features)

The centers of each cluster. Only returned if return_centers=True.

See Also

make_classification : A more intricate variant.

Examples

>>> from sklearn.datasets import make_blobs
>>> X, y = make_blobs(n_samples=10, centers=3, n_features=2,
...                   random_state=0)
>>> print(X.shape)
(10, 2)
>>> y
array([0, 0, 1, 0, 2, 2, 2, 1, 1, 0])
>>> X, y = make_blobs(n_samples=[3, 3, 4], centers=None, n_features=2,
...                   random_state=0)
>>> print(X.shape)
(10, 2)
>>> y
array([0, 1, 2, 0, 2, 2, 2, 1, 1, 0])
_lazy_make_blobs_cpu(**kwargs)[source]
_lazy_make_blobs_gpu(**kwargs)[source]
_make_blobs_cpu(**kwargs)[source]
_make_blobs_gpu(**kwargs)[source]
__call__(**kwargs)[source]
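
The private _make_blobs_* variants listed above are selected at call time depending on whether the run targets a CPU, a GPU, or their Dask-backed counterparts, and __call__ forwards the keyword arguments to the chosen one. A simplified sketch of that dispatch pattern (a stand-in; DASF's actual selection logic is more involved, and the backend calls in the comments are only what such variants would typically wrap):

```python
class BlobsDispatcher:
    """Simplified stand-in for make_blobs' CPU/GPU dispatch via __call__."""

    def __init__(self, use_gpu=False):
        self._use_gpu = use_gpu

    def _make_blobs_cpu(self, **kwargs):
        # A CPU variant would typically wrap sklearn.datasets.make_blobs.
        return ("cpu", kwargs)

    def _make_blobs_gpu(self, **kwargs):
        # A GPU variant would typically wrap cuml.datasets.make_blobs.
        return ("gpu", kwargs)

    def __call__(self, **kwargs):
        # Pick the implementation for the current target, then forward kwargs.
        impl = self._make_blobs_gpu if self._use_gpu else self._make_blobs_cpu
        return impl(**kwargs)

backend, params = BlobsDispatcher()(n_samples=10, centers=3)
# backend -> "cpu"; params echoes the forwarded keyword arguments
```

The same pattern applies to make_classification below, with its _make_classification_* variants.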
class dasf.datasets.make_classification[source]

Generate a random n-class classification problem.

This initially creates clusters of points normally distributed (std=1) about vertices of an n_informative-dimensional hypercube with sides of length 2*class_sep and assigns an equal number of clusters to each class. It introduces interdependence between these features and adds various types of further noise to the data.

Without shuffling, X horizontally stacks features in the following order: the primary n_informative features, followed by n_redundant linear combinations of the informative features, followed by n_repeated duplicates, drawn randomly with replacement from the informative and redundant features. The remaining features are filled with random noise. Thus, without shuffling, all useful features are contained in the columns X[:, :n_informative + n_redundant + n_repeated].

For an example of usage, see the plot_random_dataset.py example in the gallery.

Read more in the User Guide.

Parameters

n_samples : int, default=100

The number of samples.

n_features : int, default=20

The total number of features. These comprise n_informative informative features, n_redundant redundant features, n_repeated duplicated features and n_features-n_informative-n_redundant-n_repeated useless features drawn at random.

n_informative : int, default=2

The number of informative features. Each class is composed of a number of gaussian clusters each located around the vertices of a hypercube in a subspace of dimension n_informative. For each cluster, informative features are drawn independently from N(0, 1) and then randomly linearly combined within each cluster in order to add covariance. The clusters are then placed on the vertices of the hypercube.

n_redundant : int, default=2

The number of redundant features. These features are generated as random linear combinations of the informative features.

n_repeated : int, default=0

The number of duplicated features, drawn randomly from the informative and the redundant features.

n_classes : int, default=2

The number of classes (or labels) of the classification problem.

n_clusters_per_class : int, default=2

The number of clusters per class.

weights : array-like of shape (n_classes,) or (n_classes - 1,), default=None

The proportions of samples assigned to each class. If None, then classes are balanced. Note that if len(weights) == n_classes - 1, then the last class weight is automatically inferred. More than n_samples samples may be returned if the sum of weights exceeds 1. Note that the actual class proportions will not exactly match weights when flip_y isn’t 0.

flip_y : float, default=0.01

The fraction of samples whose class is assigned randomly. Larger values introduce noise in the labels and make the classification task harder. Note that the default setting flip_y > 0 might lead to less than n_classes in y in some cases.

class_sep : float, default=1.0

The factor multiplying the hypercube size. Larger values spread out the clusters/classes and make the classification task easier.

hypercube : bool, default=True

If True, the clusters are put on the vertices of a hypercube. If False, the clusters are put on the vertices of a random polytope.

shift : float, ndarray of shape (n_features,) or None, default=0.0

Shift features by the specified value. If None, then features are shifted by a random value drawn in [-class_sep, class_sep].

scale : float, ndarray of shape (n_features,) or None, default=1.0

Multiply features by the specified value. If None, then features are scaled by a random value drawn in [1, 100]. Note that scaling happens after shifting.

shuffle : bool, default=True

Shuffle the samples and the features.

random_state : int, RandomState instance or None, default=None

Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary.

Returns

X : ndarray of shape (n_samples, n_features)

The generated samples.

y : ndarray of shape (n_samples,)

The integer labels for class membership of each sample.

See Also

make_blobs : Simplified variant.

make_multilabel_classification : Unrelated generator for multilabel tasks.

Notes

The algorithm is adapted from Guyon [1] and was designed to generate the “Madelon” dataset.

References

[1] I. Guyon, “Design of experiments for the NIST 2003 variable selection benchmark”, 2003.

Examples

>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(random_state=42)
>>> X.shape
(100, 20)
>>> y.shape
(100,)
>>> list(y[:5])
[0, 0, 1, 1, 0]
_lazy_make_classification_cpu(**kwargs)[source]
_lazy_make_classification_gpu(**kwargs)[source]
_make_classification_cpu(**kwargs)[source]
_make_classification_gpu(**kwargs)[source]
__call__(**kwargs)[source]