dasf.datasets
Init module for Dataset objects.
Classes
- Dataset: Class representing a generic dataset based on a TargeteredTransform object.
- DatasetArray: Class representing a dataset which is defined as an array of a defined shape.
- DatasetZarr: Class representing a dataset which is defined as a Zarr array of a defined shape.
- DatasetHDF5: Class representing a dataset which is defined as an HDF5 dataset of a defined shape.
- DatasetXarray: Class representing a dataset which is defined as an Xarray dataset of a defined shape.
- DatasetLabeled: A class representing a labeled dataset. Each item is a 2-element tuple, where the first element is an array of data and the second element is the respective label.
- DatasetDataFrame: Class representing a dataset which is defined as a dataframe.
- DatasetParquet: Class representing a dataset which is defined as a Parquet file.
- make_blobs: Generate isotropic Gaussian blobs for clustering.
- make_classification: Generate a random n-class classification problem.
Package Contents
- class dasf.datasets.Dataset(name, download=False, root=None, *args, **kwargs)[source]
Bases:
dasf.transforms.base.TargeteredTransform
Class representing a generic dataset based on a TargeteredTransform
object.
Parameters
- name : str
  Symbolic name of the dataset.
- download : bool
  If the dataset must be downloaded (the default is False).
- root : str
  Root download directory (the default is None).
- *args : type
  Additional arguments without keys.
- **kwargs : type
  Additional keyword arguments.
Constructor of the object Dataset.
- _name
- _download
- _root
- _metadata
- _data = None
- _chunks = None
- __set_dataset_cache_dir()
Generate a cache directory in $HOME to store dataset(s).
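A minimal construction sketch (the dataset name is illustrative; the attributes checked are the documented defaults):

from dasf.datasets import Dataset

# Construct a generic dataset handle; nothing is fetched because
# download=False (the documented default).
dataset = Dataset(name="my-dataset", download=False, root=None)

# The internal containers start unset until a subclass loads data.
assert dataset._data is None
assert dataset._chunks is None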
- class dasf.datasets.DatasetArray(name, download=False, root=None, chunks='auto')[source]
Bases:
Dataset
Class representing a dataset which is defined as an array of a defined
shape.
Parameters
- name : str
  Symbolic name of the dataset.
- download : bool
  If the dataset must be downloaded (the default is False).
- root : str
  Root download directory (the default is None).
- chunks : Any
  Number of blocks of the array (the default is "auto").
Constructor of the object DatasetArray.
- _chunks
- _root_file
- __operator_check__(other)[source]
Check which type of data we are handling.
- Examples:
  DatasetArray with array-like; or DatasetArray with DatasetArray.
Parameters
- other : Any
  Array-like or DatasetArray for the operation.
Returns
- data : Any
  Data representing the internal array or the class itself.
- __array__(dtype=None)[source]
Array interface is required to support most of the array functions.
Parameters
- dtype : Any
  Type of the internal array, default=None (not used).
Returns
- data : Any
  Data representing the internal array or the class itself.
- __array_ufunc__(ufunc, method, *inputs, **kwargs)[source]
Any class, array subclass or not, can define this method or set it to None in order to override the behavior of NumPy's ufuncs.
Parameters
- ufunc : Callable
  The ufunc object that was called.
- method : str
  A string indicating which ufunc method was called (one of "__call__", "reduce", "reduceat", "accumulate", "outer", "inner").
- inputs : Any
  A tuple of the input arguments to the ufunc.
- kwargs : Any
  A dictionary containing the optional input arguments of the ufunc. If given, any out arguments, both positional and keyword, are passed as a tuple in kwargs. See the discussion in Universal functions (ufunc) for details.
Returns
- array : array-like
  The result of the operation.
- __check_op_input(in_data)
Return the proper type of data for the operation.
>>> Result = DatasetArray + Numpy
>>> Result = DatasetArray + DatasetArray
Parameters
- in_data : Any
  Input data to be analyzed.
Returns
- data : Any
  Data representing the internal array or the class itself.
- __add__(other)[source]
Internal function for adding two array datasets.
Parameters
- other : Any
  Data representing an array or a DatasetArray.
Returns
- DatasetArray
  The sum of the two arrays.
- __sub__(other)[source]
Internal function for subtracting two array datasets.
Parameters
- other : Any
  Data representing an array or a DatasetArray.
Returns
- DatasetArray
  The difference of the two arrays.
- __mul__(other)[source]
Internal function for multiplying two array datasets.
Parameters
- other : Any
  Data representing an array or a DatasetArray.
Returns
- DatasetArray
  The product of the two arrays.
- __div__(other)[source]
Internal function for dividing two array datasets.
Parameters
- other : Any
  Data representing an array or a DatasetArray.
Returns
- DatasetArray
  The quotient of the two arrays.
- __copy_attrs_from_data()
Extend metadata to the new transformed object (after operations).
- __npy_header()
Read an array header from a file-like object.
- _lazy_load(xp, **kwargs)[source]
Lazy load the dataset using a CPU Dask container.
Parameters
- xp : type
  Library used to load the file. It must follow the NumPy API.
- **kwargs : type
  Additional keyword arguments to the load.
Returns
- Any
The data (or a Future load object, for _lazy operations).
- _load(xp, **kwargs)[source]
Load data using a CPU container.
Parameters
- xp : Module
  A module that loads data (implements a load function).
- **kwargs : type
  Additional kwargs to the xp.load function.
- _load_meta()[source]
Load metadata to inspect.
Returns
- dict
A dictionary with metadata information.
- Return type:
dict
- from_array(array)[source]
Load data from an existing array.
Parameters
- array : array-like
  Input data to be initialized.
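A short sketch of wrapping an in-memory array and using the documented operators (the dataset name is illustrative, and we assume from_array() initializes the internal array in place):

import numpy as np
from dasf.datasets import DatasetArray

dataset = DatasetArray(name="example", chunks="auto")
dataset.from_array(np.arange(12).reshape(3, 4))

# Arithmetic follows the documented __add__/__mul__ overloads:
# DatasetArray with DatasetArray, or DatasetArray with array-like.
doubled = dataset + dataset
scaled = dataset * np.full((3, 4), 2.0)

# __array__ lets NumPy functions consume the dataset directly.
total = np.sum(np.asarray(dataset))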
- class dasf.datasets.DatasetZarr(name, download=False, root=None, backend=None, chunks=None)[source]
Bases:
Dataset
Class representing a dataset which is defined as a Zarr array of a defined shape.
Parameters
- name : str
  Symbolic name of the dataset.
- download : bool
  If the dataset must be downloaded (the default is False).
- root : str
  Root download directory (the default is None).
- backend : str
  Backend used to load the Zarr array (the default is None).
- chunks : Any
  Number of blocks of the array (the default is None).
Constructor of the object DatasetZarr.
- _backend
- _chunks
- _root_file
- _lazy_load(xp, **kwargs)[source]
Lazy load the dataset using a CPU Dask container.
Parameters
- xp : type
  Library used to load the file. It must follow the NumPy API.
- **kwargs : type
  Additional keyword arguments to the load.
Returns
- Any
The data (or a Future load object, for _lazy operations).
- _load(xp, **kwargs)[source]
Load data using a CPU container.
Parameters
- xp : Module
  A module that loads data (implements a load function).
- **kwargs : type
  Additional kwargs to the xp.load function.
- _load_meta()[source]
Load metadata to inspect.
Returns
- dict
A dictionary with metadata information.
- Return type:
dict
- __read_zarray(key)
Return the value of the .zarray JSON metadata.
- property shape: tuple
Return the shape of the array.
Returns
- tuple
A tuple with the shape.
- Return type:
tuple
- metadata()[source]
Return a dictionary with all metadata information from data.
Returns
- dict
A dictionary with metadata information.
- Return type:
dict
- __check_op_input(in_data)
Return the proper type of data for the operation.
>>> Result = DatasetZarr + Numpy
>>> Result = DatasetZarr + DatasetZarr
Parameters
- in_data : Any
  Input data to be analyzed.
Returns
- data : Any
  Data representing the internal array or the class itself.
- __add__(other)[source]
Internal function for adding two array datasets.
Parameters
- other : Any
  Data representing an array or a DatasetArray.
Returns
- DatasetArray
  The sum of the two arrays.
- __sub__(other)[source]
Internal function for subtracting two array datasets.
Parameters
- other : Any
  Data representing an array or a DatasetArray.
Returns
- DatasetArray
  The difference of the two arrays.
- __mul__(other)[source]
Internal function for multiplying two array datasets.
Parameters
- other : Any
  Data representing an array or a DatasetArray.
Returns
- DatasetArray
  The product of the two arrays.
- __div__(other)[source]
Internal function for dividing two array datasets.
Parameters
- other : Any
  Data representing an array or a DatasetArray.
Returns
- DatasetArray
  The quotient of the two arrays.
- __copy_attrs_from_data()
Extend metadata to the new transformed object (after operations).
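A hedged sketch of opening an existing Zarr array (the store path and chunking below are illustrative, and we assume the array was written beforehand):

from dasf.datasets import DatasetZarr

dataset = DatasetZarr(name="seismic",
                      root="/data/seismic.zarr",
                      chunks=(128, 128, 128))

# shape is a documented property; metadata() returns a dictionary
# with information about the data.
print(dataset.shape)
print(dataset.metadata())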
- class dasf.datasets.DatasetHDF5(name, download=False, root=None, chunks='auto', dataset_path=None)[source]
Bases:
Dataset
Class representing a dataset which is defined as an HDF5 dataset of a defined shape.
Parameters
- name : str
  Symbolic name of the dataset.
- download : bool
  If the dataset must be downloaded (the default is False).
- root : str
  Root download directory (the default is None).
- chunks : Any
  Number of blocks of the array (the default is "auto").
- dataset_path : str
  Relative path of the internal HDF5 dataset (the default is None).
Constructor of the object DatasetHDF5.
- _chunks
- _root_file
- _dataset_path
- _lazy_load(xp, **kwargs)[source]
Lazy load the dataset using a CPU Dask container.
Parameters
- xp : type
  Library used to load the file. It must follow the NumPy API.
- **kwargs : type
  Additional keyword arguments to the load.
Returns
- Any
The data (or a Future load object, for _lazy operations).
- _load(xp=None, **kwargs)[source]
Load data using a CPU container.
Parameters
- xp : Module
  A module that loads data (implements a load function) (placeholder).
- **kwargs : type
  Additional kwargs to the xp.load function.
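A short sketch showing why dataset_path matters: an HDF5 file can hold several named datasets, and dataset_path selects which one to read (the file and internal path are illustrative):

from dasf.datasets import DatasetHDF5

dataset = DatasetHDF5(name="volume",
                      root="/data/volume.h5",
                      chunks="auto",
                      dataset_path="/group/volume")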
- class dasf.datasets.DatasetXarray(name, download=False, root=None, chunks=None, data_var=None)[source]
Bases:
Dataset
Class representing a dataset which is defined as an Xarray dataset of a defined shape.
Parameters
- name : str
  Symbolic name of the dataset.
- download : bool
  If the dataset must be downloaded (the default is False).
- root : str
  Root download directory (the default is None).
- chunks : Any
  Number of blocks of the array (the default is None).
- data_var : Any
  Key (or index) of the internal Xarray dataset (the default is None).
Constructor of the object DatasetXarray.
- _chunks
- _root_file
- _data_var
- _load_meta()[source]
Load metadata to inspect.
Returns
- dict
A dictionary with metadata information.
- Return type:
dict
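A minimal sketch of selecting one variable of a NetCDF/Xarray dataset via data_var (the file name, chunk mapping, and variable name are illustrative):

from dasf.datasets import DatasetXarray

dataset = DatasetXarray(name="cube",
                        root="/data/cube.nc",
                        chunks={"x": 512, "y": 512},
                        data_var="amplitude")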
- class dasf.datasets.DatasetLabeled(name, download=False, root=None, chunks='auto')[source]
Bases:
Dataset
A class representing a labeled dataset. Each item is a 2-element tuple, where the first element is an array of data and the second element is the respective label. The items can be accessed as dataset[x].
Parameters
- name : str
  Symbolic name of the dataset.
- download : bool
  If the dataset must be downloaded (the default is False).
- root : str
  Root download directory (the default is None).
- chunks : Any
  Number of blocks of the array (the default is "auto").
Attributes
- __chunks : type
  Description of attribute __chunks.
Constructor of the object DatasetLabeled.
- _chunks
- metadata()[source]
Return a dictionary with all metadata information from data (train and labels).
Returns
- dict
A dictionary with metadata information.
- Return type:
dict
- _lazy_load(xp, **kwargs)[source]
Lazy load the dataset using a CPU Dask container.
Parameters
- xp : type
  Library used to load the file. It must follow the NumPy API.
- **kwargs : type
  Additional keyword arguments to the load.
Returns
- Tuple
A Future object that will return a tuple: (data, label).
- Return type:
tuple
- _load(xp, **kwargs)[source]
Load data using a CPU container.
Parameters
- xp : Module
  A module that loads data (implements a load function).
- **kwargs : type
  Additional kwargs to the xp.load function.
Returns
- Tuple
A 2-element tuple: (data, label)
- Return type:
tuple
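A hedged sketch of item access (we assume the train and label arrays were already loaded by a concrete subclass or loader):

from dasf.datasets import DatasetLabeled

labeled = DatasetLabeled(name="facies", chunks="auto")
# ... data and labels loaded here by a concrete subclass ...

# Each item is the documented 2-element tuple: (data, label).
sample, label = labeled[0]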
- class dasf.datasets.DatasetDataFrame(name, download=True, root=None, chunks='auto')[source]
Bases:
Dataset
Class representing a dataset which is defined as a dataframe.
Parameters
- name : str
  Symbolic name of the dataset.
- download : bool
  If the dataset must be downloaded (the default is True).
- root : str
  Root download directory (the default is None).
- chunks : Any
  Number of blocks of the array (the default is "auto").
Constructor of the object DatasetDataFrame.
- _chunks
- _root_file
- _load_meta()[source]
Load metadata to inspect.
Returns
- dict
A dictionary with metadata information.
- Return type:
dict
- metadata()[source]
Return a dictionary with all metadata information from data.
Returns
- dict
A dictionary with metadata information.
- Return type:
dict
- class dasf.datasets.DatasetParquet(name, download=True, root=None, chunks='auto')[source]
Bases:
DatasetDataFrame
Class representing a dataset which is defined as a Parquet file.
Parameters
- name : str
  Symbolic name of the dataset.
- download : bool
  If the dataset must be downloaded (the default is True).
- root : str
  Root download directory (the default is None).
- chunks : Any
  Number of blocks of the array (the default is "auto").
Constructor of the object DatasetParquet.
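A short sketch of the dataframe family (the Parquet path is illustrative; metadata() is inherited from DatasetDataFrame):

from dasf.datasets import DatasetParquet

dataset = DatasetParquet(name="table",
                         root="/data/table.parquet",
                         download=False,
                         chunks="auto")
print(dataset.metadata())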
- class dasf.datasets.make_blobs[source]
Generate isotropic Gaussian blobs for clustering.
For an example of usage, see sphx_glr_auto_examples_datasets_plot_random_dataset.py.
Read more in the User Guide.
Parameters
- n_samples : int or array-like, default=100
  If int, it is the total number of points equally divided among clusters. If array-like, each element of the sequence indicates the number of samples per cluster.
  Changed in version 0.20: one can now pass an array-like to the n_samples parameter.
- n_features : int, default=2
  The number of features for each sample.
- centers : int or array-like of shape (n_centers, n_features), default=None
  The number of centers to generate, or the fixed center locations. If n_samples is an int and centers is None, 3 centers are generated. If n_samples is array-like, centers must be either None or an array of length equal to the length of n_samples.
- cluster_std : float or array-like of float, default=1.0
  The standard deviation of the clusters.
- center_box : tuple of float (min, max), default=(-10.0, 10.0)
  The bounding box for each cluster center when centers are generated at random.
- shuffle : bool, default=True
  Shuffle the samples.
- random_state : int, RandomState instance or None, default=None
  Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary.
- return_centers : bool, default=False
  If True, then return the centers of each cluster.
  Added in version 0.23.
Returns
- X : ndarray of shape (n_samples, n_features)
  The generated samples.
- y : ndarray of shape (n_samples,)
  The integer labels for cluster membership of each sample.
- centers : ndarray of shape (n_centers, n_features)
  The centers of each cluster. Only returned if return_centers=True.
See Also
make_classification : A more intricate variant.
Examples
>>> from sklearn.datasets import make_blobs
>>> X, y = make_blobs(n_samples=10, centers=3, n_features=2,
...                   random_state=0)
>>> print(X.shape)
(10, 2)
>>> y
array([0, 0, 1, 0, 2, 2, 2, 1, 1, 0])
>>> X, y = make_blobs(n_samples=[3, 3, 4], centers=None, n_features=2,
...                   random_state=0)
>>> print(X.shape)
(10, 2)
>>> y
array([0, 1, 2, 0, 2, 2, 2, 1, 1, 0])
- class dasf.datasets.make_classification[source]
Generate a random n-class classification problem.
This initially creates clusters of points normally distributed (std=1) about vertices of an n_informative-dimensional hypercube with sides of length 2*class_sep and assigns an equal number of clusters to each class. It introduces interdependence between these features and adds various types of further noise to the data.
Without shuffling, X horizontally stacks features in the following order: the primary n_informative features, followed by n_redundant linear combinations of the informative features, followed by n_repeated duplicates, drawn randomly with replacement from the informative and redundant features. The remaining features are filled with random noise. Thus, without shuffling, all useful features are contained in the columns X[:, :n_informative + n_redundant + n_repeated].
For an example of usage, see sphx_glr_auto_examples_datasets_plot_random_dataset.py.
Read more in the User Guide.
Parameters
- n_samples : int, default=100
  The number of samples.
- n_features : int, default=20
  The total number of features. These comprise n_informative informative features, n_redundant redundant features, n_repeated duplicated features and n_features - n_informative - n_redundant - n_repeated useless features drawn at random.
- n_informative : int, default=2
  The number of informative features. Each class is composed of a number of gaussian clusters each located around the vertices of a hypercube in a subspace of dimension n_informative. For each cluster, informative features are drawn independently from N(0, 1) and then randomly linearly combined within each cluster in order to add covariance. The clusters are then placed on the vertices of the hypercube.
- n_redundant : int, default=2
  The number of redundant features. These features are generated as random linear combinations of the informative features.
- n_repeated : int, default=0
  The number of duplicated features, drawn randomly from the informative and the redundant features.
- n_classes : int, default=2
  The number of classes (or labels) of the classification problem.
- n_clusters_per_class : int, default=2
  The number of clusters per class.
- weights : array-like of shape (n_classes,) or (n_classes - 1,), default=None
  The proportions of samples assigned to each class. If None, then classes are balanced. Note that if len(weights) == n_classes - 1, then the last class weight is automatically inferred. More than n_samples samples may be returned if the sum of weights exceeds 1. Note that the actual class proportions will not exactly match weights when flip_y isn't 0.
- flip_y : float, default=0.01
  The fraction of samples whose class is assigned randomly. Larger values introduce noise in the labels and make the classification task harder. Note that the default setting flip_y > 0 might lead to less than n_classes in y in some cases.
- class_sep : float, default=1.0
  The factor multiplying the hypercube size. Larger values spread out the clusters/classes and make the classification task easier.
- hypercube : bool, default=True
  If True, the clusters are put on the vertices of a hypercube. If False, the clusters are put on the vertices of a random polytope.
- shift : float, ndarray of shape (n_features,) or None, default=0.0
  Shift features by the specified value. If None, then features are shifted by a random value drawn in [-class_sep, class_sep].
- scale : float, ndarray of shape (n_features,) or None, default=1.0
  Multiply features by the specified value. If None, then features are scaled by a random value drawn in [1, 100]. Note that scaling happens after shifting.
- shuffle : bool, default=True
  Shuffle the samples and the features.
- random_state : int, RandomState instance or None, default=None
  Determines random number generation for dataset creation. Pass an int for reproducible output across multiple function calls. See Glossary.
Returns
- X : ndarray of shape (n_samples, n_features)
  The generated samples.
- y : ndarray of shape (n_samples,)
  The integer labels for class membership of each sample.
See Also
make_blobs : Simplified variant.
make_multilabel_classification : Unrelated generator for multilabel tasks.
Notes
The algorithm is adapted from Guyon [1] and was designed to generate the “Madelon” dataset.
References
[1] I. Guyon, "Design of experiments for the NIPS 2003 variable selection benchmark", 2003.
Examples
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(random_state=42)
>>> X.shape
(100, 20)
>>> y.shape
(100,)
>>> list(y[:5])
[0, 0, 1, 1, 0]