WrightTools.data.Variable

class WrightTools.data.Variable(parent, id, **kwargs)

Variable: an array of independent-variable values spanning a Data object, stored as an HDF5 dataset.

__init__(parent, id, units=None, **kwargs)

Initialize a Variable.

Parameters:
  • parent (WrightTools.Data) – Parent data object.

  • id (h5py DatasetID) – Dataset ID.

  • units (string (optional)) – Variable units. Default is None.

  • kwargs – Additional keys and values to be written into dataset attrs.
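
Variables are not usually instantiated directly; a minimal sketch of the typical creation path, via Data.create_variable (names and values here are illustrative):

>>> import numpy as np
>>> import WrightTools as wt
>>> data = wt.Data(name="example")
>>> w1 = data.create_variable("w1", values=np.linspace(1300, 1700, 51), units="nm")
>>> w1.units
'nm'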

Methods

__init__(parent, id[, units])

Initialize a Variable.

argmax()

Index of the maximum, ignoring nans.

argmin()

Index of the minimum, ignoring nans.
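
For example, using the w1 variable created above (treating the return value as directly usable for indexing is an assumption):

>>> idx = w1.argmax()   # location of the largest non-nan value
>>> value = w1[idx]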

asstr([encoding, errors])

Get a wrapper to read string data as Python strings.

astype(dtype)

Get a wrapper allowing you to perform reads to a different destination type.
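
For instance, reading the data converted to another NumPy dtype (standard h5py behavior):

>>> out = w1.astype("float64")[:]
>>> out.dtype
dtype('float64')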

chunkwise(func, *args, **kwargs)

Execute a function for each chunk in the dataset.
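
A sketch of a per-chunk reduction; the callback signature (dataset, selection) and the mapping returned are assumptions about this API:

>>> def chunk_max(dataset, sel):
...     return np.nanmax(dataset[sel])
>>> results = w1.chunkwise(chunk_max)            # assumed: {chunk slices: result}
>>> overall = np.nanmax(list(results.values()))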

clip([min, max, replace])

Clip values outside of a defined range.
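
For example, restricting w1 to a range (that out-of-range values are replaced with nan by default is an assumption):

>>> w1.clip(min=1350, max=1650)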

convert(destination_units)

Convert units.
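
For example, converting the wavelength variable from above to energy units:

>>> w1.convert("eV")
>>> w1.units
'eV'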

fields(names, *[, _prior_dtype])

Get a wrapper to read a subset of fields from a compound data type.

flush()

Flush the dataset data and metadata to the file.

iter_chunks([sel])

Return chunk iterator.
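
Standard h5py iteration over a chunked dataset, one selection per chunk:

>>> for sel in w1.iter_chunks():
...     block = w1[sel]   # sel is a tuple of slices covering one chunk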

len()

The size of the first axis.

log([base, floor])

Take the log of the entire dataset.

log10([floor])

Take the log base 10 of the entire dataset.

log2([floor])

Take the log base 2 of the entire dataset.
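
For example (that floor, when given, clips results from below is an assumption):

>>> w1.log10(floor=-6)   # in place: values become log10(value)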

make_scale([name])

Make this dataset an HDF5 dimension scale.

max()

Maximum, ignoring nans.

min()

Minimum, ignoring nans.

read_direct(dest[, source_sel, dest_sel])

Read data directly from HDF5 into an existing NumPy array.
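
Standard h5py usage, reading into a preallocated array:

>>> dest = np.empty((51,), dtype=w1.dtype)
>>> w1.read_direct(dest, source_sel=np.s_[0:51], dest_sel=np.s_[0:51])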

refresh()

Refresh the dataset metadata by reloading from the file.

resize(size[, axis])

Resize the dataset, or the specified axis.
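
Standard h5py behavior; this only succeeds if the dataset was created with a maxshape that permits growth:

>>> w1.resize((101,))        # grow the first axis
>>> w1.resize(101, axis=0)   # equivalent, using the axis keyword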

slices()

Return a generator yielding tuples of slice objects.
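
A sketch consistent with the description above; how the dataset is partitioned into slices is not specified here:

>>> for sel in w1.slices():
...     block = w1[sel]   # sel is a tuple of slice objects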

symmetric_root([root])

Take the sign-preserving root of the entire dataset.

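A sketch, assuming the sign-preserving definition sign(x) * abs(x) ** (1 / root):

>>> w1.symmetric_root(root=2)   # in place; the definition above is an assumption
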
virtual_sources()

Get a list of the data mappings for a virtual dataset.

write_direct(source[, source_sel, dest_sel])

Write data directly to HDF5 from a NumPy array.
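
Standard h5py usage, writing from an existing NumPy array:

>>> source = np.zeros((51,), dtype=w1.dtype)
>>> w1.write_direct(source, source_sel=np.s_[0:51], dest_sel=np.s_[0:51])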

Attributes

attrs

Attributes attached to this object.

chunks

Dataset chunks (or None).

class_name

Class name.

compression

Compression strategy (or None).

compression_opts

Compression setting.

dims

Access dimension scales attached to this dataset.

dtype

Numpy dtype representing the datatype.

external

External file settings.

file

Return a File instance associated with this object.

fillvalue

Fill value for this dataset (0 by default).

fletcher32

Fletcher32 filter is present (T/F).

full

Array expanded to the full shape of the parent data object.

fullpath

Full path: file and internal structure.

id

Low-level identifier appropriate for this object.

is_scale

Return True if this dataset is also a dimension scale.

is_virtual

Check if this is a virtual dataset.

label

maxshape

Shape up to which this dataset can be resized.

name

Return the full name of this object.

natural_name

Natural name of the dataset.

nbytes

Numpy-style attribute giving the raw dataset size as the number of bytes.

ndim

Numpy-style attribute giving the number of dimensions.

parent

Parent.

points

Squeezed array.
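
For example, a variable stored with singleton dimensions against a multidimensional Data object (shapes here are illustrative):

>>> w1.shape          # stored shape, e.g. (51, 1)
>>> w1.points.shape   # singleton dimensions squeezed away, e.g. (51,)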

ref

An (opaque) HDF5 reference to this object.

regionref

Create a region reference (Datasets only).

scaleoffset

Scale/offset filter settings.

shape

Numpy-style shape tuple giving dataset dimensions.

shuffle

Shuffle filter present (T/F).

size

Numpy-style attribute giving the total dataset size.

units

Units.