WrightTools.data.Channel

class WrightTools.data.Channel(parent, id, *, units=None, null=None, signed=None, label=None, label_seed=None, **kwargs)[source]

Channel.

__init__(parent, id, *, units=None, null=None, signed=None, label=None, label_seed=None, **kwargs)[source]

Construct a channel object.

Parameters
  • values (array-like) – Values.

  • name (string) – Channel name.

  • units (string (optional)) – Channel units. Default is None.

  • null (number (optional)) – Channel null. Default is None (0).

  • signed (boolean (optional)) – Channel signed flag. Default is None (guess).

  • label (string (optional)) – Label. Default is None.

  • label_seed (list of strings) – Label seed. Default is None.

  • **kwargs – Additional keyword arguments are added to the attrs dictionary and to the natural namespace of the object (if possible).

Methods

__init__(parent, id, *[, units, null, …])

Construct a channel object.

argmax()

Index of the maximum, ignoring nans.

argmin()

Index of the minimum, ignoring nans.

astype(dtype)

Get a context manager allowing you to perform reads to a different destination type.

chunkwise(func, *args, **kwargs)

Execute a function for each chunk in the dataset.

clip([min, max, replace])

Clip values outside of a defined range.
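The clipping semantics can be sketched with NumPy (an assumed behavior sketch, not the library's implementation; here out-of-range values are replaced with NaN, one plausible meaning of the `replace` argument):

```python
import numpy as np

# Hypothetical sketch of clip semantics: values outside [min, max]
# are replaced, here with NaN.
def clip(values, min=None, max=None):
    out = values.astype(float).copy()
    if min is not None:
        out[out < min] = np.nan
    if max is not None:
        out[out > max] = np.nan
    return out

arr = np.array([-2.0, 0.5, 1.5, 3.0])
print(clip(arr, min=0, max=2))  # [nan 0.5 1.5 nan]
```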

convert(destination_units)

Convert units.

flush()

Flush the dataset data and metadata to the file.

len()

The size of the first axis.

log([base, floor])

Take the log of the entire dataset.

log10([floor])

Take the log base 10 of the entire dataset.

log2([floor])

Take the log base 2 of the entire dataset.
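The log family shares one pattern, which can be sketched with NumPy (assumed semantics, not the library's code: take the log in the given base, then raise any result below `floor` up to `floor`):

```python
import numpy as np

# Sketch of the log-family semantics (assumed): log in an arbitrary
# base, with an optional floor applied to the result.
def log(values, base=np.e, floor=None):
    with np.errstate(divide="ignore", invalid="ignore"):
        out = np.log(values) / np.log(base)
    if floor is not None:
        out[out < floor] = floor
    return out

arr = np.array([0.001, 1.0, 100.0])
out = log(arr, base=10, floor=-2)  # log10 with a floor of -2
```

Under this reading, `log10(floor)` and `log2(floor)` are just this function with the base fixed.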

mag()

Channel magnitude (maximum deviation from null).
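"Maximum deviation from null" can be sketched with NumPy (assumed semantics: the largest absolute, nan-ignoring deviation of the data from the channel's null value):

```python
import numpy as np

# Sketch (assumed): magnitude as the largest absolute deviation
# from the null value, ignoring nans.
def mag(values, null=0.0):
    return np.nanmax(np.abs(values - null))

arr = np.array([-3.0, 1.0, 2.0])
print(mag(arr))  # 3.0
```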

make_scale([name])

Make this dataset an HDF5 dimension scale.

max()

Maximum, ignoring nans.

min()

Minimum, ignoring nans.

normalize([mag])

Normalize the Channel: set null to 0 and the mag to the given value.
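Normalization can be sketched with NumPy (assumed semantics, not the library's implementation: shift so null becomes 0, then scale so the maximum absolute deviation equals `mag`):

```python
import numpy as np

# Sketch of normalize (assumed): shift null to 0, then scale the
# largest absolute deviation to `mag` (default 1).
def normalize(values, null=0.0, mag=1.0):
    out = values - null
    out *= mag / np.nanmax(np.abs(out))
    return out

arr = np.array([2.0, 6.0, 10.0])
print(normalize(arr, null=2.0))  # [0.  0.5 1. ]
```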

read_direct(dest[, source_sel, dest_sel])

Read data directly from HDF5 into an existing NumPy array.

refresh()

Refresh the dataset metadata by reloading from the file.

resize(size[, axis])

Resize the dataset, or the specified axis.

slices()

Return a generator yielding tuples of slice objects.

symmetric_root([root])

Take the symmetric root of the entire dataset.
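One plausible reading of a symmetric root, sketched with NumPy (an assumption, not the library's code): take the root of the magnitude and keep the sign, so negative values stay negative.

```python
import numpy as np

# Sketch of a signed (symmetric) root: root of the magnitude,
# sign preserved (assumed semantics).
def symmetric_root(values, root=2):
    return np.sign(values) * np.abs(values) ** (1 / root)

arr = np.array([-4.0, 0.0, 9.0])
print(symmetric_root(arr))  # [-2.  0.  3.]
```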

trim(neighborhood[, method, factor, …])

Remove outliers from the dataset.
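A neighborhood-based outlier trim can be sketched with NumPy (a hedged sketch under assumed semantics; the library's actual method, defaults, and replacement strategy may differ): compare each point with its neighbors, and replace points that deviate from the neighborhood mean by more than `factor` standard deviations.

```python
import numpy as np

# Hedged sketch of an outlier trim (assumed semantics): a point is an
# outlier if it deviates from the mean of its neighborhood by more
# than `factor` neighborhood standard deviations; outliers are
# replaced with that mean.
def trim(values, neighborhood=1, factor=3.0):
    out = values.astype(float).copy()
    n = len(values)
    for i in range(n):
        lo, hi = max(0, i - neighborhood), min(n, i + neighborhood + 1)
        window = np.delete(values[lo:hi], i - lo)  # exclude the point itself
        mean, std = window.mean(), window.std()
        if std > 0 and abs(values[i] - mean) > factor * std:
            out[i] = mean
    return out

arr = np.array([1.0, 1.1, 9.0, 0.9, 1.0])
trimmed = trim(arr, neighborhood=2, factor=2.0)  # the 9.0 spike is replaced
```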

virtual_sources()

Get a list of the data mappings for a virtual dataset.

write_direct(source[, source_sel, dest_sel])

Write data directly to HDF5 from a NumPy array.

Attributes

attrs

Attributes attached to this object

chunks

Dataset chunks (or None)

class_name

compression

Compression strategy (or None)

compression_opts

Compression setting.

dims

Access dimension scales attached to this dataset.

dtype

Numpy dtype representing the datatype

external

External file settings.

file

Return a File instance associated with this object

fillvalue

Fill value for this dataset (0 by default)

fletcher32

Fletcher32 filter is present (T/F)

full

fullpath

File and internal structure.

id

Low-level identifier appropriate for this object

is_virtual

Check if this is a virtual dataset.

major_extent

Maximum deviation from null.

maxshape

Shape up to which this dataset can be resized.

minor_extent

Minimum deviation from null.

name

Return the full name of this object.

natural_name

Natural name of the dataset.

ndim

Numpy-style attribute giving the number of dimensions

null

Channel null.

parent

Parent.

points

Squeezed array.

ref

An (opaque) HDF5 reference to this object

regionref

Create a region reference (Datasets only).

scaleoffset

Scale/offset filter settings.

shape

Numpy-style shape tuple giving dataset dimensions

shuffle

Shuffle filter present (T/F)

signed

Channel signed flag.

size

Numpy-style attribute giving the total dataset size

units

Units.

value

Alias for dataset[()]