WrightTools.data.Channel

class WrightTools.data.Channel(parent, id, **kwargs)[source]

Channel.

__init__(parent, id, *, units=None, null=None, signed=None, label=None, label_seed=None, **kwargs)[source]

Construct a channel object.

Parameters:
  • values (array-like) – Values.

  • name (string) – Channel name.

  • units (string (optional)) – Channel units. Default is None.

  • null (number (optional)) – Channel null. Default is None (0).

  • signed (boolean (optional)) – Channel signed flag. Default is None (guess).

  • label (string (optional)) – Label. Default is None.

  • label_seed (list of strings) – Label seed. Default is None.

  • **kwargs – Additional keyword arguments are added to the attrs dictionary and to the natural namespace of the object (if possible).
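
For example, a minimal sketch of typical construction; in practice channels are usually created through the parent Data object's create_channel method rather than by calling Channel directly, and the name and values below are illustrative:

    import numpy as np
    import WrightTools as wt

    # create an empty data object and attach a channel to it
    data = wt.Data(name="example")
    values = np.linspace(-1.0, 1.0, 51) ** 2
    channel = data.create_channel("signal", values=values)
    print(channel.natural_name, channel.shape)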

Methods

__init__(parent, id, *[, units, null, ...])

Construct a channel object.

argmax()

Index of the maximum, ignoring nans.

argmin()

Index of the minimum, ignoring nans.
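
For example, a short sketch using argmax and argmin to locate the extreme values; the returned indices are tuples suitable for indexing back into the channel, and the channel itself is illustrative:

    import numpy as np
    import WrightTools as wt

    data = wt.Data(name="example")
    channel = data.create_channel("signal", values=np.sin(np.linspace(0.0, 6.28, 100)))

    # index tuples of the largest and smallest values, NaNs ignored
    print(channel.argmax(), channel[channel.argmax()])
    print(channel.argmin(), channel[channel.argmin()])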

asstr([encoding, errors])

Get a wrapper to read string data as Python strings.

astype(dtype)

Get a wrapper allowing you to perform reads to a different destination type.

chunkwise(func, *args, **kwargs)

Execute a function for each chunk in the dataset.

clip([min, max, replace])

Clip values outside of a defined range.
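
For example, a minimal sketch, assuming the default replace behavior in which out-of-range points become NaN; the channel name and bounds are illustrative:

    import numpy as np
    import WrightTools as wt

    data = wt.Data(name="example")
    channel = data.create_channel("signal", values=np.linspace(0.0, 1.0, 11))

    # points outside [0.2, 0.8] are replaced
    channel.clip(min=0.2, max=0.8)
    print(channel.min(), channel.max())  # extremes now fall inside the range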

convert(destination_units)

Convert units.
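
For example, a sketch converting a channel stored in wavenumbers to electronvolts; the channel is illustrative, and destination_units must be a unit name known to WrightTools.units:

    import numpy as np
    import WrightTools as wt

    data = wt.Data(name="example")
    channel = data.create_channel("response", values=np.linspace(1000.0, 3000.0, 5), units="wn")

    # rewrite the stored values and the units attribute in the new unit system
    channel.convert("eV")
    print(channel.units, channel[:])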

fields(names, *[, _prior_dtype])

Get a wrapper to read a subset of fields from a compound data type.

flush()

Flush the dataset data and metadata to the file.

iter_chunks([sel])

Return chunk iterator.

len()

The size of the first axis.

log([base, floor])

Take the log of the entire dataset.

log10([floor])

Take the log base 10 of the entire dataset.

log2([floor])

Take the log base 2 of the entire dataset.
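
For example, a sketch of log10 on strictly positive, illustrative data; log and log2 behave analogously:

    import numpy as np
    import WrightTools as wt

    data = wt.Data(name="example")
    channel = data.create_channel("counts", values=np.logspace(0, 4, 5))

    # take the base-10 logarithm in place
    channel.log10()
    print(channel[:])  # approximately [0, 1, 2, 3, 4]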

mag()

Channel magnitude (maximum deviation from null).

make_scale([name])

Make this dataset an HDF5 dimension scale.

max()

Maximum, ignoring nans.

min()

Minimum, ignoring nans.

normalize([mag])

Normalize the channel: set null to 0 and the magnitude (mag) to the given value.
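
For example, a sketch assuming the default magnitude of 1 and the default null of 0; the values are illustrative:

    import numpy as np
    import WrightTools as wt

    data = wt.Data(name="example")
    channel = data.create_channel("signal", values=np.linspace(0.0, 5.0, 6))

    # scale so that the maximum deviation from null becomes the requested magnitude
    channel.normalize()
    print(channel.null, channel.mag())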

read_direct(dest[, source_sel, dest_sel])

Read data directly from HDF5 into an existing NumPy array.

refresh()

Refresh the dataset metadata by reloading from the file.

resize(size[, axis])

Resize the dataset, or the specified axis.

slices()

Return a generator yielding tuples of slice objects.

symmetric_root([root])

Take the signed (sign-preserving) root of the entire dataset.

trim(neighborhood[, method, factor, ...])

Remove outliers from the dataset.
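
For example, a sketch that flags a single spike in otherwise smooth, illustrative data; neighborhood is given per dimension (here a neighborhood of 3 along the only axis), and method, factor and the remaining options keep their defaults:

    import numpy as np
    import WrightTools as wt

    data = wt.Data(name="example")
    values = np.sin(np.linspace(0.0, 6.28, 25))
    values[12] = 100.0  # a single outlier
    channel = data.create_channel("signal", values=values)

    # points that deviate strongly from their neighborhood are replaced
    channel.trim([3])
    print(np.nanmax(np.abs(channel[:])))  # the spike should no longer dominate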

virtual_sources()

Get a list of the data mappings for a virtual dataset.

write_direct(source[, source_sel, dest_sel])

Write data directly to HDF5 from a NumPy array.

Attributes

attrs

Attributes attached to this object

chunks

Dataset chunks (or None)

class_name

Class name.

compression

Compression strategy (or None)

compression_opts

Compression setting.

dims

Access dimension scales attached to this dataset.

dtype

Numpy dtype representing the datatype

external

External file settings.

file

Return a File instance associated with this object

fillvalue

Fill value for this dataset (0 by default)

fletcher32

Fletcher32 filter is present (T/F)

full

Array repeated out to the full shape of the parent data object.

fullpath

Full path: file and internal structure.

id

Low-level identifier appropriate for this object

is_scale

Return True if this dataset is also a dimension scale.

is_virtual

Check if this is a virtual dataset

major_extent

Maximum deviation from null.

maxshape

Shape up to which this dataset can be resized.

minor_extent

Minimum deviation from null.

name

Return the full name of this object.

natural_name

Natural name of the dataset.

nbytes

Numpy-style attribute giving the raw dataset size as the number of bytes

ndim

Numpy-style attribute giving the number of dimensions

null

Channel null value (0 if not set).

parent

Parent.

points

Squeezed array.

ref

An (opaque) HDF5 reference to this object

regionref

Create a region reference (Datasets only).

scaleoffset

Scale/offset filter settings.

shape

Numpy-style shape tuple giving dataset dimensions

shuffle

Shuffle filter present (T/F)

signed

Channel signed flag.

size

Numpy-style attribute giving the total dataset size

units

Units.
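
Finally, a short sketch touching the channel-specific attributes above (null, signed, major_extent, minor_extent, points); the values are illustrative and signed is set by hand for demonstration:

    import numpy as np
    import WrightTools as wt

    data = wt.Data(name="example")
    channel = data.create_channel("delta", values=np.linspace(-2.0, 5.0, 8))
    channel.signed = True

    print(channel.null)          # 0 unless explicitly set
    print(channel.signed)        # True
    print(channel.major_extent)  # largest deviation from null (5.0 here)
    print(channel.minor_extent)  # minimum deviation from null
    print(channel.points.shape)  # the array with singleton dimensions squeezed out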