Serialization¶
- metatensor.save(file: str | Path | BinaryIO, data: TensorMap | TensorBlock | Labels, use_numpy=False)[source]¶
Save the given data (one of TensorMap, TensorBlock, or Labels) to the given file.
TensorMap objects are serialized using numpy's NPZ format, i.e. a ZIP file without compression (storage method is STORED), where each array is stored as a .npy file inside the archive. See the C API documentation for more information on the format.
The recommended file extension when saving data is .mts, to prevent confusion with generic .npz files.
- Parameters:
  - file (str | Path | BinaryIO) – where to save the data. This can be a string, a pathlib.Path containing the path of the file to write, or a file-like object that should be opened in binary mode.
  - data (TensorMap | TensorBlock | Labels) – data to serialize and save
  - use_numpy – should we use numpy or the native serializer implementation? Numpy should be able to process more dtypes than the native implementation, which is limited to float64, but the native implementation is usually faster than going through numpy. This is ignored when saving Labels.
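As a minimal sketch (the file name, label names, and array sizes below are placeholders, not part of the API), a small hand-built TensorMap can be written to disk like this:

```python
import numpy as np
import metatensor
from metatensor import Labels, TensorBlock, TensorMap

# build a minimal single-block TensorMap, purely for illustration
block = TensorBlock(
    values=np.full((3, 2), 1.0),
    samples=Labels.range("sample", 3),
    components=[],
    properties=Labels.range("property", 2),
)
tensor = TensorMap(keys=Labels.single(), blocks=[block])

# write it out, using the recommended .mts extension
metatensor.save("example.mts", tensor)
```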
- metatensor.load(file: str | Path | BinaryIO, use_numpy=False) TensorMap[source]¶
Load a previously saved TensorMap from the given file.
TensorMap objects are serialized using numpy's NPZ format, i.e. a ZIP file without compression (storage method is STORED), where each array is stored as a .npy file inside the archive. See the C API documentation for more information on the format.
- Parameters:
  - file (str | Path | BinaryIO) – file to load: this can be a string, a pathlib.Path containing the path to the file to load, or a file-like object that should be opened in binary mode.
  - use_numpy – should we use numpy or the native implementation? Numpy should be able to process more dtypes than the native implementation, which is limited to float64, but the native implementation is usually faster than going through numpy.
- Return type:
  TensorMap
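Continuing the sketch from metatensor.save() above, and assuming example.mts exists on disk:

```python
import metatensor

# read the TensorMap back from disk
tensor = metatensor.load("example.mts")
print(tensor.keys)
print(tensor.block(0).values)
```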
- metatensor.load_block(file: str | Path | BinaryIO, use_numpy=False) TensorBlock[source]¶
Load a previously saved TensorBlock from the given file.
- Parameters:
  - file (str | Path | BinaryIO) – file to load: this can be a string, a pathlib.Path containing the path to the file to load, or a file-like object that should be opened in binary mode.
  - use_numpy – should we use numpy or the native implementation? Numpy should be able to process more dtypes than the native implementation, which is limited to float64, but the native implementation is usually faster than going through numpy.
- Return type:
  TensorBlock
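A sketch of a round trip for a single block (file name and sizes are placeholders): a stand-alone TensorBlock is saved with metatensor.save() and read back with this function.

```python
import numpy as np
import metatensor
from metatensor import Labels, TensorBlock

# a stand-alone TensorBlock, saved and loaded without a surrounding TensorMap
block = TensorBlock(
    values=np.zeros((4, 3)),
    samples=Labels.range("sample", 4),
    components=[],
    properties=Labels.range("property", 3),
)
metatensor.save("block.mts", block)

loaded = metatensor.load_block("block.mts")
print(loaded.values.shape)  # (4, 3)
```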
- metatensor.load_labels(file: str | Path | BinaryIO) Labels[source]¶
Load previously saved Labels from the given file.
- Parameters:
  - file (str | Path | BinaryIO) – file to load: this can be a string, a pathlib.Path containing the path to the file to load, or a file-like object opened in binary mode.
- Return type:
  Labels
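The same round trip works for Labels on their own; as a sketch, with a placeholder name and file:

```python
import metatensor
from metatensor import Labels

# Labels can be serialized by themselves, e.g. to keep sample identifiers around
labels = Labels.range("structure", 10)
metatensor.save("labels.mts", labels)

loaded = metatensor.load_labels("labels.mts")
print(loaded.names, len(loaded))
```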
- metatensor.io.save_buffer(data: TensorMap | TensorBlock | Labels, use_numpy=False) memoryview[source]¶
Save the given data (one of TensorMap, TensorBlock, or Labels) to an in-memory buffer.
- Parameters:
  - data (TensorMap | TensorBlock | Labels) – data to serialize and save
  - use_numpy – should we use numpy or the native serializer implementation?
- Return type:
  memoryview
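A sketch of serializing to memory instead of a file, for instance to send the bytes over a network; here tensor is assumed to be any existing TensorMap, e.g. the one built in the metatensor.save() sketch above:

```python
import metatensor
import metatensor.io

# `tensor` is any existing TensorMap (see the metatensor.save() sketch)
buffer = metatensor.io.save_buffer(tensor)

# the memoryview can be converted to bytes and shipped anywhere
raw = bytes(buffer)
print(len(raw), "bytes")
```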
- metatensor.io.load_buffer(buffer: bytes | bytearray | memoryview, use_numpy=False) TensorMap[source]¶
Load a previously saved TensorMap from an in-memory buffer.
- Parameters:
  - buffer (bytes | bytearray | memoryview) – in-memory buffer containing a serialized TensorMap
  - use_numpy – should we use numpy or the native implementation?
- Return type:
  TensorMap
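Continuing that sketch, the buffer (or bytes made from it) can be turned back into a TensorMap:

```python
import metatensor.io

# `raw` is the bytes object produced in the save_buffer() sketch above
restored = metatensor.io.load_buffer(raw)
print(restored.keys)
```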
- metatensor.io.load_custom_array(path: str | Path, create_array: CreateArrayCallback) TensorMap[source]¶
Load a previously saved TensorMap from the given path using a custom array creation callback.
This is an advanced functionality, which should not be needed by most users.
This function makes it possible to specify the kind of array to use when loading the data, through the create_array callback. This callback should take three arguments: a pointer to the shape, the number of elements in the shape, and a pointer to the mts_array_t to be filled. metatensor.io.create_numpy_array() and metatensor.io.create_torch_array() can be used to load data into numpy and torch arrays respectively.
- Parameters:
  - path (str | Path) – path of the file to load
  - create_array (CreateArrayCallback) – callback used to create arrays as needed
- Return type:
  TensorMap
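As a sketch, assuming torch is installed, that example.mts is the placeholder file written earlier, and that the provided callback is passed in directly, loading into torch tensors could look like this:

```python
import metatensor.io

# load block values as torch.Tensor (CPU, torch.float64) instead of numpy arrays
tensor = metatensor.io.load_custom_array(
    "example.mts", metatensor.io.create_torch_array
)
print(type(tensor.block(0).values))  # torch.Tensor
```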
- metatensor.io.load_buffer_custom_array(buffer: bytes | bytearray | memoryview, create_array: CreateArrayCallback) TensorMap[source]¶
Load a previously saved TensorMap from the given buffer using a custom array creation callback.
This is an advanced functionality, which should not be needed by most users.
This function makes it possible to specify the kind of array to use when loading the data, through the create_array callback. This callback should take three arguments: a pointer to the shape, the number of elements in the shape, and a pointer to the mts_array_t to be filled. metatensor.io.create_numpy_array() and metatensor.io.create_torch_array() can be used to load data into numpy and torch arrays respectively.
- Parameters:
  - buffer (bytes | bytearray | memoryview) – in-memory buffer containing a saved TensorMap
  - create_array (CreateArrayCallback) – callback used to create arrays as needed
- Return type:
  TensorMap
- metatensor.io.load_block_buffer(buffer: bytes | bytearray | memoryview, use_numpy=False) TensorBlock[source]¶
Load a previously saved TensorBlock from an in-memory buffer.
- Parameters:
  - buffer (bytes | bytearray | memoryview) – in-memory buffer containing a serialized TensorBlock
  - use_numpy – should we use numpy or the native implementation?
- Return type:
  TensorBlock
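A sketch of an in-memory round trip for a single block, reusing the placeholder block from the metatensor.load_block() example above:

```python
import metatensor.io

# `block` is any existing TensorBlock (see the load_block() sketch)
buffer = metatensor.io.save_buffer(block)
restored = metatensor.io.load_block_buffer(buffer)
print(restored.samples == block.samples)
```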
- metatensor.io.load_block_custom_array(path: str | Path, create_array: CreateArrayCallback) TensorBlock[source]¶
Load a previously saved TensorBlock from the given path using a custom array creation callback.
This is an advanced functionality, which should not be needed by most users.
This function makes it possible to specify the kind of array to use when loading the data, through the create_array callback. This callback should take three arguments: a pointer to the shape, the number of elements in the shape, and a pointer to the mts_array_t to be filled. metatensor.io.create_numpy_array() and metatensor.io.create_torch_array() can be used to load data into numpy and torch arrays respectively.
- Parameters:
  - path (str | Path) – path of the file to load
  - create_array (CreateArrayCallback) – callback used to create arrays as needed
- Return type:
  TensorBlock
- metatensor.io.load_block_buffer_custom_array(buffer: bytes | bytearray | memoryview, create_array: CreateArrayCallback) TensorBlock[source]¶
Load a previously saved TensorBlock from the given buffer using a custom array creation callback.
This is an advanced functionality, which should not be needed by most users.
This function makes it possible to specify the kind of array to use when loading the data, through the create_array callback. This callback should take three arguments: a pointer to the shape, the number of elements in the shape, and a pointer to the mts_array_t to be filled. metatensor.io.create_numpy_array() and metatensor.io.create_torch_array() can be used to load data into numpy and torch arrays respectively.
- Parameters:
  - buffer (bytes | bytearray | memoryview) – in-memory buffer containing a saved TensorBlock
  - create_array (CreateArrayCallback) – callback used to create arrays as needed
- Return type:
  TensorBlock
- metatensor.io.create_numpy_array()[source]¶
Callback function that can be used with metatensor.io.load_custom_array() to load data in numpy arrays.
- metatensor.io.create_torch_array()[source]¶
Callback function that can be used with metatensor.io.load_custom_array() to load data in torch tensors. The resulting tensors are stored on CPU, and their dtype is torch.float64.
- metatensor.io.load_labels_buffer(buffer: bytes | bytearray | memoryview) Labels[source]¶
Load previously saved Labels from an in-memory buffer.
- Parameters:
  - buffer (bytes | bytearray | memoryview) – in-memory buffer containing saved Labels
- Return type:
  Labels
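A sketch of the corresponding in-memory round trip for Labels, with a placeholder name and size:

```python
import metatensor.io
from metatensor import Labels

# serialize some Labels to memory and read them back
labels = Labels.range("sample", 5)
buffer = metatensor.io.save_buffer(labels)
restored = metatensor.io.load_labels_buffer(buffer)
print(restored == labels)  # True
```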