TensorBlock¶
- class metatensor.torch.TensorBlock(values: Tensor, samples: Labels, components: List[Labels], properties: Labels)[source]¶
Basic building block for a TensorMap.

A single block contains an n-dimensional torch.Tensor of values, and n sets of Labels (one for each dimension). The first dimension is the samples dimension, the last dimension is the properties dimension. Any intermediate dimension is called a component dimension.

Samples should be used to describe what we are representing, while properties should contain information about how we are representing it. Finally, components should be used to describe vectorial or tensorial components of the data.

A block can also contain gradients of the values with respect to a variety of parameters. In this case, each gradient is a TensorBlock with a separate set of samples and possibly components, but which shares the same property labels as the original TensorBlock.

See also
The pure Python version of this class metatensor.TensorBlock, and the differences between TorchScript and Python API for metatensor.

- Parameters:
values (Tensor) – tensor containing the values for this block
samples (Labels) – labels describing the samples (first dimension of the array)
components (List[Labels]) – list of labels describing the components (intermediate dimensions of the array). This should be an empty list for scalar/invariant data.
properties (Labels) – labels describing the properties (last dimension of the array)
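For orientation, here is a minimal sketch of constructing a block with a single component dimension, mirroring the metadata used in the examples further down this page (the later property snippets reuse this block):

>>> import torch
>>> from metatensor.torch import TensorBlock, Labels
>>> block = TensorBlock(
...     values=torch.full((3, 1, 1), 1.0),
...     samples=Labels(["system"], torch.tensor([[0], [2], [4]])),
...     components=[Labels.range("component", 1)],
...     properties=Labels.range("property", 1),
... )
>>> print(block)
TensorBlock
    samples (3): ['system']
    components (1): ['component']
    properties (1): ['property']
    gradients: None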
Warning
PyTorch can execute static functions (like this one) coming from a TorchScript extension, but fails when trying to save code calling this function with torch.jit.save(), giving the following error:

Failed to downcast a Function to a GraphFunction

This issue is reported as PyTorch#115639. In the meantime, if you need to torch.jit.save() code containing this function, you can implement it manually in a few lines.

- property shape¶
Get the shape of the values array in this block.
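As a quick sketch (reusing the block constructed above), the shape matches that of the underlying values tensor:

>>> list(block.shape) == list(block.values.shape)
True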
- property samples: Labels¶
Get the sample Labels for this block.

The entries in these labels describe the first dimension of the values array.
- property components: List[Labels]¶
Get the component Labels for this block.

The entries in these labels describe intermediate dimensions of the values array.
- property properties: Labels¶
Get the property Labels for this block.

The entries in these labels describe the last dimension of the values array. The properties are guaranteed to be the same for values and gradients in the same block.
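A small sketch of inspecting the metadata names (again reusing the block built near the top of this page):

>>> block.samples.names
['system']
>>> [c.names for c in block.components]
[['component']]
>>> block.properties.names
['property']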
- copy() TensorBlock [source]¶
Get a deep copy of this block, including all the data and metadata.
- Return type:
TensorBlock
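To illustrate the deep-copy behaviour, here is a sketch continuing with the block from above, assuming block.values exposes the underlying tensor so that in-place writes affect the original block but not its copy:

>>> copied = block.copy()
>>> block.values[0, 0, 0] = -1.0
>>> print(copied.values[0, 0, 0].item())
1.0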
- add_gradient(parameter: str, gradient: TensorBlock)[source]¶
Add gradient with respect to parameter in this block.

- Parameters:
parameter (str) – add gradients with respect to this parameter (e.g. positions, cell, …)
gradient (TensorBlock) – a TensorBlock whose values contain the gradients of this TensorBlock's values with respect to parameter. The labels of the gradient TensorBlock should be organized as follows:
- its samples must contain "sample" as the first dimension, with values containing the index of the corresponding samples in this TensorBlock, and arbitrary supplementary samples dimensions;
- its components must contain at least the same components as this TensorBlock, with any additional components coming before those;
- its properties must match exactly those of this TensorBlock.
>>> import torch
>>> from metatensor.torch import TensorBlock, Labels
>>> block = TensorBlock(
...     values=torch.full((3, 1, 1), 1.0),
...     samples=Labels(["system"], torch.tensor([[0], [2], [4]])),
...     components=[Labels.range("component", 1)],
...     properties=Labels.range("property", 1),
... )
>>> gradient = TensorBlock(
...     values=torch.full((2, 1, 1), 11.0),
...     samples=Labels(
...         names=["sample", "parameter"],
...         values=torch.tensor([[0, -2], [2, 3]]),
...     ),
...     components=[Labels.range("component", 1)],
...     properties=Labels.range("property", 1),
... )
>>> block.add_gradient("parameter", gradient)
>>> print(block)
TensorBlock
    samples (3): ['system']
    components (1): ['component']
    properties (1): ['property']
    gradients: ['parameter']
- gradient(parameter: str) TensorBlock [source]¶
Get the gradient of the block values with respect to the given parameter.

- Parameters:
parameter (str) – check for gradients with respect to this parameter (e.g. positions, cell, …)
- Return type:
TensorBlock
>>> import torch
>>> from metatensor.torch import TensorBlock, Labels
>>> block = TensorBlock(
...     values=torch.full((3, 1, 5), 1.0),
...     samples=Labels(["system"], torch.tensor([[0], [2], [4]])),
...     components=[Labels.range("component", 1)],
...     properties=Labels.range("property", 5),
... )
>>> positions_gradient = TensorBlock(
...     values=torch.full((2, 3, 1, 5), 11.0),
...     samples=Labels(["sample", "atom"], torch.tensor([[0, 2], [2, 3]])),
...     components=[
...         Labels.range("direction", 3),
...         Labels.range("component", 1),
...     ],
...     properties=Labels.range("property", 5),
... )
>>> block.add_gradient("positions", positions_gradient)
>>> cell_gradient = TensorBlock(
...     values=torch.full((2, 3, 3, 1, 5), 15.0),
...     samples=Labels.range("sample", 2),
...     components=[
...         Labels.range("direction_1", 3),
...         Labels.range("direction_2", 3),
...         Labels.range("component", 1),
...     ],
...     properties=Labels.range("property", 5),
... )
>>> block.add_gradient("cell", cell_gradient)
>>> positions_gradient = block.gradient("positions")
>>> print(positions_gradient)
Gradient TensorBlock ('positions')
    samples (2): ['sample', 'atom']
    components (3, 1): ['direction', 'component']
    properties (5): ['property']
    gradients: None
>>> cell_gradient = block.gradient("cell")
>>> print(cell_gradient)
Gradient TensorBlock ('cell')
    samples (2): ['sample']
    components (3, 3, 1): ['direction_1', 'direction_2', 'component']
    properties (5): ['property']
    gradients: None
- has_gradient(parameter: str) bool [source]¶
Check if this block contains gradient information with respect to the given parameter.
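Continuing from the gradient() example above, a brief sketch ("spin" is just an arbitrary parameter name that was never added to the block):

>>> block.has_gradient("positions")
True
>>> block.has_gradient("spin")
False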
- gradients() List[Tuple[str, TensorBlock]] [source]¶
Get a list of all (parameter, gradients) pairs defined in this block.
- Return type:
List[Tuple[str, TensorBlock]]
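Again continuing from the example above, a sketch of listing the stored gradients (assuming they are returned in insertion order):

>>> [parameter for parameter, _gradient in block.gradients()]
['positions', 'cell']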
- property dtype: dtype¶
Get the dtype of all the values and gradient arrays stored inside this TensorBlock.

Warning
This property will only work when running the code in TorchScript mode (i.e. after calling torch.jit.script() or torch.jit.trace() on your own code). Trying to use this property in Python mode will result in block.dtype being an integer, which compares as False to any torch.dtype:

import torch
from metatensor.torch import Labels, TensorBlock

values = torch.tensor([[42.0]])
block = TensorBlock(
    values=values,
    samples=Labels.range("s", 1),
    components=[],
    properties=Labels.range("p", 1),
)

print(block.dtype)
# will output '6'

print(block.dtype == values.dtype)
# will output 'False' in Python, 'True' in TorchScript
print(block.dtype == block.values.dtype)
# will output 'False' in Python, 'True' in TorchScript
As a workaround, you can define a TorchScript function to do dtype manipulations:
@torch.jit.script
def dtype_equal(block: TensorBlock, dtype: torch.dtype) -> bool:
    return block.dtype == dtype


print(dtype_equal(block, torch.float32))
# will output 'True'
- property device: device¶
Get the device of all the values and gradient arrays stored inside this TensorBlock.
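As a small sketch, reusing the block defined just above (which is allocated on the CPU):

>>> print(block.device)
cpu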
- to(dtype: dtype | None = None, device: device | None = None, arrays: str | None = None) TensorBlock [source]¶
Move all the arrays in this block (values, gradients and labels) to the given dtype, device and arrays backend.

- Parameters:
dtype (dtype | None) – new dtype to use for all arrays. The dtype stays the same if this is set to None.
device (device | None) – new device to use for all arrays. The device stays the same if this is set to None.
arrays (str | None) – new backend to use for the arrays. This parameter is here for compatibility with the pure Python API; it can only be set to "torch" or None and does nothing.
- Return type:
TensorBlock
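A brief sketch of the usual call pattern, converting the block from the examples above to double precision (moving to a GPU would work the same way with device="cuda", assuming one is available):

>>> block_f64 = block.to(dtype=torch.float64)
>>> print(block_f64.values.dtype)
torch.float64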
- static load(path: str) TensorBlock [source]¶
Load a serialized TensorBlock from the file at path; this is equivalent to metatensor.torch.load_block().

- Parameters:
path (str) – Path of the file containing a saved TensorBlock
- Return type:
TensorBlock
Warning
PyTorch can execute static functions (like this one) coming from a TorchScript extension, but fails when trying to save code calling this function with torch.jit.save(), giving the following error:

Failed to downcast a Function to a GraphFunction
This issue is reported as PyTorch#115639. In the meantime, if you need to save your code with torch.jit.save(), you should use metatensor.torch.load_block() instead of this function.
- static load_buffer(buffer: Tensor) TensorBlock [source]¶
Load a serialized TensorBlock from an in-memory buffer; this is equivalent to metatensor.torch.load_block_buffer().

- Parameters:
buffer (Tensor) – torch Tensor representing an in-memory buffer
- Return type:
TensorBlock
Warning
PyTorch can execute static functions (like this one) coming from a TorchScript extension, but fails when trying to save code calling this function with torch.jit.save(), giving the following error:

Failed to downcast a Function to a GraphFunction
This issue is reported as PyTorch#115639. In the meantime, if you need to save your code with torch.jit.save(), you should use metatensor.torch.load_block_buffer() instead of this function.
- save(path: str)[source]¶
Save this TensorBlock to a file; this is equivalent to metatensor.torch.save().

- Parameters:
path (str) – Path of the file. If the file already exists, it will be overwritten
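A sketch of the usual file round trip, continuing with the block from the examples above (the temporary directory and the "block.mts" file name are illustrative choices, not requirements of the API):

>>> import os
>>> import tempfile
>>> with tempfile.TemporaryDirectory() as directory:
...     path = os.path.join(directory, "block.mts")
...     block.save(path)
...     loaded = TensorBlock.load(path)
>>> print(loaded.samples == block.samples)
True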
- save_buffer() Tensor [source]¶
Save this TensorBlock to an in-memory buffer; this is equivalent to metatensor.torch.save_buffer().

- Return type:
Tensor
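A final sketch, round-tripping through an in-memory buffer instead of a file (again reusing the block from the examples above):

>>> buffer = block.save_buffer()
>>> loaded = TensorBlock.load_buffer(buffer)
>>> print(loaded.properties == block.properties)
True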