Labels¶
- class metatensor.torch.Labels(names: str | List[str] | Tuple[str, ...], values: Tensor)[source]¶
A set of labels carrying metadata associated with a TensorMap.
The metadata can be thought of as a list of tuples, where each value in the tuple also has an associated dimension name. In practice, the dimension names are stored separately from the values, and the values are in a 2-dimensional array of integers with the shape (n_entries, n_dimensions). Each row/entry in this array is unique, and the rows are often (but not always) sorted in lexicographic order.
See also
The pure Python version of this class metatensor.Labels, and the differences between TorchScript and Python API for metatensor.
>>> import torch
>>> from metatensor.torch import Labels
>>> labels = Labels(
...     names=["system", "atom", "type"],
...     values=torch.tensor([(0, 1, 8), (0, 2, 1), (0, 5, 1)]),
... )
>>> print(labels)
Labels(
    system  atom  type
      0      1     8
      0      2     1
      0      5     1
)
>>> labels.names
['system', 'atom', 'type']
>>> labels.values
tensor([[0, 1, 8],
        [0, 2, 1],
        [0, 5, 1]], dtype=torch.int32)
It is possible to create a view inside a Labels, selecting only a subset of columns/dimensions:
>>> # single dimension
>>> view = labels.view("atom")
>>> view.names
['atom']
>>> view.values
tensor([[1],
        [2],
        [5]], dtype=torch.int32)
>>> # multiple dimensions
>>> view = labels.view(["atom", "system"])
>>> view.names
['atom', 'system']
>>> view.values
tensor([[1, 0],
        [2, 0],
        [5, 0]], dtype=torch.int32)
>>> view.is_view()
True
>>> # we can convert a view back to a full, owned Labels
>>> owned_labels = view.to_owned()
>>> owned_labels.is_view()
False
One can also iterate over labels entries, or directly index the Labels to get a specific entry:
>>> entry = labels[0]  # or labels.entry(0)
>>> entry.names
['system', 'atom', 'type']
>>> entry.values
tensor([0, 1, 8], dtype=torch.int32)
>>> for entry in labels:
...     print(entry)
...
LabelsEntry(system=0, atom=1, type=8)
LabelsEntry(system=0, atom=2, type=1)
LabelsEntry(system=0, atom=5, type=1)
Or get all the values associated with a given dimension/column name:
>>> labels.column("atom")
tensor([1, 2, 5], dtype=torch.int32)
>>> labels["atom"]  # alternative syntax for the above
tensor([1, 2, 5], dtype=torch.int32)
Labels can be checked for equality:
>>> owned_labels == labels
False
>>> labels == labels
True
Finally, it is possible to check if a value is inside (non-view) labels, and get the corresponding position:
>>> labels.position([0, 2, 1])
1
>>> print(labels.position([0, 2, 4]))
None
>>> (0, 2, 4) in labels
False
>>> labels[2] in labels
True
- Parameters:
names (str | List[str] | Tuple[str, ...]) – names of the dimensions in the new labels
values (Tensor) – 2-dimensional tensor of integers containing the values of the labels, with one row per entry and one column per dimension
- property values: Tensor¶
Values associated with each dimension of the Labels, stored as a 2-dimensional tensor of 32-bit integers.
Warning
The values should be treated as immutable/read-only (we would like to enforce this automatically, but PyTorch can not mark a torch.Tensor as immutable). Any modification to this tensor can break the underlying data structure, or make it out of sync with the rest of the Labels.
- static single() Labels [source]¶
Create Labels to use when there is no relevant metadata and only one entry in the corresponding dimension (e.g. keys when a tensor map contains a single block).
Warning
PyTorch can execute static functions (like this one) coming from a TorchScript extension, but fails when trying to save code calling this function with torch.jit.save(), giving the following error:
Failed to downcast a Function to a GraphFunction
This issue is reported as PyTorch#115639. In the meantime, if you need to torch.jit.save() code containing this function, you can implement it manually in a few lines.
- Return type:
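As an illustration, a minimal sketch of what this function returns: a single dimension with exactly one entry. The name of that dimension is an implementation detail and is not shown here.
>>> from metatensor.torch import Labels
>>> labels = Labels.single()
>>> len(labels)
1
>>> labels.values
tensor([[0]], dtype=torch.int32)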
- static empty(names: str | List[str] | Tuple[str, ...]) Labels [source]¶
Create Labels with the given names but no values.
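For example, a minimal sketch: an empty Labels keeps its dimension names but has zero entries.
>>> from metatensor.torch import Labels
>>> labels = Labels.empty(["foo", "bar"])
>>> labels.names
['foo', 'bar']
>>> len(labels)
0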
- static range(name: str, end: int) Labels [source]¶
Create Labels with a single dimension using the given name and values in the [0, end) range.
- Parameters:
- Return type:
Warning
PyTorch can execute static functions (like this one) coming from a TorchScript extension, but fails when trying to save code calling this function with torch.jit.save(), giving the following error:
Failed to downcast a Function to a GraphFunction
This issue is reported as PyTorch#115639. In the meantime, if you need to torch.jit.save() code containing this function, you can implement it manually in a few lines.
>>> from metatensor.torch import Labels
>>> labels = Labels.range("dummy", 7)
>>> labels.names
['dummy']
>>> labels.values
tensor([[0],
        [1],
        [2],
        [3],
        [4],
        [5],
        [6]], dtype=torch.int32)
- __getitem__(dimension: str) Tensor [source]¶
- __getitem__(index: int) LabelsEntry
When indexing with a string, get the values for the corresponding dimension as a 1-dimensional array (i.e. Labels.column()).
When indexing with an integer, get the corresponding row/labels entry (i.e. Labels.entry()).
See also
Labels.view() to extract the values associated with multiple columns/dimensions.
- __contains__(entry: LabelsEntry | Tensor | List[int] | Tuple[int, ...]) bool [source]¶
check if these Labels contain the given entry
- __eq__(other: Labels) bool [source]¶
check if two sets of labels are equal (same dimension names and same values)
- __ne__(other: Labels) bool [source]¶
check if two sets of labels are not equal (different dimension names or different values)
- static load(path: str) Labels [source]¶
Load a serialized Labels from the file at path; this is equivalent to metatensor.torch.load_labels().
Warning
PyTorch can execute static functions (like this one) coming from a TorchScript extension, but fails when trying to save code calling this function with torch.jit.save(), giving the following error:
Failed to downcast a Function to a GraphFunction
This issue is reported as PyTorch#115639. In the meantime, you should use metatensor.torch.load_labels() instead of this function if you need to save your code with torch.jit.save().
- static load_buffer(buffer: Tensor) Labels [source]¶
Load a serialized Labels from an in-memory buffer; this is equivalent to metatensor.torch.load_labels_buffer().
Warning
PyTorch can execute static functions (like this one) coming from a TorchScript extension, but fails when trying to save code calling this function with torch.jit.save(), giving the following error:
Failed to downcast a Function to a GraphFunction
This issue is reported as PyTorch#115639. In the meantime, you should use metatensor.torch.load_labels_buffer() instead of this function if you need to save your code with torch.jit.save().
- save(path: str)[source]¶
Save these Labels to a file; this is equivalent to metatensor.torch.save().
- Parameters:
path (str) – Path of the file. If the file already exists, it will be overwritten
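As an illustration, a minimal sketch of a save/load round trip; the temporary directory and the file name used here are just examples, not requirements of the API.
>>> import os
>>> import tempfile
>>> import torch
>>> import metatensor.torch
>>> from metatensor.torch import Labels
>>> labels = Labels("foo", torch.tensor([[1], [2]]))
>>> path = os.path.join(tempfile.mkdtemp(), "labels.mts")
>>> labels.save(path)
>>> metatensor.torch.load_labels(path) == labels
True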
- save_buffer() Tensor [source]¶
Save these Labels to an in-memory buffer; this is equivalent to metatensor.torch.save_buffer().
- Return type:
- append(name: str, values: Tensor) Labels [source]¶
Append a new dimension to the end of the Labels.
- Parameters:
- Return type:
>>> import torch
>>> from metatensor.torch import Labels
>>> label = Labels("foo", torch.tensor([[42]]))
>>> print(label)
Labels(
    foo
    42
)
>>> print(label.append(name="bar", values=torch.tensor([10])))
Labels(
    foo  bar
    42   10
)
- insert(index: int, name: str, values: Tensor) Labels [source]¶
Insert a new dimension before index in the Labels.
- Parameters:
- Return type:
>>> import torch
>>> from metatensor.torch import Labels
>>> label = Labels("foo", torch.tensor([[42]]))
>>> print(label)
Labels(
    foo
    42
)
>>> print(label.insert(0, name="bar", values=torch.tensor([10])))
Labels(
    bar  foo
    10   42
)
- permute(dimensions_indexes: List[int]) Labels [source]¶
Permute dimensions according to dimensions_indexes in the Labels.
- Parameters:
dimensions_indexes (List[int]) – desired ordering of the dimensions
- Raises:
ValueError – if the length of dimensions_indexes does not match the number of dimensions in these Labels
ValueError – if duplicate values are present in dimensions_indexes
- Return type:
>>> import torch
>>> from metatensor.torch import Labels
>>> label = Labels(["foo", "bar", "baz"], torch.tensor([[42, 10, 3]]))
>>> print(label)
Labels(
    foo  bar  baz
    42   10    3
)
>>> print(label.permute([2, 0, 1]))
Labels(
    baz  foo  bar
     3   42   10
)
- remove(name: str) Labels [source]¶
Remove name from the dimensions of the Labels.
Removal can only be performed if the resulting Labels instance will be unique.
- Parameters:
name (str) – name to be removed
- Raises:
ValueError – if the name is not present.
- Return type:
>>> import torch
>>> from metatensor.torch import Labels
>>> label = Labels(["foo", "bar"], torch.tensor([[42, 10]]))
>>> print(label)
Labels(
    foo  bar
    42   10
)
>>> print(label.remove(name="bar"))
Labels(
    foo
    42
)
If the new Labels is not unique, an error is raised.
>>> label = Labels(["foo", "bar"], torch.tensor([[42, 10], [42, 11]]))
>>> print(label)
Labels(
    foo  bar
    42   10
    42   11
)
>>> try:
...     label.remove(name="bar")
... except RuntimeError as e:
...     print(e)
...
invalid parameter: can not have the same label entry multiple time: [42] is already present
- rename(old: str, new: str) Labels [source]¶
Rename the old dimension to new in the Labels.
- Parameters:
- Raises:
ValueError – if old is not present.
- Return type:
>>> import torch
>>> from metatensor.torch import Labels
>>> label = Labels("foo", torch.tensor([[42]]))
>>> print(label)
Labels(
    foo
    42
)
>>> print(label.rename("foo", "bar"))
Labels(
    bar
    42
)
- position(entry: LabelsEntry | Tensor | List[int] | Tuple[int, ...]) int | None [source]¶
Get the position of the given entry in this set of Labels, or None if the entry is not present in the labels.
- union(other: Labels) Labels [source]¶
Take the union of these Labels with other.
If you want to know where entries in self and other end up in the union, you can use Labels.union_and_mapping().
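For example, a short sketch; to keep the output independent of the exact ordering of entries in the union, only its size and membership are checked.
>>> import torch
>>> from metatensor.torch import Labels
>>> first = Labels(["a"], torch.tensor([[0], [1]]))
>>> second = Labels(["a"], torch.tensor([[1], [2]]))
>>> union = first.union(second)
>>> union.names
['a']
>>> len(union)
3
>>> [2] in union
True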
- union_and_mapping(other: Labels) Tuple[Labels, Tensor, Tensor] [source]¶
Take the union of these Labels with other.
This function also returns the position in the union where each entry of the input Labels ended up.
- Returns:
Tuple containing the union, a torch.Tensor containing the position in the union of the entries from self, and a torch.Tensor containing the position in the union of the entries from other.
- Parameters:
other (Labels)
- Return type:
- intersection(other: Labels) Labels [source]¶
Take the intersection of these Labels with other.
If you want to know where entries in self and other end up in the intersection, you can use Labels.intersection_and_mapping().
- intersection_and_mapping(other: Labels) Tuple[Labels, Tensor, Tensor] [source]¶
Take the intersection of these Labels with other.
This function also returns the position in the intersection where each entry of the input Labels ended up.
- Returns:
Tuple containing the intersection, a torch.Tensor containing the position in the intersection of the entries from self, and a torch.Tensor containing the position in the intersection of the entries from other. If entries in self or other are not used in the output, the mapping for them is set to -1.
- Parameters:
other (Labels)
- Return type:
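For example, a hedged sketch: entries of self and other that do not appear in the intersection are mapped to -1. The exact mapped positions shown here assume the intersection keeps entries in the order they appear in self; treat that ordering as an illustration rather than a guarantee.
>>> import torch
>>> from metatensor.torch import Labels
>>> first = Labels(["a"], torch.tensor([[0], [1], [2]]))
>>> second = Labels(["a"], torch.tensor([[1], [2], [3]]))
>>> intersection, mapping_first, mapping_second = first.intersection_and_mapping(second)
>>> len(intersection)
2
>>> mapping_first.tolist()
[-1, 0, 1]
>>> mapping_second.tolist()
[0, 1, -1]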
- select(selection: Labels) Tensor [source]¶
Select entries in these Labels that match the selection.
The selection’s names must be a subset of the names of these labels.
All entries in these Labels that match one of the entries in the selection for all of the selection’s dimensions will be picked. Any entry in the selection but not in these Labels will be ignored.
>>> import torch
>>> from metatensor.torch import Labels
>>> labels = Labels(
...     names=["a", "b"],
...     values=torch.tensor([[0, 1], [1, 2], [0, 3], [1, 1], [2, 4]]),
... )
>>> selection = Labels(names=["a"], values=torch.tensor([[0], [2], [5]]))
>>> print(labels.select(selection))
tensor([0, 2, 4])
- entry(index: int) LabelsEntry [source]¶
get a single entry in these labels, see also Labels.__getitem__()
- Parameters:
index (int)
- Return type:
- column(dimension: str) Tensor [source]¶
Get the values associated with a single dimension in these labels (i.e. a single column of Labels.values) as a 1-dimensional array.
See also
Labels.__getitem__() as the main way to use this function, and Labels.view() to access multiple columns simultaneously.
- view(dimensions: str | List[str] | Tuple[str, ...]) Labels [source]¶
get a view for the specified columns in these labels, see also Labels.__getitem__()
- is_view() bool [source]¶
are these labels a view inside another set of labels?
A view is created with Labels.__getitem__() or Labels.view(), and does not implement Labels.position() or Labels.__contains__().
- Return type:
- class metatensor.torch.LabelsEntry[source]¶
A single entry (i.e. row) in a set of Labels.
The main way to create a LabelsEntry is to index a Labels or iterate over them.
>>> import torch
>>> from metatensor.torch import Labels
>>> labels = Labels(
...     names=["system", "atom", "type"],
...     values=torch.tensor([(0, 1, 8), (0, 2, 1), (0, 5, 1)]),
... )
>>> entry = labels[0]  # or labels.entry(0)
>>> entry.names
['system', 'atom', 'type']
>>> entry.values
tensor([0, 1, 8], dtype=torch.int32)
Warning
Due to limitations in TorchScript, the LabelsEntry implementation of __hash__ will use the default Python one, returning the id() of the object. If you want to use LabelsEntry as keys in a dictionary, convert them to a tuple first (tuple(entry)), or to a string (str(entry)), since TorchScript does not support tuples as dictionary keys anyway.
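As a small, hedged sketch of this workaround in plain Python (reusing the labels defined above), a dictionary can be keyed on the string form of an entry:
>>> entry = labels[0]
>>> lookup = {str(entry): "oxygen"}
>>> lookup[str(labels[0])]
'oxygen'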
- property values: Tensor¶
Values associated with each dimension of this LabelsEntry, stored as 32-bit integers.
Warning
The values should be treated as immutable/read-only (we would like to enforce this automatically, but PyTorch can not mark a torch.Tensor as immutable). Any modification to this tensor can break the underlying data structure, or make it out of sync with the Labels containing this entry.
- print() str [source]¶
print this entry as a named tuple (i.e. (key_1=value_1, key_2=value_2))
- Return type:
- __getitem__(dimension: str | int) int [source]¶
get the value associated with the dimension in this entry
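For example, a brief sketch reusing the labels defined above; indexing by dimension name and by integer position returns the same value.
>>> entry = labels[0]
>>> entry["atom"]
1
>>> entry[1]
1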
- __eq__(other: LabelsEntry) bool [source]¶
check if self and other are equal (same dimensions/names and same values)
- Parameters:
other (LabelsEntry)
- Return type:
- __ne__(other: LabelsEntry) bool [source]¶
check if self and other are not equal (different dimensions/names or different values)
- Parameters:
other (LabelsEntry)
- Return type: