ones_like

metatensor.ones_like(tensor: TensorMap, gradients: List[str] | str | None = None, requires_grad: bool = False) → TensorMap

Return a new TensorMap with the same metadata as tensor, and all values equal to one.

Parameters:
  • tensor (TensorMap) – Input tensor from which the metadata is taken.

  • gradients (List[str] | str | None) – Which gradients should be present in the output. If this is None (default), all gradients of tensor are present in the new TensorMap. If this is an empty list [], no gradient information is copied.

  • requires_grad (bool) – Whether autograd should record operations for the returned tensor. This option is only relevant for the torch backend; see the sketch below.

Return type:

TensorMap
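The requires_grad option only has an effect when the values are stored in torch tensors. A minimal sketch, assuming the metatensor.torch bindings are installed (they expose the same operation for torch-backed data):

>>> import torch
>>> import metatensor.torch
>>> from metatensor.torch import TensorBlock, TensorMap, Labels
>>> torch_block = TensorBlock(
...     values=torch.rand(4, 3),
...     samples=Labels.range("sample", 4),
...     components=[],
...     properties=Labels.range("property", 3),
... )
>>> torch_tensor = TensorMap(Labels.range("key", 1), [torch_block])
>>> torch_ones = metatensor.torch.ones_like(torch_tensor, requires_grad=True)
>>> torch_ones.block(0).values.requires_grad  # the new values track gradients
True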

>>> import numpy as np
>>> import metatensor
>>> from metatensor import TensorBlock, TensorMap, Labels
>>> np.random.seed(1)

First we create a TensorMap containing a single block with two gradients, named alpha and beta, filled with random data:

>>> block = TensorBlock(
...     values=np.random.rand(4, 3),
...     samples=Labels.range("sample", 4),
...     components=[],
...     properties=Labels.range("property", 3),
... )
>>> block.add_gradient(
...     parameter="alpha",
...     gradient=TensorBlock(
...         values=np.random.rand(2, 3, 3),
...         samples=Labels(["sample", "atom"], np.array([[0, 0], [0, 2]])),
...         components=[Labels.range("component", 3)],
...         properties=block.properties,
...     ),
... )
>>> block.add_gradient(
...     parameter="beta",
...     gradient=TensorBlock(
...         values=np.random.rand(1, 3),
...         samples=Labels(["sample"], np.array([[0]])),
...         components=[],
...         properties=block.properties,
...     ),
... )
>>> keys = Labels(names=["key"], values=np.array([[0]]))
>>> tensor = TensorMap(keys, [block])
>>> print(tensor.block(0))
TensorBlock
    samples (4): ['sample']
    components (): []
    properties (3): ['property']
    gradients: ['alpha', 'beta']

Then we use ones_like to create a TensorMap with the same metadata as tensor, but with all values set to one:

>>> tensor_ones = metatensor.ones_like(tensor)
>>> print(tensor_ones.block(0))
TensorBlock
    samples (4): ['sample']
    components (): []
    properties (3): ['property']
    gradients: ['alpha', 'beta']
>>> print(tensor_ones.block(0).values)
[[1. 1. 1.]
 [1. 1. 1.]
 [1. 1. 1.]
 [1. 1. 1.]]
>>> print(tensor_ones.block(0).gradient("alpha").values)
[[[1. 1. 1.]
  [1. 1. 1.]
  [1. 1. 1.]]

 [[1. 1. 1.]
  [1. 1. 1.]
  [1. 1. 1.]]]

Note that if we copy only the gradient alpha, beta is no longer available:

>>> tensor_ones = metatensor.ones_like(tensor, gradients="alpha")
>>> print(tensor_ones.block(0).gradients_list())
['alpha']
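
Passing an empty list copies no gradient information at all:

>>> tensor_ones = metatensor.ones_like(tensor, gradients=[])
>>> print(tensor_ones.block(0).gradients_list())
[]
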
metatensor.ones_like_block(block: TensorBlock, gradients: List[str] | str | None = None, requires_grad: bool = False) → TensorBlock

Return a new TensorBlock with the same metadata as block, and all values equal to one.

Parameters:
  • block (TensorBlock) – Input block from which the metadata is taken.

  • gradients (List[str] | str | None) – Which gradients should be present in the output. If this is None (default), all gradients of block are present in the new TensorBlock. If this is an empty list [], no gradient information is copied.

  • requires_grad (bool) – Whether autograd should record operations for the returned tensor. This option is only relevant for the torch backend.

Return type:

TensorBlock
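
A short sketch, reusing the tensor built in the example above; ones_like_block behaves like ones_like, but operates on a single TensorBlock instead of a full TensorMap:

>>> block_ones = metatensor.ones_like_block(tensor.block(0))
>>> print(block_ones.values)
[[1. 1. 1.]
 [1. 1. 1.]
 [1. 1. 1.]
 [1. 1. 1.]]
>>> print(block_ones.gradients_list())
['alpha', 'beta']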