metatensor.lstsq(X: TensorMap, Y: TensorMap, rcond: float | None, driver: str | None = None) → TensorMap

Solve a linear least-squares system defined by two TensorMap.

The least-squares solution \(w_b\) for the linear system \(X_b w_b = Y_b\) is computed for each block \(b\) in X and Y. X and Y must have the same keys. The returned TensorMap w has the same keys as X and Y, and each of its blocks stores the corresponding least-squares solution \(w_b\).

If a block has multiple components, they are moved to the “samples” axis before solving the linear system.
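Concretely, a block whose values have shape (samples, components, properties) can be flattened into a plain 2D problem before solving. The sketch below uses plain numpy (not metatensor internals) with made-up values to illustrate the reshape:

```python
import numpy as np

# Hypothetical block values with one component axis:
# shape = (n_samples, n_components, n_properties)
X = np.arange(12, dtype=np.float64).reshape(2, 3, 2)
Y = np.arange(6, dtype=np.float64).reshape(2, 3, 1)

# Moving the component axis onto the samples axis merges the two
# leading dimensions, leaving an ordinary 2D least-squares problem
X2d = X.reshape(-1, X.shape[-1])  # shape (6, 2)
Y2d = Y.reshape(-1, Y.shape[-1])  # shape (6, 1)

w, *_ = np.linalg.lstsq(X2d, Y2d, rcond=None)
print(w.shape)  # (2, 1): one coefficient per property of X, per property of Y
```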

If gradients are present, they must be present in both X and Y. Gradients are concatenated with the block values along the “samples” axis, \(A_b = [X_b, {\nabla} X_b]\), \(B_b = [Y_b, {\nabla} Y_b]\), and the linear system \(A_b w_b = B_b\) is solved for \(w_b\) using least-squares.
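This concatenation can be sketched with plain numpy (the gradient values below are made up for illustration; metatensor performs this per block internally):

```python
import numpy as np

# Toy block values and (made-up) gradients for illustration
X = np.array([[1.0, 2.0], [3.0, 1.0]])
Y = np.array([[1.0], [0.0]])
grad_X = np.array([[0.5, 0.0], [0.0, 0.5]])
grad_Y = np.array([[0.25], [0.0]])

# Stack values and gradients along the samples axis:
# A_b = [X_b, ∇X_b], B_b = [Y_b, ∇Y_b]
A = np.concatenate([X, grad_X], axis=0)  # shape (4, 2)
B = np.concatenate([Y, grad_Y], axis=0)  # shape (4, 1)

# Solve the combined (now overdetermined) system in the least-squares sense
w, *_ = np.linalg.lstsq(A, B, rcond=None)
print(w.shape)  # (2, 1)
```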


The solutions \(w_b\) differ from the output of numpy or torch in that they are already transposed. Be aware of this if you want to access the block values of w manually (see also the example below).

Parameters:

  • X (TensorMap) – a TensorMap containing the “coefficient” matrices

  • Y (TensorMap) – a TensorMap containing the “dependent variable” values

  • rcond (float | None) – Cut-off ratio for small singular values of \(X_b\). The singular value \({\sigma}_i\) is treated as zero if it is smaller than \(r_{cond}\,{\sigma}_1\), where \({\sigma}_1\) is the largest singular value of \(X_b\). None chooses the default value of numpy or PyTorch.

  • driver (str | None) – Used only with the torch backend (ignored if numpy is used); see torch.linalg.lstsq for a full description of the accepted values

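The effect of the rcond cut-off can be illustrated with plain numpy, which implements the same truncation (the ill-conditioned matrix below is a made-up example):

```python
import numpy as np

# Nearly rank-deficient system: the second singular value is ~5e-9
X = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-8]])
Y = np.array([[1.0], [1.0]])

# rcond=None keeps both singular values and recovers the exact
# solution of this square system, [1, 0]
w_exact, *_ = np.linalg.lstsq(X, Y, rcond=None)

# rcond=1e-6 treats the tiny singular value as zero, giving the
# minimum-norm solution of the rank-1 approximation, [0.5, 0.5]
w_cut, *_ = np.linalg.lstsq(X, Y, rcond=1e-6)
print(w_exact.ravel(), w_cut.ravel())
```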

Returns:

a TensorMap with the same keys as X and Y, where each TensorBlock has: samples equal to the properties of Y; and properties equal to the properties of X.

Return type:

TensorMap
>>> import numpy as np
>>> from metatensor import Labels, TensorBlock, TensorMap
>>> import metatensor
>>> values_X = np.array(
...     [
...         [1.0, 2.0],
...         [3.0, 1.0],
...     ]
... )
>>> values_Y = np.array(
...     [
...         [1.0, 0.0],
...         [0.0, 1.0],
...     ]
... )
>>> samples = Labels("system", np.array([[0], [1]]))
>>> components = []
>>> properties = Labels("properties", np.array([[0], [1]]))
>>> keys = Labels(names="key", values=np.array([[0]]))
>>> block_X = TensorBlock(values_X, samples, components, properties)
>>> block_Y = TensorBlock(values_Y, samples, components, properties)
>>> X = TensorMap(keys, [block_X])
>>> Y = TensorMap(keys, [block_Y])
>>> w = metatensor.lstsq(X, Y, rcond=1e-10)

We take the transpose here, since the solutions are stored transposed

>>> y = X.block(0).values @ w.block(0).values.T

Set small entries in y to 0; they are numerical noise

>>> mask = np.abs(y) < 1e-15
>>> y[mask] = 0.0
>>> print(y)
[[1. 0.]
 [0. 1.]]