lstsq
- metatensor.lstsq(X: TensorMap, Y: TensorMap, rcond: float | None, driver: str | None = None) → TensorMap
Solve a linear system using two TensorMap.

The least-squares solution w_b for the linear system X_b w_b = Y_b is solved for all blocks b in X and Y.

X and Y must have the same keys. The returned TensorMap w has the same keys as X and Y, and stores in each block the least-squares solution w_b.

If a block has multiple components, they are moved to the "samples" axis before solving the linear system.
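Conceptually, for a block whose values array has shape (samples, components, properties), this means the component axis is folded into the samples axis before the solve. A minimal plain-numpy sketch of that reshaping (illustrative shapes only, not metatensor code):

>>> import numpy as np
>>> values = np.arange(2 * 3 * 4, dtype=np.float64).reshape(2, 3, 4)
>>> # fold the component axis of length 3 into the samples axis
>>> flattened = values.reshape(2 * 3, 4)
>>> flattened.shape
(6, 4)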
If gradients are present, they must be present in both X and Y. Gradients are concatenated with the block values along the "samples" axis, A = [X, ∇X], B = [Y, ∇Y], and the linear system A w = B is solved for w using least-squares.
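A plain-numpy sketch of this concatenation for a single block (the gradient arrays below are made up for illustration; metatensor performs this per block internally):

>>> import numpy as np
>>> X_values = np.array([[1.0, 2.0], [3.0, 1.0]])
>>> X_gradients = np.array([[0.5, 0.0], [0.0, 0.5]])
>>> Y_values = np.array([[1.0, 0.0], [0.0, 1.0]])
>>> Y_gradients = np.array([[0.1, 0.0], [0.0, 0.1]])
>>> A = np.vstack([X_values, X_gradients])  # [X, grad X] along the samples axis
>>> B = np.vstack([Y_values, Y_gradients])  # [Y, grad Y] along the samples axis
>>> w = np.linalg.lstsq(A, B, rcond=None)[0]  # least-squares solution of A w = B
>>> w.shape
(2, 2)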
Note

The solutions w_b differ from the output of numpy or torch in that they are already transposed. Be aware of that if you want to manually access the values of blocks of w (see also the example below).

- Parameters:
  - X (TensorMap) – a TensorMap containing the "coefficient" matrices
  - Y (TensorMap) – a TensorMap containing the "dependent variable" values
  - rcond (float | None) – Cut-off ratio for small singular values of X_b. The singular value σ_i is treated as zero if smaller than rcond · σ_1, where σ_1 is the biggest singular value of X_b. None chooses the default value for numpy or PyTorch.
  - driver (str | None) – Used only in torch (ignored if numpy is used); see https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html for a full description
- Returns:
  a TensorMap with the same keys as Y and X, and where each TensorBlock has: the samples equal to the properties of Y; and the properties equal to the properties of X.
- Return type:
  TensorMap
>>> import numpy as np
>>> from metatensor import Labels, TensorBlock, TensorMap
>>> import metatensor
>>> values_X = np.array(
...     [
...         [1.0, 2.0],
...         [3.0, 1.0],
...     ]
... )
>>> values_Y = np.array(
...     [
...         [1.0, 0.0],
...         [0.0, 1.0],
...     ]
... )
>>> samples = Labels("structure", np.array([[0], [1]]))
>>> components = []
>>> properties = Labels("properties", np.array([[0], [1]]))
>>> keys = Labels(names="key", values=np.array([[0]]))
>>> block_X = TensorBlock(values_X, samples, components, properties)
>>> block_Y = TensorBlock(values_Y, samples, components, properties)
>>> X = TensorMap(keys, [block_X])
>>> Y = TensorMap(keys, [block_Y])
>>> w = metatensor.lstsq(X, Y, rcond=1e-10)
We take the transpose here
>>> y = X.block(0).values @ w.block(0).values.T
Set small entries in y to 0, since they are numerical noise
>>> mask = np.abs(y) < 1e-15
>>> y[mask] = 0.0
>>> print(y)
[[1. 0.]
 [0. 1.]]
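As stated in the note above, the block values of w are the transpose of the solution that numpy.linalg.lstsq would return. A small check of that relationship, reusing the arrays from the example (plain numpy, not part of the metatensor API):

>>> w_numpy = np.linalg.lstsq(values_X, values_Y, rcond=1e-10)[0]
>>> np.allclose(w.block(0).values, w_numpy.T)
True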