detach

metatensor.detach(tensor: TensorMap) → TensorMap

Detach all the arrays in this tensor from any computational graph.

This is useful, for example, when handling torch arrays, to be able to save them with metatensor.save() or metatensor.torch.save().

This function is related to, but different from, metatensor.remove_gradients(). metatensor.remove_gradients() can be used to remove the explicit forward-mode gradients stored inside the blocks, while this function detaches the values (as well as any potential gradients) from the computational graph PyTorch uses for backward differentiation.

Parameters:

tensor (TensorMap) – tensor whose arrays should be detached

Return type:

TensorMap
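
The sketch below is a minimal, illustrative usage of metatensor.detach(): it builds a small TensorMap around a torch array with requires_grad=True, detaches it, and saves it. The label names, array shapes, and file name are arbitrary choices for the example, not part of the API.

>>> import torch
>>> import metatensor
>>> from metatensor import Labels, TensorBlock, TensorMap
>>>
>>> # values produced by some torch computation, still attached to autograd
>>> values = torch.rand(3, 4, requires_grad=True)
>>> block = TensorBlock(
...     values=values,
...     samples=Labels.range("sample", 3),
...     components=[],
...     properties=Labels.range("property", 4),
... )
>>> tensor = TensorMap(keys=Labels.single(), blocks=[block])
>>>
>>> # saving would fail while the values still require gradients,
>>> # so detach the whole TensorMap first
>>> detached = metatensor.detach(tensor)
>>> metatensor.save("tensor.mts", detached)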

metatensor.detach_block(block: TensorBlock) → TensorBlock

Detach all the values in this block and all of its gradients from any computational graph.

This function is related to, but different from, metatensor.remove_gradients_block(). metatensor.remove_gradients_block() can be used to remove the explicit forward-mode gradients stored inside the block, while this function detaches the values (as well as any potential gradients) from the computational graph PyTorch uses for backward differentiation.

Parameters:

block (TensorBlock) – block whose values should be detached

Return type:

TensorBlock
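
As a quick illustration of metatensor.detach_block(), the sketch below detaches a single TensorBlock built from a torch array that requires gradients. The label names and shapes are again arbitrary examples.

>>> import torch
>>> import metatensor
>>> from metatensor import Labels, TensorBlock
>>>
>>> block = TensorBlock(
...     values=torch.rand(3, 4, requires_grad=True),
...     samples=Labels.range("sample", 3),
...     components=[],
...     properties=Labels.range("property", 4),
... )
>>>
>>> detached_block = metatensor.detach_block(block)
>>> # the detached values are no longer tracked by autograd
>>> detached_block.values.requires_grad
False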