Atomic Simulation Environment (ASE) integration

The code in metatensor.torch.atomistic.ase_calculator defines a class that allows using a MetatensorAtomisticModel which predicts the energy of a system as an ASE calculator, enabling the use of machine learning interatomic potentials to drive simulations inside ASE.

Additionally, it allows using arbitrary models whose prediction targets are not limited to the energy, through the ase_calculator.MetatensorCalculator.run_model() function.

class metatensor.torch.atomistic.ase_calculator.MetatensorCalculator(model: str | bytes | PurePath | MetatensorAtomisticModel, *, additional_outputs: Dict[str, ModelOutput] | None = None, extensions_directory=None, check_consistency=False, device=None)[source]

Bases: Calculator

The MetatensorCalculator class implements ASE’s ase.calculators.calculator.Calculator API using metatensor atomistic models to compute energy, forces and any other supported property.

This class can be initialized with any MetatensorAtomisticModel, and used to run simulations using ASE’s MD facilities.

Neighbor lists are computed using ASE’s neighbor list utilities, unless the faster vesin neighbor list library is installed, in which case it will be used instead.

Parameters:
  • model (str | bytes | PurePath | MetatensorAtomisticModel) – model to use for the calculation. This can be a file path, a Python instance of MetatensorAtomisticModel, or the output of torch.jit.script() on MetatensorAtomisticModel.

  • additional_outputs (Dict[str, ModelOutput]) – Dictionary of additional outputs to be computed by the model. These outputs will always be computed whenever the calculate() function is called (e.g. by ase.Atoms.get_potential_energy(), ase.optimize.optimize.Dynamics.run(), etc.) and stored in the additional_outputs attribute. If you want more control over when and how to compute specific outputs, you should use run_model() instead.

  • extensions_directory – if the model uses extensions, we will try to load them from this directory

  • check_consistency – whether to check the model for consistency when running; defaults to False.

  • device – torch device to use for the calculation. If None, we will try the options in the model’s supported_devices in order.
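
A minimal usage sketch is shown below, assuming an exported model saved as "model.pt" (a placeholder file name) and a standard ASE installation; the calculator attaches to an ase.Atoms object like any other ASE calculator:

  import ase.build
  from metatensor.torch.atomistic.ase_calculator import MetatensorCalculator

  # "model.pt" is a placeholder for an exported metatensor atomistic model
  calculator = MetatensorCalculator("model.pt", device="cpu")

  # silicon crystal as an example system
  atoms = ase.build.bulk("Si", "diamond", a=5.43)
  atoms.calc = calculator

  # standard ASE calls are now routed through the metatensor model
  energy = atoms.get_potential_energy()
  forces = atoms.get_forces()

From there, the same calculator can be used with ASE’s molecular dynamics and geometry optimization drivers, since these only rely on the standard Calculator interface.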

additional_outputs: Dict[str, TensorMap]

Additional outputs computed by calculate() are stored in this dictionary.

The keys match the keys of the additional_outputs parameter to the constructor, and the values are the corresponding raw metatensor.torch.TensorMap produced by the model.
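
For example, a minimal sketch, assuming the model declares an extra per-atom output named "features" in its capabilities (the name here is purely illustrative):

  import ase.build
  from metatensor.torch.atomistic import ModelOutput
  from metatensor.torch.atomistic.ase_calculator import MetatensorCalculator

  # "features" is a hypothetical output name; it must match an output
  # actually declared by the model
  calculator = MetatensorCalculator(
      "model.pt",
      additional_outputs={"features": ModelOutput(per_atom=True)},
  )

  atoms = ase.build.bulk("Si", "diamond", a=5.43)
  atoms.calc = calculator

  # this triggers calculate(), which also evaluates the additional output
  atoms.get_potential_energy()

  # raw TensorMap produced by the model
  features = calculator.additional_outputs["features"]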

metadata() ModelMetadata[source]

Get the metadata of the underlying model

Return type:

ModelMetadata

run_model(atoms: Atoms, outputs: Dict[str, ModelOutput], selected_atoms: Labels | None = None) Dict[str, TensorMap][source]

Run the model on the given atoms, computing only the requested outputs.

The output of the model is returned directly, and as such the blocks’ values will be torch.Tensor.

This is intended as an easy way to run metatensor models on ase.Atoms when the model can compute outputs not supported by ASE’s standard calculator interface.

All the parameters have the same meaning as the corresponding ones in metatensor.torch.atomistic.ModelInterface.forward().

Parameters:
  • atoms (Atoms) – system on which to run the model

  • outputs (Dict[str, ModelOutput]) – outputs of the model that should be predicted

  • selected_atoms (Labels | None) – subset of atoms on which to run the calculation

Return type:

Dict[str, TensorMap]
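
A sketch of calling run_model() directly, assuming the model exposes an "energy" output that can be evaluated per atom (the model file name and output settings are illustrative):

  import ase.build
  from metatensor.torch.atomistic import ModelOutput
  from metatensor.torch.atomistic.ase_calculator import MetatensorCalculator

  calculator = MetatensorCalculator("model.pt")
  atoms = ase.build.bulk("Si", "diamond", a=5.43)

  # request a per-atom energy; only this output is computed
  outputs = calculator.run_model(
      atoms,
      {"energy": ModelOutput(quantity="energy", unit="eV", per_atom=True)},
  )

  # outputs maps output names to metatensor.torch.TensorMap objects
  per_atom_energy = outputs["energy"].block().values

Since the blocks’ values are plain torch.Tensor, the result can be post-processed with regular torch code.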

calculate(atoms: Atoms, properties: List[str], system_changes: List[str]) None[source]

Compute some properties with this calculator, and return them in the format expected by ASE.

This is not intended to be called directly by users, but to be an implementation detail of atoms.get_potential_energy() and related functions. See ase.calculators.calculator.Calculator.calculate() for more information.

Parameters:
  • atoms (Atoms) – system for which to compute the requested properties

  • properties (List[str]) – list of properties that should be computed

  • system_changes (List[str]) – list of what has changed since the last calculation

Return type:

None