Information about models

These are the classes used to store and exchange information about atomistic models.


class metatensor.torch.atomistic.ModelMetadata(name: str = '', description: str = '', authors: List[str] = [], references: Dict[str, List[str]] = {}, extra: Dict[str, str] = {})[source]

Metadata about a specific exported model

This class implements the __str__ and __repr__ methods, so its representation can be easily printed, logged, inserted into other strings, etc.

Parameters:
name: str

Name of this model

description: str

Description of this model

authors: List[str]

List of authors for this model

references: Dict[str, List[str]]

Academic references for this model. The top level dict can have three keys:

  • "implementation": for references to the software used in the implementation of the model

  • "architecture": for references that introduced the general architecture used by this model

  • "model": for references specific to this exact model

extra: Dict[str, str]

Any additional metadata that is not contained in the other fields. There are no constraints on the keys or values of this dictionary. The extra metadata is intended to be used by models to store data they need.

class metatensor.torch.atomistic.ModelOutput(quantity: str = '', unit: str = '', per_atom: bool = False, explicit_gradients: List[str] = [])[source]

Description of one of the quantities a model can compute.

Parameters:
property quantity: str

Quantity of the output (e.g. energy, dipole, …). If this is an empty string, no unit conversion will be performed.

The list of possible quantities is available here.

property unit: str

Unit of the output. If this is an empty string, no unit conversion will be performed.

The list of possible units is available here.

per_atom: bool

Whether the output is defined per atom or for the overall structure

explicit_gradients: List[str]

Which gradients should be computed eagerly and stored inside the output TensorMap.

class metatensor.torch.atomistic.ModelCapabilities(outputs: Dict[str, ModelOutput] = {}, atomic_types: List[int] = [], interaction_range: float = -1, length_unit: str = '', supported_devices: List[str] = [], dtype: str = '')[source]

Description of a model's capabilities, i.e. everything the model can do.

Parameters:
property outputs: Dict[str, ModelOutput]

All possible outputs from this model and corresponding settings.

During a specific run, a model might be asked to only compute a subset of these outputs. Some outputs are standardized, and have additional constraints on what the associated metadata should look like, documented in the Standard model outputs section.

If you want to define a new output for your own usage, its name should look like "<domain>::<output>", where <domain> indicates who defines this new output and <output> describes the output itself. For example, "my-package::foobar" for a foobar output defined in my-package.
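The naming convention for custom outputs can be checked with a simple pattern. This is an illustrative sketch, not a validator provided by the library.

```python
import re

# hypothetical helper: check that a custom output name follows the
# "<domain>::<output>" convention, with lowercase names on both sides
_CUSTOM_NAME = re.compile(r"^[a-z0-9_-]+::[a-z0-9_-]+$")

def is_valid_custom_output(name: str) -> bool:
    return _CUSTOM_NAME.match(name) is not None

print(is_valid_custom_output("my-package::foobar"))  # True
print(is_valid_custom_output("foobar"))              # False
```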

atomic_types: List[int]

Which atomic types the model can handle

interaction_range: float

How far a given atom needs to know about other atoms, in the length unit of the model.

For a short range model, this is the same as the largest neighbors list cutoff. For a message passing model, this is the cutoff of one environment times the number of message passing steps. For an explicit long range model, this should be set to infinity (float("inf")/math.inf/torch.inf in Python).
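As a sketch of the message-passing case described above, with made-up values for the cutoff and the number of steps:

```python
# hypothetical values for a message-passing model
cutoff = 5.0    # cutoff of a single atomic environment, in the model's length unit
n_mp_steps = 3  # number of message passing steps

# each message passing step propagates information one cutoff further,
# so the effective interaction range is the product of the two
interaction_range = cutoff * n_mp_steps
print(interaction_range)  # 15.0
```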

property length_unit: str

Unit used by the model for its inputs.

This applies to the interaction_range, any cutoff in neighbors lists, the atoms' positions and the system cell.

The list of possible units is available here.

property dtype: str

The dtype of this model

This can be "float32" or "float64", and must be used by the engine as the dtype of all inputs and outputs for this model.

engine_interaction_range(engine_length_unit: str) float[source]

Same as interaction_range, but in the unit of length used by the engine.

Parameters:

engine_length_unit (str)

Return type:

float
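The conversion this method performs amounts to a length-unit rescaling of interaction_range. A minimal sketch of that kind of conversion, assuming a model using Angstrom and an engine using nanometers; the conversion table and helper below are illustrative, not the library's own implementation.

```python
# illustrative conversion factors to Angstrom; the library maintains
# its own table of known length units
TO_ANGSTROM = {"angstrom": 1.0, "nm": 10.0, "bohr": 0.529177210903}

def convert_length(value: float, from_unit: str, to_unit: str) -> float:
    # convert through Angstrom as a common intermediate unit
    return value * TO_ANGSTROM[from_unit] / TO_ANGSTROM[to_unit]

# a 6 Angstrom interaction range, expressed in the engine's unit (nm)
print(convert_length(6.0, "angstrom", "nm"))
```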

supported_devices: List[str]

What devices can this model run on? This should only contain the device_type part of the device, and not the device number (i.e. this should be "cuda", not "cuda:0").

Devices should be ordered in order of preference: the first entry in this list should be the best device for this model, and so on.
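An engine could use this preference ordering as follows; a sketch with a hypothetical helper and a made-up set of devices available on the engine's side.

```python
def pick_device(supported_devices, engine_devices):
    # supported_devices is ordered by the model's preference, so the first
    # supported device that the engine can provide is the best choice
    for device in supported_devices:
        if device in engine_devices:
            return device
    raise ValueError("no supported device available")

# hypothetical: the model prefers CUDA, but the engine only has a CPU
print(pick_device(["cuda", "cpu"], {"cpu"}))  # cpu
```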

class metatensor.torch.atomistic.ModelEvaluationOptions(length_unit: str = '', outputs: Dict[str, ModelOutput] = {}, selected_atoms: Labels | None = None)[source]

Options requested by the simulation engine/evaluation code when doing a single model evaluation.

Parameters:
property length_unit: str

Unit of lengths the engine uses for the model input.

The list of possible units is available here.

outputs: Dict[str, ModelOutput]

Requested outputs for this run and the corresponding settings

property selected_atoms: Labels | None

Only run the calculation for a selected subset of atoms.

If this is set to None, run the calculation on all atoms. If this is a set of metatensor.torch.Labels, it will have two dimensions named "system" and "atom", containing the 0-based indices of all the atoms in the selected subset.