Information about models
These classes store and exchange information about atomistic models:
- ModelMetadata stores metadata about the model: name, authors, references, etc.
- ModelCapabilities stores information about what a model can do. Part of that is the full set of outputs the model can produce, each described by a ModelOutput.
- ModelEvaluationOptions is used by the simulation engine to request specific computations from the model. It is handled by MetatensorAtomisticModel, which transforms it into the arguments given to ModelInterface.forward().
- class metatensor.torch.atomistic.ModelMetadata(name: str = '', description: str = '', authors: List[str] = [], references: Dict[str, List[str]] = {})
Metadata about a specific exported model.
- references: Dict[str, List[str]]
Academic references for this model. The top-level dict can have three keys:
- "implementation": references for the software used in the implementation of the model
- "architecture": the reference that introduced the general architecture used by this model
- "model": references specific to this exact model
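As an illustrative sketch of this layout (the citation strings below are placeholders, not real references):

```python
# Hypothetical `references` dict following the documented three-key
# layout. Every value is a list of citation strings; the strings here
# are placeholders, not actual citations.
references = {
    "implementation": ["placeholder citation for the software used"],
    "architecture": ["placeholder citation for the general architecture"],
    "model": ["placeholder citation for this exact model"],
}

# only the three documented keys are used, each mapping to a list
assert set(references) == {"implementation", "architecture", "model"}
assert all(isinstance(refs, list) for refs in references.values())
```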
- class metatensor.torch.atomistic.ModelOutput(quantity: str = '', unit: str = '', per_atom: bool = False, explicit_gradients: List[str] = [])
Description of one of the quantities a model can compute.
- property quantity: str
Quantity of the output (e.g. energy, dipole, …). If this is an empty string, no unit conversion will be performed.
The list of possible quantities is available here.
- class metatensor.torch.atomistic.ModelCapabilities(outputs: Dict[str, ModelOutput] = {}, atomic_types: List[int] = [], interaction_range: float = inf, length_unit: str = '', supported_devices: List[str] = [])
Description of a model's capabilities, i.e. everything the model can do.
- outputs: Dict[str, ModelOutput]
All possible outputs from this model and the corresponding settings.
During a specific run, a model might be asked to compute only a subset of these outputs.
- interaction_range: float
How far a given atom needs to know about other atoms, in the length unit of the model.
For a short-range model, this is the same as the largest neighbors list cutoff. For a message-passing model, this is the cutoff of a single environment times the number of message-passing steps. For an explicit long-range model, this should be set to infinity (float("inf"), math.inf, or torch.inf in Python).
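These rules can be sketched with plain Python (the cutoff values below are made up for illustration, not taken from any real model):

```python
import math

# short-range model: the range is the largest neighbors list cutoff
# (cutoff values here are illustrative)
cutoffs = [3.5, 5.0]
short_range = max(cutoffs)

# message-passing model: one environment cutoff times the number of
# message-passing steps
cutoff = 5.0
n_message_passing = 3
mp_range = cutoff * n_message_passing

# explicit long-range model: infinite interaction range
long_range = math.inf
```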
- property length_unit: str
Unit used by the model for its inputs.
This applies to the interaction_range, any cutoff in neighbors lists, the atoms' positions, and the system cell.
The list of possible units is available here.
- engine_interaction_range(engine_length_unit: str) → float
Same as interaction_range, but expressed in the length unit used by the engine.
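A minimal sketch of what such a conversion involves, assuming a hand-written conversion table (the function and table below are hypothetical; the real method relies on metatensor's own unit handling):

```python
# Hypothetical re-implementation of the unit conversion performed by
# engine_interaction_range(). Factors give Angstrom per unit; the Bohr
# value is the standard CODATA conversion.
_LENGTH_TO_ANGSTROM = {
    "angstrom": 1.0,
    "nm": 10.0,
    "bohr": 0.529177210903,
}

def engine_interaction_range(interaction_range, model_unit, engine_unit):
    """Convert interaction_range from the model's unit to the engine's."""
    factor = _LENGTH_TO_ANGSTROM[model_unit] / _LENGTH_TO_ANGSTROM[engine_unit]
    return interaction_range * factor

# a 5 Angstrom interaction range expressed in nanometers is 0.5 nm
assert abs(engine_interaction_range(5.0, "angstrom", "nm") - 0.5) < 1e-12
```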
- supported_devices: List[str]
The devices this model can run on. This should only contain the device_type part of the device, not the device number (i.e. this should be "cuda", not "cuda:0").
Devices should be listed in order of preference: the first entry in this list should be the best device for this model, and so on.
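A sketch of how an engine could use this preference ordering (pick_device is a hypothetical helper, not part of the metatensor API):

```python
def pick_device(supported_devices, available_devices):
    """Return the first model-preferred device the engine has available.

    `supported_devices` is ordered by the model's preference, so the
    first match is the best device both sides can use.
    """
    for device in supported_devices:
        if device in available_devices:
            return device
    raise ValueError("no device compatible with both model and engine")

# the model prefers CUDA, but only the CPU is available here
assert pick_device(["cuda", "cpu"], ["cpu"]) == "cpu"
# with CUDA available, the model's first preference wins
assert pick_device(["cuda", "cpu"], ["cpu", "cuda"]) == "cuda"
```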
- class metatensor.torch.atomistic.ModelEvaluationOptions(length_unit: str = '', outputs: Dict[str, ModelOutput] = {}, selected_atoms: Labels | None = None)
Options requested by the simulation engine/evaluation code when doing a single model evaluation.
- Parameters:
length_unit (str) –
outputs (Dict[str, ModelOutput]) –
selected_atoms (Labels | None) –
- property length_unit: str
Unit of length used by the engine for the model's inputs.
The list of possible units is available here.
- outputs: Dict[str, ModelOutput]
Requested outputs for this run and the corresponding settings.
- property selected_atoms: Labels | None
Only run the calculation for a selected subset of atoms.
If this is set to None, the calculation runs on all atoms. If this is a set of metatensor.torch.Labels, it will have two dimensions named "system" and "atom", containing the 0-based indices of all the atoms in the selected subset.
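A sketch of the selection layout using plain tuples in place of actual metatensor.torch.Labels (the index values are purely illustrative):

```python
# Each selected atom is identified by a ("system", "atom") pair of
# 0-based indices; real code would store these in metatensor Labels.
dimension_names = ("system", "atom")

selection = [
    (0, 1),  # atom 1 of system 0
    (0, 3),  # atom 3 of system 0
    (1, 0),  # atom 0 of system 1
]

# every entry carries one index per dimension, and indices are 0-based
assert all(len(entry) == len(dimension_names) for entry in selection)
assert all(system >= 0 and atom >= 0 for system, atom in selection)
```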