Models¶
-
torch::jit::Module metatensor_torch::load_atomistic_model(std::string path, c10::optional<std::string> extensions_directory = c10::nullopt)¶
Check and then load the metatensor atomistic model at the given path. This function calls check_atomistic_model(path) and load_model_extensions(path, extensions_directory) before attempting to load the model.
-
void metatensor_torch::check_atomistic_model(std::string path)¶
Check the exported metatensor atomistic model at the given path, and warn/error as required. This should be called after load_model_extensions.
-
void metatensor_torch::load_model_extensions(std::string path, c10::optional<std::string> extensions_directory)¶
Load all extensions and extension dependencies for the model at the given path, trying to find extensions and dependencies in the given extensions_directory. Users can set the METATENSOR_DEBUG_EXTENSIONS_LOADING environment variable to get more information when loading fails.
-
double metatensor_torch::unit_conversion_factor(const std::string &quantity, const std::string &from_unit, const std::string &to_unit)¶
Get the multiplicative conversion factor to use to convert from unit from_unit to unit to_unit. Both should be units for the given physical quantity.
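For instance, an engine converting positions from Angstrom to Bohr would multiply every value by the returned factor. A minimal sketch of applying such a factor (the numeric value below is the standard Angstrom to Bohr conversion, hardcoded for illustration rather than queried from unit_conversion_factor):

```cpp
#include <vector>

// Standard Angstrom -> Bohr conversion, standing in for the value
// that unit_conversion_factor("length", "Angstrom", "Bohr") would return.
constexpr double ANGSTROM_TO_BOHR = 1.8897259886;

// Convert a list of lengths in place by a multiplicative factor.
inline void convert_lengths(std::vector<double>& values, double factor) {
    for (auto& v : values) {
        v *= factor;
    }
}
```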
-
using metatensor_torch::ModelOutput = torch::intrusive_ptr<ModelOutputHolder>¶
TorchScript will always manipulate ModelOutputHolder through a torch::intrusive_ptr
-
class ModelOutputHolder : public CustomClassHolder¶
Description of one of the quantities a model can compute.
Public Functions
-
inline ModelOutputHolder(std::string quantity, std::string unit, bool per_atom_, std::vector<std::string> explicit_gradients_)¶
Initialize ModelOutput with the given data.
-
inline const std::string &quantity() const¶
quantity of the output (e.g. energy, dipole, …). If this is an empty string, no unit conversion will be performed.
-
void set_quantity(std::string quantity)¶
set the quantity of the output
-
inline const std::string &unit() const¶
unit of the output. If this is an empty string, no unit conversion will be performed.
-
void set_unit(std::string unit)¶
set the unit of the output
-
std::string to_json() const¶
Serialize a ModelOutput to a JSON string.
Public Members
-
bool per_atom = false¶
is the output defined per-atom or for the overall structure
-
std::vector<std::string> explicit_gradients¶
Which gradients should be computed eagerly and stored inside the output TensorMap.
Public Static Functions
-
static ModelOutput from_json(std::string_view json)¶
Load a serialized ModelOutput from a JSON string.
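As an illustration only, a serialized ModelOutput would contain fields mirroring the members documented above; the exact field names and schema shown here are an assumption based on those accessors, not a guaranteed format:

```json
{
  "quantity": "energy",
  "unit": "eV",
  "per_atom": false,
  "explicit_gradients": []
}
```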
-
using metatensor_torch::ModelCapabilities = torch::intrusive_ptr<ModelCapabilitiesHolder>¶
TorchScript will always manipulate ModelCapabilitiesHolder through a torch::intrusive_ptr
-
class ModelCapabilitiesHolder : public CustomClassHolder¶
Description of a model’s capabilities, i.e. everything a model can do.
Public Functions
-
inline ModelCapabilitiesHolder(torch::Dict<std::string, ModelOutput> outputs, std::vector<int64_t> atomic_types_, double interaction_range_, std::string length_unit, std::vector<std::string> supported_devices_, std::string dtype)¶
Initialize ModelCapabilities with the given data.
-
inline torch::Dict<std::string, ModelOutput> outputs() const¶
all possible outputs from this model and corresponding settings
-
void set_outputs(torch::Dict<std::string, ModelOutput> outputs)¶
set the outputs for this model
-
inline const std::string &length_unit() const¶
unit of lengths the model expects as input
-
void set_length_unit(std::string unit)¶
set the unit of length for this model
-
double engine_interaction_range(const std::string &engine_length_unit) const¶
Get the interaction_range in the length unit of the engine.
-
inline const std::string &dtype() const¶
Get the dtype of this model. This can be “float32” or “float64”, and must be used by the engine as the dtype of all inputs and outputs for this model.
-
void set_dtype(std::string dtype)¶
Set the dtype of this model.
-
std::string to_json() const¶
Serialize a ModelCapabilities to a JSON string.
Public Members
-
std::vector<int64_t> atomic_types¶
which types the model can handle
-
double interaction_range = -1.0¶
How far a given atom needs to know about other atoms, in the length unit of the model.
This is used to properly implement domain decomposition with this model.
For a short range model, this is the same as the largest neighbor list cutoff. For a message passing model, this is the cutoff of one environment times the number of message passing steps. For an explicit long range model, this should be set to infinity.
This will default to -1 if not explicitly set by the user.
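The rules above can be sketched numerically; the cutoff and step count below are arbitrary example values, not defaults from the library:

```cpp
#include <limits>

// Message passing model: the interaction range is one environment's
// cutoff multiplied by the number of message passing steps.
inline double message_passing_range(double cutoff, int n_steps) {
    return cutoff * n_steps;
}

// Explicit long range model: the interaction range is infinite.
inline double long_range() {
    return std::numeric_limits<double>::infinity();
}
```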
-
std::vector<std::string> supported_devices¶
What devices can this model run on? This should only contain the device_type part of the device, and not the device number (i.e. this should be "cuda", not "cuda:0"). Devices should be ordered by preference: the first one should be the best device for this model, and so on.
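A sketch of how an engine might honor this preference order, picking the first model-supported device that the engine also has available (the helper name and fallback to "cpu" are illustrative assumptions, not part of the metatensor API):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Pick the first device the model prefers that the engine can provide,
// falling back to "cpu" when there is no overlap.
inline std::string pick_device(
    const std::vector<std::string>& supported_devices,  // model preference order
    const std::vector<std::string>& engine_devices      // devices the engine has
) {
    for (const auto& device : supported_devices) {
        if (std::find(engine_devices.begin(), engine_devices.end(), device)
                != engine_devices.end()) {
            return device;
        }
    }
    return "cpu";
}
```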
Public Static Functions
-
static ModelCapabilities from_json(std::string_view json)¶
Load a serialized ModelCapabilities from a JSON string.
-
using metatensor_torch::ModelEvaluationOptions = torch::intrusive_ptr<ModelEvaluationOptionsHolder>¶
TorchScript will always manipulate ModelEvaluationOptionsHolder through a torch::intrusive_ptr
-
class ModelEvaluationOptionsHolder : public CustomClassHolder¶
Options requested by the simulation engine when running with a model.
Public Functions
-
ModelEvaluationOptionsHolder(std::string length_unit, torch::Dict<std::string, ModelOutput> outputs, torch::optional<TorchLabels> selected_atoms)¶
Initialize ModelEvaluationOptions with the given data.
-
inline const std::string &length_unit() const¶
unit of lengths the engine uses in the data it calls the model with
-
void set_length_unit(std::string unit)¶
set the unit of length used by the engine
-
inline torch::optional<TorchLabels> get_selected_atoms() const¶
Only run the calculation for a selected subset of atoms. If this is set to None, run the calculation on all atoms. If this is a set of Labels, it will have two dimensions named "system" and "atom", containing the 0-based indices of all the atoms in the selected subset.
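The ("system", "atom") layout can be illustrated with plain index pairs; real code would build a metatensor Labels object, which is not shown here:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Each entry of the "selected_atoms" Labels is a (system, atom) pair of
// 0-based indices. For example, selecting atoms 0 and 2 of the first system:
inline std::vector<std::pair<int64_t, int64_t>> example_selection() {
    return {
        {0, 0},  // system 0, atom 0
        {0, 2},  // system 0, atom 2
    };
}
```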
-
void set_selected_atoms(torch::optional<TorchLabels> selected_atoms)¶
Setter for selected_atoms.
-
std::string to_json() const¶
Serialize a ModelEvaluationOptions to a JSON string.
Public Members
-
torch::Dict<std::string, ModelOutput> outputs¶
requested outputs for this run and corresponding settings
Public Static Functions
-
static ModelEvaluationOptions from_json(std::string_view json)¶
Load a serialized ModelEvaluationOptions from a JSON string.
-
using metatensor_torch::ModelMetadata = torch::intrusive_ptr<ModelMetadataHolder>¶
TorchScript will always manipulate ModelMetadataHolder through a torch::intrusive_ptr
-
class ModelMetadataHolder : public CustomClassHolder¶
Metadata about a specific exported model.
Public Functions
-
inline ModelMetadataHolder(std::string name_, std::string description_, std::vector<std::string> authors_, torch::Dict<std::string, std::vector<std::string>> references_, torch::Dict<std::string, std::string> extra_)¶
Initialize ModelMetadata with the given information.
-
std::string print() const¶
Implementation of Python's __repr__ and __str__, printing all metadata about this model.
-
std::string to_json() const¶
Serialize ModelMetadata to a JSON string.
Public Members
-
std::string name¶
Name of this model.
-
std::string description¶
Description of this model.
-
std::vector<std::string> authors¶
List of authors for this model.
-
torch::Dict<std::string, std::vector<std::string>> references¶
References for this model. The top level dict can have three keys:
"implementation": references to the software and libraries used in the implementation of the model (e.g. for PyTorch, https://dl.acm.org/doi/10.5555/3454287.3455008)
"architecture": references that introduced the general architecture used by this model
"model": references specific to this exact model
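As an illustration of this layout, the serialized references dict for a model whose implementation uses PyTorch might look like the following; only the PyTorch entry comes from the description above, and the other keys are left empty rather than filled with invented citations:

```json
{
  "implementation": ["https://dl.acm.org/doi/10.5555/3454287.3455008"],
  "architecture": [],
  "model": []
}
```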
-
torch::Dict<std::string, std::string> extra¶
Extra metadata about this model. This can be anything, and it is intended to be used by models to store data they need.
Public Static Functions
-
static ModelMetadata from_json(std::string_view json)¶
Load a serialized ModelMetadata from a JSON string.