Changelog
All notable changes to metatensor-learn are documented here, following the Keep a Changelog format. This project follows Semantic Versioning.
Unreleased

Version 0.2.2 - 2024-05-16
Added
- Added torch-style activation function module maps to metatensor.learn.nn: ReLU, InvariantReLU, SiLU, and InvariantSiLU (#597); see the sketch after this list
- Added torch-style neural network module maps to metatensor.learn.nn: LayerNorm, InvariantLayerNorm, EquivariantLinear, Sequential, Tanh, and InvariantTanh (#513)
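These module maps act block-by-block on a TensorMap while preserving its metadata. A minimal sketch of how the new ReLU could be used, assuming the metatensor.torch.learn.nn import path and an in_keys constructor argument (both should be checked against the API reference):

```python
import torch
from metatensor.torch import Labels, TensorBlock, TensorMap
from metatensor.torch.learn.nn import ReLU  # assumed import path

# build a tiny TensorMap with a single block of random values
keys = Labels(["key"], torch.tensor([[0]]))
block = TensorBlock(
    torch.randn(3, 4),
    Labels(["sample"], torch.arange(3).reshape(-1, 1)),
    [],
    Labels(["property"], torch.arange(4).reshape(-1, 1)),
)
tensor = TensorMap(keys, [block])

# ``in_keys`` (assumed parameter name) tells the module map which keys to act on;
# the output is again a TensorMap, with the activation applied to each block
relu = ReLU(in_keys=tensor.keys)
activated = relu(tensor)
```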
Fixed
- Set the correct device for the output when the torch default device is different from the input device (#595)
Version 0.2.1 - 2024-03-01
Changed
- metatensor-learn is no longer re-exported from metatensor and metatensor.torch; all functions are still available inside metatensor.learn and metatensor.torch.learn (see the import sketch below)
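For code that relied on the old re-export, only the import path changes. A minimal sketch of the updated imports, assuming Dataset and DataLoader are exposed at the top level of each learn subpackage:

```python
# the learn classes are no longer reachable through the top-level re-export;
# import them from the dedicated subpackages instead
from metatensor.learn import Dataset, DataLoader            # plain metatensor variant
from metatensor.torch.learn import Dataset as TorchDataset  # TorchScript-compatible variant
```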
Fixed
- Make sure the Dataset class is iterable (#500)
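With the fix in #500, a Dataset can be looped over directly. A minimal sketch, assuming the keyword-argument constructor where each field is a list of equal length:

```python
from metatensor.learn import Dataset

# each keyword becomes a named field; all fields must have the same length
dataset = Dataset(x=[1.0, 2.0, 3.0], y=[4.0, 5.0, 6.0])

# direct iteration over the samples now works (#500)
for sample in dataset:
    print(sample.x, sample.y)
```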
Version 0.2.0 - 2024-02-07
Changed
- Pluralization removed for the special kwarg sample_ids of IndexedDataset (now sample_id), and the provided collate functions group and group_and_join were updated accordingly; see the sketch below
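A minimal sketch of the renamed kwarg, assuming group_and_join is importable from metatensor.learn.data and that DataLoader forwards collate_fn like its PyTorch counterpart:

```python
from metatensor.learn import DataLoader, IndexedDataset
from metatensor.learn.data import group_and_join  # assumed import path

# ``sample_id`` (singular, after this change) attaches a unique identifier to each entry
dataset = IndexedDataset(
    sample_id=["structure-0", "structure-1", "structure-2"],
    energy=[-1.0, -2.0, -3.0],
)

# the provided collate functions were updated to use the renamed kwarg as well
loader = DataLoader(dataset, batch_size=2, collate_fn=group_and_join)
for batch in loader:
    print(batch)
```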
Fixed
- Removed the usage of Labels.range in nn modules to support torch.jit.save (#410)
Version 0.1.0 - 2024-01-26
Added
- ModuleMap and Linear modules, following torch.nn.ModuleDict and torch.nn.Linear in PyTorch but adapted for TensorMaps (#427); see the sketch after this list
- Dataset and DataLoader facilities, following the corresponding classes in PyTorch (#428)
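As an illustration of the ModuleMap idea, here is a minimal sketch that maps one torch.nn module per key of a TensorMap; the positional ModuleMap(in_keys, modules) constructor and the import path are assumptions based on the torch.nn.ModuleDict analogy, so check the API reference:

```python
import torch
from metatensor.torch import Labels, TensorBlock, TensorMap
from metatensor.torch.learn.nn import ModuleMap  # assumed import path

# one torch.nn module per key, mirroring torch.nn.ModuleDict
in_keys = Labels(["key"], torch.tensor([[0], [1]]))
modules = [torch.nn.Linear(4, 2), torch.nn.Linear(4, 2)]
module_map = ModuleMap(in_keys, modules)

# a TensorMap with one block per key, each holding a (samples, properties) matrix
blocks = [
    TensorBlock(
        torch.randn(3, 4),
        Labels(["sample"], torch.arange(3).reshape(-1, 1)),
        [],
        Labels(["property"], torch.arange(4).reshape(-1, 1)),
    )
    for _ in range(2)
]
tensor = TensorMap(in_keys, blocks)

# each block is passed through its matching module; the result is again a TensorMap
output = module_map(tensor)
```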