-
Describe the bug
Perhaps it is the intended behaviour to copy these values from the foundation model, but then the code in …

To Reproduce
import numpy as np
import torch
from mace.tools.model_script_utils import configure_model
from mace.tools.multihead_tools import dict_to_namespace
from mace.tools.utils import AtomicNumberTable
heads = ["default"]
mean = [0.1]  # per-head mean we expect configure_model to use for the scale/shift
std = [1.2]   # per-head std we expect configure_model to use for the scale/shift
z_table = AtomicNumberTable([1, 2])
atomic_energies = np.random.rand(len(z_table))
# Load the pretrained foundation model (recent PyTorch may need weights_only=False here)
model_foundation = torch.load("mace_agnesi_small.model")
model_config = {
    "model": "ScaleShiftMACE",
    "loss": "weighted",
    "compute_energy": True,
    "compute_forces": True,
    "compute_stress": False,
    "compute_dipole": False,
    "mean": mean,
    "std": std,
    "scaling": "rms_forces_scaling",
    "foundation_filter_elements": True,
}
model, output_args = configure_model(
    dict_to_namespace(model_config), None, atomic_energies, model_foundation, heads, z_table
)
print(model.scale_shift)

Expected behavior
Either the provided mean and std are used, or the documentation makes it explicit that these values are taken from the foundation model.
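For context, a quick way to confirm where the printed values come from is to compare them with the foundation model directly. A minimal sketch, assuming mace_agnesi_small.model is itself a ScaleShiftMACE and therefore exposes the same scale_shift block:

# Hypothetical check (assumes model_foundation is a ScaleShiftMACE):
print("foundation:", model_foundation.scale_shift)
print("configured:", model.scale_shift)
# If the two lines match, the shift/scale were copied from the foundation
# model rather than taken from the mean/std supplied in model_config.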
-
Hey @pimdh, indeed that is the expected behavior, though we should make it more explicit in the docs. I want to add that if you do change the functional/code, the E0s will change in the model, meaning there is flexibility to change the shifts in the model.
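To illustrate that flexibility, here is a minimal sketch of overriding the copied values after configure_model has run. The attribute names scale_shift.shift, scale_shift.scale, and atomic_energies_fn.atomic_energies are assumptions based on the current ScaleShiftMACE implementation, so verify them against your installed MACE version:

import torch

# Minimal sketch (attribute names assumed, see note above): overwrite the
# shift/scale buffers copied from the foundation model with your own statistics.
new_mean, new_std = 0.1, 1.2
with torch.no_grad():
    model.scale_shift.shift.fill_(new_mean)  # energy shift buffer
    model.scale_shift.scale.fill_(new_std)   # energy scale buffer

# The E0s mentioned above live in the atomic-energies block and can be
# inspected (or replaced in the same way) via:
print(model.atomic_energies_fn.atomic_energies)
print(model.scale_shift)  # now reports the overridden values

In practice you would recompute these statistics and E0s from your fine-tuning dataset before overriding them.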