NequIP#
The NequIP model as proposed in E(3)-Equivariant Graph Neural Networks for Data-Efficient and Accurate Interatomic Potentials.
Supervised#
The base nequip_model as proposed in the original paper. It can output graph scalars, a graph vector, node scalars, and a node vector. The model has the following config:

- `num_node_feats: int`: The number of node features. Must be equal to the initial node feature dimension (sum of one-hot encoded features).
- `num_edge_feats: int`: The number of edge features. Must be equal to the initial edge feature dimension (sum of one-hot encoded features).
- `num_layers: int = 4`: The number of layers of the model.
- `max_ell: int = 2`: The maximum SO(3) irreps dimension to use in the model (during interactions).
- `parity: bool = True`: Whether to use parity-odd irreps.
- `num_features: int = 32`: The number of features of the hidden embeddings.
- `mlp_irreps: str = "16x0e"`: The output MLP irreps.
- `num_bessel: int = 8`: The number of Bessel functions to use.
- `bessel_basis_trainable: bool = True`: Whether the Bessel function weights are trainable.
- `num_polynomial_cutoff: int = 6`: The power of the polynomial cut-off envelope.
- `self_connection: bool = True`: Whether to use self-connections.
- `resnet: bool = True`: Whether to use a ResNet.
- `avg_num_neighbours: Optional[float] = None`: The average number of neighbours in the dataset.
- `scaling_mean: float = 0.0`: The scaling mean of the model output. This is usually computed as the mean of `(molecule_energy - molecule_self_energy) / molecule_num_atoms`.
- `scaling_std: float = 1.0`: The scaling std of the model output. This is usually computed as the std of `(molecule_energy - molecule_self_energy)`.
- `compute_forces: bool = False`: Whether to compute forces as the gradient of the `y_graph_scalars` and use those as the `y_node_vector` output.
- `y_node_scalars_loss_config: Optional[Dict] = None`: The loss config for the `y_node_scalars`.
- `y_node_vector_loss_config: Optional[Dict] = None`: The loss config for the `y_node_vector`.
- `y_graph_scalars_loss_config: Optional[Dict] = None`: The loss config for the `y_graph_scalars`.
- `y_graph_vector_loss_config: Optional[Dict] = None`: The loss config for the `y_graph_vector`.
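The `compute_forces` flag corresponds to the standard trick of obtaining forces as the (negative) gradient of a predicted energy with respect to atomic positions. A minimal sketch of the idea, with an illustrative stand-in function rather than this library's actual API:

```python
import torch

def energy_and_forces(model, positions, node_feats):
    """Illustrative sketch: forces as the negative gradient of a
    predicted graph scalar (energy) w.r.t. atomic positions."""
    positions = positions.clone().requires_grad_(True)
    energy = model(positions, node_feats)  # predicted graph scalar
    # create_graph=True keeps the graph so a force loss can also be trained
    (grad,) = torch.autograd.grad(energy.sum(), positions, create_graph=True)
    forces = -grad  # forces are the negative energy gradient
    return energy, forces

# toy usage with a stand-in "model" (a real model would be the NequIP network)
model = lambda pos, feats: (pos ** 2).sum(dim=-1).sum()
e, f = energy_and_forces(model, torch.randn(5, 3), None)
```

The resulting per-atom gradient has the same shape as the positions, which is why it can be exposed as the `y_node_vector` output.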
Supervised mean and variance#
The mean_var_nequip_model predicts a mean and variance for the y_graph_scalars during training and inference. It has the same config as the nequip_model but without the ability to specify a y_graph_scalars_loss_config, which is hardcoded to be torch.nn.GaussianNLLLoss. The model config is as follows:

- `num_node_feats: int`: The number of node features. Must be equal to the initial node feature dimension (sum of one-hot encoded features).
- `num_edge_feats: int`: The number of edge features. Must be equal to the initial edge feature dimension (sum of one-hot encoded features).
- `num_layers: int = 4`: The number of layers of the model.
- `max_ell: int = 2`: The maximum SO(3) irreps dimension to use in the model (during interactions).
- `parity: bool = True`: Whether to use parity-odd irreps.
- `num_features: int = 32`: The number of features of the hidden embeddings.
- `mlp_irreps: str = "16x0e"`: The output MLP irreps.
- `num_bessel: int = 8`: The number of Bessel functions to use.
- `bessel_basis_trainable: bool = True`: Whether the Bessel function weights are trainable.
- `num_polynomial_cutoff: int = 6`: The power of the polynomial cut-off envelope.
- `self_connection: bool = True`: Whether to use self-connections.
- `resnet: bool = True`: Whether to use a ResNet.
- `avg_num_neighbours: Optional[float] = None`: The average number of neighbours in the dataset.
- `scaling_mean: float = 0.0`: The scaling mean of the model output. This is usually computed as the mean of `(molecule_energy - molecule_self_energy) / molecule_num_atoms`.
- `scaling_std: float = 1.0`: The scaling std of the model output. This is usually computed as the std of `(molecule_energy - molecule_self_energy)`.
- `compute_forces: bool = False`: Whether to compute forces as the gradient of the `y_graph_scalars` and use those as the `y_node_vector` output.
- `y_node_scalars_loss_config: Optional[Dict] = None`: The loss config for the `y_node_scalars`.
- `y_node_vector_loss_config: Optional[Dict] = None`: The loss config for the `y_node_vector`.
- `y_graph_vector_loss_config: Optional[Dict] = None`: The loss config for the `y_graph_vector`.
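The hardcoded loss is PyTorch's Gaussian negative log-likelihood, which scores a predicted mean against a target under a predicted (positive) variance. A minimal standalone example of how that loss behaves:

```python
import torch

# default settings: reduction="mean", eps=1e-6 (variance is clamped to at least eps)
loss_fn = torch.nn.GaussianNLLLoss()

mean = torch.tensor([0.5, 1.0])    # predicted mean (e.g. for y_graph_scalars)
target = torch.tensor([0.4, 1.2])  # reference values
var = torch.tensor([0.1, 0.2])     # predicted variance, must be positive

loss = loss_fn(mean, target, var)
# equals the mean over elements of 0.5 * (log(var) + (mean - target)**2 / var)
```

Because the variance enters the loss directly, the model is penalised both for inaccurate means and for over- or under-confident variance estimates.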
Supervised feature scale-shift#
The ssf_nequip_model adds a scale and shift for the hidden embeddings in the NequIP model, as proposed in
Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning. This is
useful for transfer learning. The config is the same as the nequip_model. The model config is as follows:

- `num_node_feats: int`: The number of node features. Must be equal to the initial node feature dimension (sum of one-hot encoded features).
- `num_edge_feats: int`: The number of edge features. Must be equal to the initial edge feature dimension (sum of one-hot encoded features).
- `num_layers: int = 4`: The number of layers of the model.
- `max_ell: int = 2`: The maximum SO(3) irreps dimension to use in the model (during interactions).
- `parity: bool = True`: Whether to use parity-odd irreps.
- `num_features: int = 32`: The number of features of the hidden embeddings.
- `mlp_irreps: str = "16x0e"`: The output MLP irreps.
- `num_bessel: int = 8`: The number of Bessel functions to use.
- `bessel_basis_trainable: bool = True`: Whether the Bessel function weights are trainable.
- `num_polynomial_cutoff: int = 6`: The power of the polynomial cut-off envelope.
- `self_connection: bool = True`: Whether to use self-connections.
- `resnet: bool = True`: Whether to use a ResNet.
- `avg_num_neighbours: Optional[float] = None`: The average number of neighbours in the dataset.
- `scaling_mean: float = 0.0`: The scaling mean of the model output. This is usually computed as the mean of `(molecule_energy - molecule_self_energy) / molecule_num_atoms`.
- `scaling_std: float = 1.0`: The scaling std of the model output. This is usually computed as the std of `(molecule_energy - molecule_self_energy)`.
- `compute_forces: bool = False`: Whether to compute forces as the gradient of the `y_graph_scalars` and use those as the `y_node_vector` output.
- `y_node_scalars_loss_config: Optional[Dict] = None`: The loss config for the `y_node_scalars`.
- `y_node_vector_loss_config: Optional[Dict] = None`: The loss config for the `y_node_vector`.
- `y_graph_scalars_loss_config: Optional[Dict] = None`: The loss config for the `y_graph_scalars`.
- `y_graph_vector_loss_config: Optional[Dict] = None`: The loss config for the `y_graph_vector`.
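Conceptually, SSF inserts a learnable per-feature scale and shift on the hidden embeddings while the pretrained backbone weights stay frozen, so only these few parameters are tuned. A minimal sketch for plain feature vectors (this library's implementation and its handling of equivariant irreps may differ):

```python
import torch

class ScaleShift(torch.nn.Module):
    """Per-feature scale-and-shift (SSF) applied to a hidden embedding."""
    def __init__(self, dim: int):
        super().__init__()
        # initialised to the identity transform: gamma = 1, beta = 0,
        # so pretrained behaviour is preserved at the start of fine-tuning
        self.gamma = torch.nn.Parameter(torch.ones(dim))
        self.beta = torch.nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.gamma * x + self.beta

ssf = ScaleShift(32)
x = torch.randn(10, 32)  # e.g. node embeddings with num_features = 32
```

The parameter count is only 2 * dim per insertion point, which is what makes this attractive for transfer learning on small datasets.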
Supervised adapters#
The adapter_nequip_model adds adapters at the beginning and end of the message-passing backbone of the NequIP model,
as proposed in AdapterGNN. This is useful for transfer learning. The config is the
same as the nequip_model but with the added kwargs ratio_adapter_down and initial_s. The model config is
as follows:

- `num_node_feats: int`: The number of node features. Must be equal to the initial node feature dimension (sum of one-hot encoded features).
- `num_edge_feats: int`: The number of edge features. Must be equal to the initial edge feature dimension (sum of one-hot encoded features).
- `num_layers: int = 4`: The number of layers of the model.
- `max_ell: int = 2`: The maximum SO(3) irreps dimension to use in the model (during interactions).
- `parity: bool = True`: Whether to use parity-odd irreps.
- `num_features: int = 32`: The number of features of the hidden embeddings.
- `mlp_irreps: str = "16x0e"`: The output MLP irreps.
- `num_bessel: int = 8`: The number of Bessel functions to use.
- `bessel_basis_trainable: bool = True`: Whether the Bessel function weights are trainable.
- `num_polynomial_cutoff: int = 6`: The power of the polynomial cut-off envelope.
- `self_connection: bool = True`: Whether to use self-connections.
- `resnet: bool = True`: Whether to use a ResNet.
- `avg_num_neighbours: Optional[float] = None`: The average number of neighbours in the dataset.
- `scaling_mean: float = 0.0`: The scaling mean of the model output. This is usually computed as the mean of `(molecule_energy - molecule_self_energy) / molecule_num_atoms`.
- `scaling_std: float = 1.0`: The scaling std of the model output. This is usually computed as the std of `(molecule_energy - molecule_self_energy)`.
- `compute_forces: bool = False`: Whether to compute forces as the gradient of the `y_graph_scalars` and use those as the `y_node_vector` output.
- `y_node_scalars_loss_config: Optional[Dict] = None`: The loss config for the `y_node_scalars`.
- `y_node_vector_loss_config: Optional[Dict] = None`: The loss config for the `y_node_vector`.
- `y_graph_scalars_loss_config: Optional[Dict] = None`: The loss config for the `y_graph_scalars`.
- `y_graph_vector_loss_config: Optional[Dict] = None`: The loss config for the `y_graph_vector`.
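An adapter in the AdapterGNN sense is a small bottleneck MLP added residually around a frozen block, with its output scaled by a learnable gate. The roles of the two added kwargs can be sketched as follows: ratio_adapter_down sets the bottleneck width and initial_s the initial gate value. This is a hedged sketch of the general pattern, not this library's implementation:

```python
import torch

class BottleneckAdapter(torch.nn.Module):
    """Down-project -> nonlinearity -> up-project, added residually and
    gated by a learnable scale (cf. initial_s / ratio_adapter_down)."""
    def __init__(self, dim: int, ratio_adapter_down: int = 4, initial_s: float = 0.1):
        super().__init__()
        hidden = max(1, dim // ratio_adapter_down)  # bottleneck width
        self.down = torch.nn.Linear(dim, hidden)
        self.up = torch.nn.Linear(hidden, dim)
        self.s = torch.nn.Parameter(torch.tensor(initial_s))
        # zero-init the up-projection so the adapter starts as a no-op
        # and fine-tuning departs smoothly from the pretrained model
        torch.nn.init.zeros_(self.up.weight)
        torch.nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.s * self.up(torch.nn.functional.relu(self.down(x)))

adapter = BottleneckAdapter(32)
x = torch.randn(8, 32)  # e.g. node embeddings with num_features = 32
```

During transfer learning only the adapter parameters (and the gate) are trained, which keeps the number of tuned parameters small relative to the backbone.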