ANI#

The ANI model as proposed in "ANI-1: an extensible neural network potential with DFT accuracy at force field computational cost".

Supervised#

The base ani_model is the model proposed in the original paper. It can output graph scalars and, optionally, a node vector computed as the gradient of the graph scalars with respect to the node coordinates (e.g. forces). The model has the following config (a full example is sketched after the list):

  • which_ani: Literal["ani1", "ani2", "ani_spice"] = "ani1"

    The ANI model to use.

    • "ani1" is for H, C, N, O

    • "ani2" is for H, C, N, O, S, F, Cl

    • "ani_spice" is for H, C, N, O, F, P, S, Cl, Br, I

  • scaling_mean: float = 0.0

    The scaling mean of the model output. This is usually computed as the mean of (molecule_energy - molecule_self_energy).

  • scaling_std: float = 1.0

    The scaling std of the model output. This is usually computed as the std of (molecule_energy - molecule_self_energy).

  • compute_forces: bool = False

    Whether to compute forces as the gradient of the y_graph_scalars with respect to the node coordinates and use them as the y_node_vector output.

  • y_graph_scalars_loss_config: Optional[Dict] = None

    The loss config for the y_graph_scalars.

  • y_node_vector_loss_config: Optional[Dict] = None

    The loss config for the y_node_vector.
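
As a minimal sketch, the scaling statistics and a full ani_model config might be assembled as follows. The example energies, variable names, and the loss-config format ({"name": ...}) are illustrative assumptions rather than a guaranteed schema; only the top-level keys are those documented above.

```python
import numpy as np

# Hypothetical per-molecule total energies and self energies (sums of atomic
# self energies); in practice these come from your training data.
molecule_energy = np.array([-76.34, -40.52, -56.61])
molecule_self_energy = np.array([-75.90, -40.10, -56.20])

# scaling_mean / scaling_std as described above: mean and std of
# (molecule_energy - molecule_self_energy).
residual = molecule_energy - molecule_self_energy
scaling_mean = float(residual.mean())
scaling_std = float(residual.std())

# Sketch of an ani_model config using the documented keys.
ani_model_config = {
    "which_ani": "ani2",          # H, C, N, O, S, F, Cl
    "scaling_mean": scaling_mean,
    "scaling_std": scaling_std,
    "compute_forces": True,       # forces returned as y_node_vector
    "y_graph_scalars_loss_config": {"name": "MSELoss"},  # assumed loss-config format
    "y_node_vector_loss_config": {"name": "MSELoss"},    # assumed loss-config format
}
```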

Supervised mean and variance#

The mean_var_ani_model predicts a mean and a variance for the y_graph_scalars during training and inference. It has the same config as the ani_model, except that y_graph_scalars_loss_config cannot be specified: it is hardcoded to torch.nn.GaussianNLLLoss. The model config is as follows (a full example is sketched after the list):

  • which_ani: Literal["ani1", "ani2", "ani_spice"] = "ani1"

    The ANI model to use.

    • "ani1" is for H, C, N, O

    • "ani2" is for H, C, N, O, S, F, Cl

    • "ani_spice" is for H, C, N, O, F, P, S, Cl, Br, I

  • scaling_mean: float = 0.0

    The scaling mean of the model output. This is usually computed as the mean of (molecule_energy - molecule_self_energy).

  • scaling_std: float = 1.0

    The scaling std of the model output. This is usually computed as the std of (molecule_energy - molecule_self_energy).

  • compute_forces: bool = False

    Whether to compute forces as the gradient of the y_graph_scalars with respect to the node coordinates and use them as the y_node_vector output.

  • y_node_vector_loss_config: Optional[Dict] = None

    The loss config for the y_node_vector.
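
A corresponding sketch for the mean_var_ani_model is shown below. The graph-scalar loss is fixed internally to torch.nn.GaussianNLLLoss, so no y_graph_scalars_loss_config key appears; the values are illustrative.

```python
# Sketch of a mean_var_ani_model config. There is no y_graph_scalars_loss_config
# key: that loss is hardcoded to torch.nn.GaussianNLLLoss.
mean_var_ani_model_config = {
    "which_ani": "ani1",         # H, C, N, O
    "scaling_mean": -0.4,        # illustrative value
    "scaling_std": 0.2,          # illustrative value
    "compute_forces": False,
    "y_node_vector_loss_config": None,  # not used here since compute_forces is False
}
```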

Supervised ensemble#

The ensemble_ani_model uses a single AEV (atomic environment vector) computer shared by multiple ANI models. It has the same config as the ani_model, with an additional n_models argument for the number of models in the ensemble. The model config is as follows (a full example is sketched after the list):

  • which_ani: Literal["ani1", "ani2", "ani_spice"] = "ani1"

    The ANI model to use.

    • "ani1" is for H, C, N, O

    • "ani2" is for H, C, N, O, S, F, Cl

    • "ani_spice" is for H, C, N, O, F, P, S, Cl, Br, I

  • n_models: int = 4

    The number of models in the ensemble.

  • scaling_mean: float = 0.0

    The scaling mean of the model output. This is usually computed as the mean of (molecule_energy - molecule_self_energy).

  • scaling_std: float = 1.0

    The scaling std of the model output. This is usually computed as the std of (molecule_energy - molecule_self_energy).

  • compute_forces: bool = False

    Whether to compute forces as the gradient of the y_graph_scalars with respect to the node coordinates and use them as the y_node_vector output.

  • y_graph_scalars_loss_config: Optional[Dict] = None

    The loss config for the y_graph_scalars.

  • y_node_vector_loss_config: Optional[Dict] = None

    The loss config for the y_node_vector.
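
Finally, a sketch for the ensemble_ani_model, which adds n_models on top of the ani_model config. As above, the loss-config format is an assumption and the values are illustrative.

```python
# Sketch of an ensemble_ani_model config: one AEV computer shared by n_models ANI networks.
ensemble_ani_model_config = {
    "which_ani": "ani_spice",    # H, C, N, O, F, P, S, Cl, Br, I
    "n_models": 4,               # number of models in the ensemble
    "scaling_mean": -0.4,        # illustrative value
    "scaling_std": 0.2,          # illustrative value
    "compute_forces": True,
    "y_graph_scalars_loss_config": {"name": "MSELoss"},  # assumed loss-config format
    "y_node_vector_loss_config": {"name": "MSELoss"},    # assumed loss-config format
}
```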