This directory contains multiple examples of machine-learning potentials
defined using the ML-IAP package in LAMMPS. The input files are described
below.

in.mliap.snap.Ta06A
-------------------

Run linear SNAP, equivalent to examples/snap/in.snap.Ta06A

in.mliap.snap.WBe.PRB2019
-------------------------

Run linear SNAP, equivalent to examples/snap/in.snap.WBe.PRB2019

in.mliap.snap.quadratic
-----------------------

Run quadratic SNAP

in.mliap.snap.chem
------------------

Run EME-SNAP, equivalent to examples/snap/in.snap.InP.JCPA2020

in.mliap.snap.compute
---------------------

Generate the A matrix, i.e. the gradients (w.r.t. coefficients) of the total
potential energy, forces, and stress tensor, for linear SNAP; equivalent to
in.snap.compute

in.mliap.quadratic.compute
--------------------------

Generate the A matrix, i.e. the gradients (w.r.t. coefficients) of the total
potential energy, forces, and stress tensor, for quadratic SNAP; equivalent
to in.snap.compute.quadratic

in.mliap.pytorch.Ta06A
----------------------

This reproduces the output of in.mliap.snap.Ta06A above, but using the
Python coupling to PyTorch. This example can be run in two different ways:

1: Running a LAMMPS executable: in.mliap.pytorch.Ta06A

First run ``python convert_mliap_Ta06A.py``. It creates a PyTorch energy
model that replicates the SNAP Ta06A potential and saves it in the file
"Ta06A.mliap.pytorch.model.pt". You can then run the example as follows:

`lmp -in in.mliap.pytorch.Ta06A -echo both`

The resultant log.lammps output should be identical to that generated by
in.mliap.snap.Ta06A. If this fails, see the instructions for building the
ML-IAP package with Python support enabled. Also, confirm that the LAMMPS
embedded Python interpreter is working by running ../examples/in.python.

2: Running a Python script: mliap_pytorch_Ta06A.py

Before testing this, ensure that the previous method (running a LAMMPS
executable) works. You can run the example in serial:

`python mliap_pytorch_Ta06A.py`

or in parallel:

`mpirun -np 4 python mliap_pytorch_Ta06A.py`

The resultant log.lammps output should be identical to that generated by
in.mliap.snap.Ta06A and in.mliap.pytorch.Ta06A. Not all Python
installations support this mode of operation. It requires that the Python
interpreter be initialized; if it is not, the script will exit with an
error message. A minimal sketch of this kind of Python driver is given at
the end of this file.

in.mliap.pytorch.relu1hidden
----------------------------

This example demonstrates a simple neural network potential using PyTorch
and SNAP descriptors:

`lmp -in in.mliap.pytorch.relu1hidden -echo both`

It was trained on just the energy component (no forces) of the data used
in the original SNAP Ta06A potential for tantalum (Thompson, Swiler,
Trott, Foiles, Tucker, J Comp Phys, 285, 316 (2015)). Because of the very
small amount of energy training data, it uses just one hidden layer with a
ReLU activation function. It is not expected to be very accurate for
forces. A sketch of a network with this shape is shown below.

NOTE: Unlike the previous example, this example uses a pre-built PyTorch
file `Ta06A.mliap.pytorch.model.pt`. It is read using `torch.load`, which
implicitly uses the Python `pickle` module. This is known to be insecure:
it is possible to construct malicious pickle data that will execute
arbitrary code during unpickling. Never load data that could have come
from an untrusted source, or that could have been tampered with. Only load
data you trust.
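As a rough illustration of the relu1hidden architecture described above, the
following PyTorch snippet builds a network of the same shape: one hidden
layer with a ReLU activation, mapping per-atom SNAP descriptors to a
per-atom energy. This is only a sketch; the layer sizes are placeholders,
and the actual example ships a pre-built, ML-IAP-wrapped model file rather
than this bare module.

```python
import torch

# Sketch of a one-hidden-layer ReLU energy model over SNAP descriptors.
# n_descriptors and n_hidden are placeholder values, not the ones used to
# train the model file shipped with the example.
n_descriptors = 55
n_hidden = 16

model = torch.nn.Sequential(
    torch.nn.Linear(n_descriptors, n_hidden),
    torch.nn.ReLU(),
    torch.nn.Linear(n_hidden, 1),   # per-atom energy contribution
)

# torch.save/torch.load rely on pickle, hence the security note above.
torch.save(model, "relu1hidden_sketch.pt")
```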
in.mliap.nn.Ta06A
-----------------

Run linear SNAP using the "nn" model style, equivalent to
examples/snap/in.snap.Ta06A

in.mliap.nn.cu
--------------

Run a neural network potential for Cu, a combination of SNAP descriptors
and the "nn" model style

in.mliap.unified.lj.Ar
----------------------

This reproduces the output of examples/melt/in.melt, but using the
``unified`` keyword and the Python coupling to PyTorch. This example can be
run in two different ways:

1: Running a LAMMPS executable: in.mliap.unified.lj.Ar

You can run the example as follows:

`lmp -in in.mliap.unified.lj.Ar -echo both`

The resultant thermo output should be identical to that generated by
in.melt. The input first runs some Python code that creates an ML-IAP
unified model replicating the ``lj/cut`` pair style with the parameters
defined in examples/melt/in.melt and saves it in the file
"mliap_unified_lj_Ar.pkl". If this fails, see the instructions for building
the ML-IAP package with Python support enabled. Also, confirm that the
LAMMPS embedded Python interpreter is working by running
../examples/in.python and that the LAMMPS Python module is installed and
working.

2: Running a Python script: mliap_unified_lj_Ar.py

Before testing this, ensure that the previous method (running a LAMMPS
executable) works. You can run the example in serial:

`python mliap_unified_lj_Ar.py`

or in parallel:

`mpirun -np 4 python mliap_unified_lj_Ar.py`

The resultant log.lammps output should be identical to that generated by
in.melt and in.mliap.unified.lj.Ar.

in.mliap.so3.Ni_Mo
------------------

Example of a linear model with SO3 descriptors for Ni/Mo

in.mliap.so3.nn.Si
------------------

Example of an NN model with SO3 descriptors for Si

NOTE: The use of ACE within ML-IAP requires that the generalized
Clebsch-Gordan coefficients (a.k.a. coupling coefficients, ccs, etc.) be
defined in a C-Tilde file. These are used to construct the ACE descriptors
for both linear and non-linear models within ML-IAP, and they define the
size of the ACE basis as well as the hyperparameters of the basis functions
used for the descriptors. These files may be generated with various
software packages for fitting ACE models and must follow the format of the
ACE C-Tilde potential files in `../PACKAGES/pace/*.ace` or the convenient
YAML-based `.yace` format. For the ACE ML-IAP interface, the coupling
coefficient file should NOT include linear model coefficients. If linear
model coefficients are included in the coupling coefficient file, ML-IAP
will evaluate ACE contributions to a linear model rather than ACE
descriptors. One convenient feature of mliap/ace is that it lets the user
decouple the linear model coefficients from the ACE descriptors; both are
typically included together in ACE potentials in the C-Tilde format. This
is demonstrated in the first ACE example: linear model coefficients are
provided separately (in `linear_ACE_coeff.acecoeff`) from the coupling
coefficients (in `linear_ACE_ccs.yace`). They are combined in a linear
PyTorch model using `convert_mliap_lin_ACE.py`.

in.mliap.pytorch.ace
--------------------

Example of a linear model with ACE descriptors for a minimal Ta dataset

in.mliap.ace.compute
--------------------

Example of calculating multi-element ACE descriptors through ML-IAP

in.mliap.pytorch.ace.NN
-----------------------

Example of an NN model with ACE descriptors for a minimal Ta dataset

mliap_pytorch_ACE.py
--------------------

Example of an NN model with ACE descriptors for a minimal Ta dataset,
driven through mliappy (see the driver sketch below)
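The Python-script examples above (mliap_pytorch_Ta06A.py,
mliap_unified_lj_Ar.py, mliap_pytorch_ACE.py) all drive LAMMPS from an
embedded Python interpreter. The sketch below is a simplified illustration
of that pattern, not a copy of those scripts: it assumes LAMMPS was built
as a shared library with its Python module installed, the input file name
is only a placeholder, and the real scripts additionally set up the
PyTorch/ML-IAP coupling before the pair style is defined.

```python
from mpi4py import MPI   # optional, but lets the same script run under mpirun
from lammps import lammps

# Minimal driver sketch: create a LAMMPS instance inside Python and run one
# of the ML-IAP example inputs.
lmp = lammps(cmdargs=["-echo", "both"])
lmp.file("in.mliap.pytorch.Ta06A")   # placeholder input file name
lmp.close()
```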