This is an OpenMM plugin that makes it possible to use PyTorch models to create external forces.
Before building this package, you should install LibTorch from its CXX ABI binary distribution, or build it from source following the instructions here.
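For context, plugins built on LibTorch typically consume serialized TorchScript modules. The sketch below shows how such a model could be exported from Python; the harmonic-restraint model is a made-up example, and the exact API MLForce uses to load the file is not shown here.

```python
import torch

class HarmonicRestraint(torch.nn.Module):
    """Toy energy model: harmonic restraint pulling every particle to the origin."""

    def __init__(self, k: float):
        super().__init__()
        self.k = k  # force constant

    def forward(self, positions: torch.Tensor) -> torch.Tensor:
        # positions: (N, 3) particle coordinates; return a scalar energy
        return self.k * torch.sum(positions ** 2)

# Compile to TorchScript and serialize, so a LibTorch-based C++ plugin can load it
module = torch.jit.script(HarmonicRestraint(k=100.0))
module.save("model.pt")
```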
Once LibTorch is installed, MLForce can be installed as follows:
1- Clone mlforce_ft from its repository:
git clone https://github.com/ADicksonLab/mlforce_ft.git
2- Use conda (or mamba) to make a new virtual environment using the environment.yml:
conda env create -n myenv -f environment.yml
conda activate myenv
3- Create a build directory in which to install MLForce:
cd mlforce_ft
mkdir build && cd build
4- Set the CUDACXX environment variable to point to your local nvcc compiler, e.g.:
export CUDACXX="/usr/local/cuda/bin/nvcc"
5- Run cmake, including the directory where you installed LibTorch:
cmake -DCMAKE_PREFIX_PATH="/path/to/libtorch" ..
6- Run ccmake:
ccmake -i ..
7- Set OPENMM_DIR and CMAKE_INSTALL_PREFIX to point to the directory where OpenMM is installed.
In this case, it should point to the directory for your conda/mamba environment, e.g.: /your/home/dir/micromamba/envs/myenv
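If you are unsure of that directory, it can be read from the active environment: conda and mamba set the CONDA_PREFIX environment variable on activation. A minimal sketch (the helper function is hypothetical, not part of MLForce):

```python
import os

def guess_install_prefix(environ=os.environ):
    """Return the root of the active conda/mamba environment, or None if none is active."""
    return environ.get("CONDA_PREFIX")

# With an environment active, e.g.:
#   guess_install_prefix({"CONDA_PREFIX": "/home/user/micromamba/envs/myenv"})
# returns "/home/user/micromamba/envs/myenv"
```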
8- Set LIBTORCH_DIR to point to the directory where you installed LibTorch.
9- If you plan to build the CUDA platform, make sure that CUDA_TOOLKIT_ROOT_DIR is set correctly
and that NN_BUILD_CUDA_LIB is selected. If you don’t plan to use OpenCL, make sure that NN_BUILD_OPENCL_LIB is NOT selected.
10- Press “Configure”, then press “Generate”.
11- Type make install to install the plugin, and make PythonInstall to
install the Python wrapper.
12- Add the LibTorch library path to the environment variable LD_LIBRARY_PATH:
export LIBTORCH_LIBRARY_PATH="path/to/libtorch/lib"
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$LIBTORCH_LIBRARY_PATH"
13- Test that the installation works:
python -c "import mlforce"
14- To run all the test cases, build the “test” target by running:
make test
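The LD_LIBRARY_PATH manipulation above can be sketched in Python, which is handy when launching simulations from scripts rather than a shell. This is a hypothetical helper, not part of MLForce:

```python
import os

def prepend_library_path(new_dir, current=None):
    """Return an LD_LIBRARY_PATH value with new_dir prepended, avoiding duplicates."""
    if current is None:
        current = os.environ.get("LD_LIBRARY_PATH", "")
    # Drop empty entries and any existing copy of new_dir, then put new_dir first
    parts = [p for p in current.split(":") if p and p != new_dir]
    return ":".join([new_dir] + parts)

# e.g. prepend_library_path("/path/to/libtorch/lib", "/usr/lib")
# returns "/path/to/libtorch/lib:/usr/lib"
```

Note that LD_LIBRARY_PATH must be set before the process starts (or before the dynamic loader first resolves the LibTorch libraries), which is why the README exports it in the shell.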