Plumed Flagship meeting: Lecture: Parallel and GPU programming in PLUMED
We have some simple examples of using parallelism in PLUMED:
- SerialCoordination.cpp is a simplified version of the CV Coordination
- OMPCoordination.cpp is an example of how OpenMP can be used in PLUMED
- MPICoordination.cpp is an example of how MPI can be used in PLUMED
- CUDACoordination.cpp is an example of using CUDA to solve this problem (it needs an ad hoc compilation step and the kernel CUDACoordinationkernel.cu)

(The original Coordination combines the use of both OpenMP and MPI.)
PLUMED helps the developer with some tools for parallelism:
- tools/OpenMP.h contains some functions that are useful when working with OpenMP. In the example we use OpenMP::getNumThreads() to get the number of threads from the environment variable PLUMED_NUM_THREADS.
- tools/Communicator.h is available as the variable comm, which is inherited through PLMD::Action. PLMD::Communicator is an interface to some of the functionality of the C API of mpi.h. In the example we use PLMD::Communicator::Get_size() to get the number of processes spawned by mpirun, PLMD::Communicator::Get_rank() to get the id of the process, and PLMD::Communicator::Sum() to sum the results of the coordination and make the correct value available for further calculations (see the sketch after this list).
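As a rough illustration of how these helpers fit together, here is a minimal sketch, assuming we are inside a method of an Action whose source file includes tools/OpenMP.h; partial_sum is just a placeholder name for whatever quantity each process computes:

unsigned nthreads = OpenMP::getNumThreads(); // read from PLUMED_NUM_THREADS
unsigned rank     = comm.Get_rank();         // id of this MPI process
unsigned stride   = comm.Get_size();         // number of processes spawned by mpirun
double partial_sum = 0.0;
// ... each rank (and, within it, each thread) accumulates its share of the work ...
comm.Sum(partial_sum);                       // after this call every rank holds the total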
Intro
The exercise uses Base.hpp to provide a very basic version of the COORDINATION CV: in this case it returns, summed over all atoms, the number of atoms that are within R_0 of each atom. For simplicity, PBCs are ignored throughout all of the examples.
MyCoordinationBase in Base.hpp does not implement the calculate() method, so the examples will have more or less the following structure (a possible serial body for calculate() is sketched under "The serial code" below):
#include "Base.hpp"

namespace PLMD {

class MyCoordination : public MyCoordinationBase {
public:
  explicit MyCoordination(const ActionOptions &ao)
      : Action(ao), MyCoordinationBase(ao) {}
  ~MyCoordination() = default;
  // active methods:
  void calculate() override;
};

PLUMED_REGISTER_ACTION(MyCoordination, "MYCOORDINATION")

void MyCoordination::calculate() {
  // ...code goes here...
}

} // namespace PLMD
For automation purposes (see the Makefile) we use the same key PLUMED_REGISTER_ACTION(MyCoordination, "MYCOORDINATION") for all of the examples.
Prerequisites
- PLUMED 2.9.0 configured and installed with --enable-modules=all and MPI (on my workstation I am using gcc 9.4.0 and OpenMPI 4.1.1)
- For the GPU offloading, the example is written with NVIDIA's CUDA (on my workstation I am using CUDA 11.7 with a T1000 card)
The serial code
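A minimal serial sketch of what could go in place of "...code goes here..." in the skeleton above, not the reference solution: getNumberOfAtoms() and getPosition() are assumed to be available through MyCoordinationBase from the usual atomistic-action interface, r_0 is a hypothetical member holding R_0, setValue() is assumed to expose the CV value, and derivatives are ignored:

void MyCoordination::calculate() {
  const unsigned nat = getNumberOfAtoms();
  const double r0_2 = r_0 * r_0;  // r_0: hypothetical member holding R_0
  double ncoord = 0.0;
  // For every atom, count the other atoms closer than R_0 (PBCs ignored)
  for (unsigned i = 0; i < nat; ++i) {
    for (unsigned j = 0; j < nat; ++j) {
      if (i == j) continue;
      const Vector d = getPosition(i) - getPosition(j);
      if (d.modulo2() < r0_2) ncoord += 1.0;
    }
  }
  setValue(ncoord);  // expose the CV value to the rest of PLUMED
}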
Threading: OpenMP
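Under the same assumptions as the serial sketch (and including tools/OpenMP.h), the outer loop can be distributed over threads with an OpenMP reduction, using OpenMP::getNumThreads() so that PLUMED_NUM_THREADS is respected; again only a sketch:

void MyCoordination::calculate() {
  const unsigned nat = getNumberOfAtoms();
  const double r0_2 = r_0 * r_0;               // r_0: hypothetical member holding R_0
  const unsigned nt = OpenMP::getNumThreads(); // honours PLUMED_NUM_THREADS
  double ncoord = 0.0;
  // Distribute the outer loop over nt threads; each thread accumulates into its
  // own private copy of ncoord, which the reduction clause then sums up
  #pragma omp parallel for num_threads(nt) reduction(+:ncoord)
  for (unsigned i = 0; i < nat; ++i) {
    for (unsigned j = 0; j < nat; ++j) {
      if (i == j) continue;
      const Vector d = getPosition(i) - getPosition(j); // no PBC
      if (d.modulo2() < r0_2) ncoord += 1.0;
    }
  }
  setValue(ncoord);
}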
Processes: MPI
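A possible MPI decomposition of the same loop: each process handles a strided subset of the atoms and the partial counts are combined with comm.Sum(), so that every rank ends up with the full value (same assumptions and caveats as above):

void MyCoordination::calculate() {
  const unsigned nat = getNumberOfAtoms();
  const double r0_2 = r_0 * r_0;              // r_0: hypothetical member holding R_0
  const unsigned rank   = comm.Get_rank();    // id of this process
  const unsigned stride = comm.Get_size();    // number of processes spawned by mpirun
  double ncoord = 0.0;
  // Each rank handles atoms rank, rank+stride, rank+2*stride, ...
  for (unsigned i = rank; i < nat; i += stride) {
    for (unsigned j = 0; j < nat; ++j) {
      if (i == j) continue;
      const Vector d = getPosition(i) - getPosition(j); // no PBC
      if (d.modulo2() < r0_2) ncoord += 1.0;
    }
  }
  comm.Sum(ncoord);  // combine the partial counts: every rank now has the full value
  setValue(ncoord);
}

The strided decomposition keeps the workload roughly balanced across ranks without having to compute explicit index ranges.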
EXTRA: GPU offloading with CUDA
Closing information
The files in the Solution directory give the correct results.