Over the past several years we have been actively developing QMC=Chem, a quantum Monte Carlo code. It was designed from the outset for massively parallel simulations and is built around the manager/worker model.
When the program starts, the manager runs on the master node and spawns two other processes: a worker and a data server. The worker is an efficient Fortran executable with minimal memory and disk-space requirements (typically a few megabytes each); its only MPI communication is the initial broadcast of the input data (wave-function parameters, initial positions in the 3N-dimensional space, and the random seed). The task of a worker is outlined as follows:
 while ( Running )
 {
     compute_a_block_of_data();
     Running = send_the_results_to_the_data_server();
 }
The data server is a socket server implemented in Python. When it receives computed data from a worker, it replies with the order given by the manager: compute another block, or stop. The received data are then stored in a database using an asynchronous I/O mechanism. The manager is always aware of the results computed by all the workers; it controls the running/stopping state of the workers and handles user interaction during the simulation.
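Below is a minimal, self-contained sketch of this exchange, not the actual QMC=Chem implementation: the port number, the JSON message format and the helper names (store_in_database, manager_wants_stop) are illustrative assumptions.

 import json
 import socket
 
 HOST, PORT = "localhost", 8765        # illustrative values
 
 def manager_wants_stop():
     # Stand-in for the manager's decision (target error bar reached,
     # wall-time limit exceeded, user request, ...). Always False here.
     return False
 
 def store_in_database(result):
     # Stand-in for the asynchronous database write described above.
     print("stored block from worker", result.get("worker_id"))
 
 def serve():
     # Accept worker connections one at a time and apply the protocol:
     # receive a block of results, store it, reply "continue" or "stop".
     with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
         srv.bind((HOST, PORT))
         srv.listen()
         while True:
             conn, _addr = srv.accept()
             with conn:
                 data = conn.recv(65536)
                 if not data:
                     continue
                 store_in_database(json.loads(data.decode()))
                 conn.sendall(b"stop" if manager_wants_stop() else b"continue")
 
 if __name__ == "__main__":
     serve()

On the worker side, send_the_results_to_the_data_server() then amounts to connecting to this server, sending the serialized block, and returning true only if the reply is "continue".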
QMC=Chem is very well suited to massive parallelism and cloud computing:
- All the implemented algorithms are CPU-bound
- All workers are totally independent
- The load balancing is optimal: the workers always work 100% of the time, independently of their respective CPU speeds
- The code is as portable as possible: the manager is written in standard Python and the worker in standard Fortran.
- The network traffic is minimal and the amount of data transferred over the network can even be adjusted by the user
- The number of simultaneous worker nodes can vary during a calculation
- Fault-tolerance is implemented
- Input and output data are not presented as traditional input and output files: everything is stored in a database, and an API is provided to access it. This allows many forms of user interaction: scripts, graphical user interfaces, command-line tools, web interfaces, etc. (a toy illustration follows this list).
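As a toy illustration of this database-centred I/O (the schema below, a 'blocks' table with a local-energy column 'e_loc', is our assumption, not the actual QMC=Chem layout):

 import sqlite3
 
 # Toy stand-in for the results database described above.
 db = sqlite3.connect(":memory:")
 db.execute("CREATE TABLE blocks (worker_id INTEGER, e_loc REAL)")
 db.executemany("INSERT INTO blocks VALUES (?, ?)",
                [(1, -100.2), (2, -100.4), (1, -100.3)])
 
 # Any script, GUI, command-line tool or web interface can query the
 # same store directly, instead of parsing a traditional output file.
 avg, n = db.execute("SELECT AVG(e_loc), COUNT(*) FROM blocks").fetchone()
 print(f"running average over {n} blocks: {avg:.3f} a.u.")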
Current Features
Methods
- VMC
- DMC with stochastic reconfiguration (the standard estimators behind VMC and DMC are recalled after this list)
- Jastrow factor optimization
- CI coefficients optimization
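For reference, the textbook quantities behind VMC and DMC, in standard notation; these are not QMC=Chem-specific formulas. Psi_T is the trial wave function and R the 3N electron coordinates.

 % Local energy sampled in VMC, and the VMC energy estimate:
 E_L(\mathbf{R}) = \frac{H \Psi_T(\mathbf{R})}{\Psi_T(\mathbf{R})},
 \qquad
 E_{\mathrm{VMC}} =
   \frac{\int |\Psi_T(\mathbf{R})|^2 \, E_L(\mathbf{R}) \, d\mathbf{R}}
        {\int |\Psi_T(\mathbf{R})|^2 \, d\mathbf{R}}
 
 % DMC branching weight over a time step \tau, with trial energy E_T;
 % stochastic reconfiguration redistributes these weights so that the
 % walker population stays fixed:
 w(\mathbf{R}) = e^{-\tau \left( E_L(\mathbf{R}) - E_T \right)}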
Wave functions
- Single determinant
- Multi-determinant
- Nuclear cusp correction (the condition enforced is recalled below)
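For reference, the correction enforces Kato's electron-nucleus cusp condition, which Gaussian basis functions (having zero radial derivative at the nuclei) cannot satisfy. This is a standard result, not specific to QMC=Chem:

 % Kato cusp condition at a nucleus A of charge Z_A: for the spherically
 % averaged wave function \bar\Psi, as the electron-nucleus distance
 % r_A -> 0,
 \left. \frac{1}{\bar{\Psi}} \,
        \frac{\partial \bar{\Psi}}{\partial r_A} \right|_{r_A = 0} = -Z_A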
Links
- Easy development with the IRPF90 tool (http://irpf90.ups-tlse.fr/).
- Access to the input/output data is provided through an API generated by EZFIO, the Easy Fortran I/O library generator (http://ezfio.sf.net).
- Documentation
Parallel speed-up curve
Number of computed blocks for the CuCl2 molecule. The simulation stops when the wall time reaches 5 minutes. Each block consists of 10 walkers performing 2,000 VMC steps. A back-of-the-envelope account of the block throughput is sketched below.
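A back-of-the-envelope sketch of what the curve counts; the per-core block rate used in the example is hypothetical, only the per-block numbers come from the caption:

 # Work contained in one block, taken from the caption above.
 walkers_per_block = 10
 vmc_steps_per_walker = 2_000
 steps_per_block = walkers_per_block * vmc_steps_per_walker
 print(steps_per_block)  # 20000 VMC steps per block
 
 # With a fixed 5-minute wall time, each worker contributes as many blocks
 # as it can finish, so the total block count measures throughput directly;
 # under perfect scaling it grows linearly with the number of cores.
 def total_blocks(n_cores, blocks_per_core):
     # blocks_per_core is hypothetical: whatever one core finishes in 5 min.
     return n_cores * blocks_per_core
 
 print(total_blocks(1_000, 12))  # e.g. 12 blocks/core -> 12,000 blocks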
Features under development
Properties
- Molecular forces
- Moments (dipole, quadrupole, ...)
- Electron density
Practical aspects
- Web interface for input and output
Input file creation
The QMC=Chem input file can be created using the web interface: upload a Q5Cost file or an output file from GAMESS, Gaussian or Molpro, and download the resulting QMC=Chem input directory.
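For scripted use, the upload could in principle be automated along the following lines; this is a hypothetical sketch, and the endpoint URL, form-field name and returned archive name are all assumptions rather than a documented API:

 import requests  # third-party HTTP client (pip install requests)
 
 # Hypothetical endpoint and form-field name; adapt both to the actual
 # web interface.
 URL = "https://example.org/qmcchem_input.py"
 
 with open("molecule.gamess.out", "rb") as f:
     response = requests.post(URL, files={"file": f})
 response.raise_for_status()
 
 # Save whatever the interface returns (the QMC=Chem input directory,
 # presumably packaged as an archive).
 with open("qmcchem_input.tar.gz", "wb") as out:
     out.write(response.content)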
Papers related to the QMC=Chem code

- Quantum Monte Carlo for large chemical systems: Implementing efficient strategies for petascale platforms and beyond (http://dx.doi.org/10.1002/jcc.23216)
- QMC=Chem: a quantum Monte Carlo program for large-scale simulations in chemistry at the petascale level and beyond (http://link.springer.com/chapter/10.1007/978-1-4614-0508-5_13)
- Large-scale quantum Monte Carlo electronic structure calculations on the EGEE grid