
Quantum Espresso 6.5

description

Quantum ESPRESSO is an integrated suite of Open-Source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (https://www.quantum-espresso.org/project/manifesto).

software version

Quantum Espresso 6.5, https://github.com/QEF/q-e/releases.

prepare software

To prepare the Quantum Espresso (ver. 6.5) software, log in to tcr.cent.uw.edu.pl.

Then open an interactive session on any computing node with:

srun -n16 -N1 --pty bash -l    # 16 tasks (-n16) on a single node (-N1), interactive login shell
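
The task count requested with -n16 is exported by Slurm into the session and is reused below by make -j${SLURM_NTASKS}; a quick sanity check inside the session:

echo ${SLURM_NTASKS}    #should print 16 for the srun line above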

When the interactive session has started, go through the Quantum Espresso installation process described by the commands below. It will take about 12 minutes.

#folder for source files
mkdir -p ~/downloads/quantum_espresso_6.5
#folder for compiled binaries
mkdir -p ~/soft/qe-6.5

cd ~/downloads/quantum_espresso_6.5
wget https://github.com/QEF/q-e/releases/download/qe-6.5/qe-6.5-ReleasePack.tgz
tar xvzf qe-6.5-ReleasePack.tgz
cd qe-6.5
module load mpi/openmpi-x86_64

./configure --prefix="/home/users/${USER}/soft/qe-6.5"

time make -j${SLURM_NTASKS} all 
make install

Remember to end the interactive session with the exit command.

If no errors occurred, the compiled Quantum Espresso binaries are available in /home/users/${USER}/soft/qe-6.5/bin.
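
A quick way to verify the installation (assuming the default paths used above) is to list the installed executables:

ls /home/users/${USER}/soft/qe-6.5/bin
#expect pw.x and the other Quantum Espresso executables (e.g. pp.x, ph.x)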

sbatch example

This example uses Quantum Espresso 6.5 and assumes that the path to the binaries is /home/users/${USER}/soft/qe-6.5/bin. Use the qe-test.sbatch file below to run the computation.

#!/bin/bash -l
#SBATCH --job-name="qe-test_N2_n32"
#SBATCH --nodes=2                   # number of computing_nodes
#SBATCH --ntasks=32                 # number of CPU cores (16 * computing_nodes)
#SBATCH --mem-per-cpu=2G
#SBATCH --partition=short
#SBATCH --constraint=intel
#SBATCH --exclusive
#SBATCH --time=2:00:00

WORKDIR="/home/users/${USER}/soft_tests/qe_run_`date +%s`_${RANDOM}/"
mkdir -p ${WORKDIR}
cd ${WORKDIR}

export BIN_DIR="/home/users/${USER}/soft/qe-6.5/bin"
export PATH=${BIN_DIR}:$PATH
export PSEUDO_DIR=${WORKDIR}
export TMP_DIR="/tmp"
   
#copy input files and pseudo files to ${WORKDIR} 
cp /home/users/${USER}/downloads/quantum_espresso_input_files/* ${WORKDIR}
  
module load mpi/openmpi-x86_64

T1=`date +%s`
  
#run pw.x on ${SLURM_NTASKS} MPI processes; -npool distributes the k-points over ${SLURM_NNODES} pools of processes (one pool per node)
mpirun -np ${SLURM_NTASKS} pw.x -npool ${SLURM_NNODES} -inp Ti2N.in > Ti2N.out
  
T2=`date +%s`
echo -e "stop ${T2}\t start ${T1}\t ${SLURM_NNODES}"
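
To submit and monitor the job (assuming the script is saved as qe-test.sbatch, as named above), a typical sequence is:

sbatch qe-test.sbatch    #submit the job to the queue
squeue -u ${USER}        #check the job state
#after completion, the output (e.g. Ti2N.out) is in the timestamped ${WORKDIR} created by the script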

performance tests

The results below show computation time as a function of the resources used (computation scalability) for a specific computational task done with the pw.x program.

Assigning a larger number of computing nodes does not always lead to an (efficient) reduction in computing time (wall-time of the job). To find the most appropriate number of nodes for a specific type of job, it is essential to run one's own benchmarks. In general, parallel jobs should scale to at least 70% efficiency for the sake of other TCR users. One user using twice the resources to squeeze out 10% more performance may be keeping other users from working at all.

The results below should be considered results for this specific computational task on this specific hardware (the TCR cluster), not an overall benchmark of the Quantum Espresso software suite.

nodes   min [s]   avg [s]   median [s]   max [s]   efficiency [%]
1       1862      2065.75   2126         2149      100.00
2       1105      1192.75   1157.5       1351      84.25
3       763       770       768.5        780       81.35
4       798       1026.25   868.5        1570      58.33
5       571       589.75    574          640       65.22
6       488       536       525.5        605       63.59
7       372       502.25    414.5        808       71.51
8       375       489.75    408.5        767       62.07
9       324       403.75    346.5        598       63.85
10      327       456.25    465.5        567       56.94
12      285       429.4     375          609       54.44
16      228       274.25    281.5        306       51.04

*) values (min, avg, median, max, efficiency) do not include failed runs
*) efficiency computed as t1 / (N * tN), where t1 is the minimum computation time on one node and tN is the minimum computation time on N nodes
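
A worked example for N = 2, using the values from the table above:

#t1 = 1862 s (1 node), t2 = 1105 s (2 nodes)
#efficiency = 1862 / (2 * 1105) = 0.8425  ->  84.25 %
echo "scale=4; 1862 / (2 * 1105)" | bc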
