Quantum ESPRESSO is an integrated suite of open-source computer codes for electronic-structure calculations and materials modeling at the nanoscale. It is based on density-functional theory, plane waves, and pseudopotentials (https://www.quantum-espresso.org/project/manifesto).
Quantum Espresso 6.5: https://github.com/QEF/q-e/releases
To prepare the Quantum Espresso (ver. 6.5) software, log in to tcr.cent.uw.edu.pl.
Then open an interactive session on any computing node with:
srun -n16 -N1 --pty bash -l
Once the interactive session has started, go through the Quantum Espresso installation process using the commands below. It takes about 12 minutes.
#folder for source files
mkdir -p ~/downloads/quantum_espresso_6.5
#folder for compiled binaries
mkdir -p ~/soft/qe-6.5
cd ~/downloads/quantum_espresso_6.5
wget https://github.com/QEF/q-e/releases/download/qe-6.5/qe-6.5-ReleasePack.tgz
tar xvzf qe-6.5-ReleasePack.tgz
cd qe-6.5
module load mpi/openmpi-x86_64
./configure --prefix="/home/users/${USER}/soft/qe-6.5"
time make -j${SLURM_NTASKS} all
make install
Remember to end the interactive session with the exit command.
If no errors occurred, the compiled Quantum Espresso binaries are available in /home/users/${USER}/soft/qe-6.5/bin .
Use Quantum Espresso 6.5. This description assumes that the path to the binaries is /home/users/${USER}/soft/qe-6.5/bin . Use the qe-test.sbatch file below to run a computation.
#!/bin/bash -l
#SBATCH --job-name="qe-test_N2_n32"
#SBATCH --nodes=2 # number of computing_nodes
#SBATCH --ntasks=32 # number of CPUs (16 * computing_nodes)
#SBATCH --mem-per-cpu=2G
#SBATCH --partition=short
#SBATCH --constraint=intel
#SBATCH --exclusive
#SBATCH --time=2:00:00
WORKDIR="/home/users/${USER}/soft_tests/qe_run_`date +%s`_${RANDOM}/"
mkdir -p ${WORKDIR}
export BIN_DIR="/home/users/${USER}/soft/qe-6.5/bin"
export PATH=${BIN_DIR}:$PATH
export PSEUDO_DIR=${WORKDIR}
export TMP_DIR="/tmp"
cd ${WORKDIR}
#copy input files and pseudo files to ${WORKDIR}
cp /home/users/${USER}/downloads/quantum_espresso_input_files/* ${WORKDIR}
module load mpi/openmpi-x86_64
T1=`date +%s`
mpirun -np ${SLURM_NTASKS} pw.x -npool ${SLURM_NNODES} -inp Ti2N.in > Ti2N.out
T2=`date +%s`
echo -e "stop ${T2}\t start ${T1}\t ${SLURM_NNODES}"
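The script prints raw start and stop timestamps rather than the elapsed time itself. The wall time in seconds can be recovered from the "stop … start …" line afterwards; a minimal sketch (the timestamp values here are made up for illustration):

```shell
# Parse the "stop <t2>  start <t1>  <nodes>" line emitted by the job
# and compute the elapsed wall time. The values below are hypothetical.
line="stop 1700000360	 start 1700000000	 2"
stop=$(echo "$line" | awk '{print $2}')
start=$(echo "$line" | awk '{print $4}')
elapsed=$((stop - start))
echo "wall time: ${elapsed} s"   # prints "wall time: 360 s"
```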
The results below show computation time as a function of the resources used (computation scalability) for a specific computational task run with the pw.x program.
Assigning a larger number of computing nodes does not always lead to an (efficient) reduction in computing time (wall-time of the job). To find the most appropriate number of nodes for a specific type of job, it is essential to run one's own benchmarks. In general, parallel jobs should scale to at least 70% efficiency for the sake of other TCR users. One user using twice the resources to squeeze out 10% more performance may be keeping other users from working at all.
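Parallel efficiency can be computed as efficiency(N) = T(1 node) / (N × T(N nodes)). A minimal sketch with hypothetical wall times (the numbers are illustrative, not measurements from TCR):

```shell
# Hypothetical wall times (seconds) for the same pw.x job on 1, 2 and 4 nodes.
t1=1200; t2=680; t4=420

# efficiency(N) = t1 / (tN * N), printed as a rounded percentage
eff() { awk -v t1="$1" -v tn="$2" -v n="$3" 'BEGIN { printf "%.0f", 100 * t1 / (tn * n) }'; }

echo "2 nodes: $(eff $t1 $t2 2)% efficient"   # 100*1200/(680*2) ≈ 88%
echo "4 nodes: $(eff $t1 $t4 4)% efficient"   # 100*1200/(420*4) ≈ 71%
```

In this made-up example the 4-node run already sits at the 70% threshold, so requesting even more nodes for this job would likely waste resources.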
To automate these tests, two files were prepared (run_qe-6.4.1_tests.sh, qe-6.4.1.batch). Tests are run with the script run_qe-6.4.1_tests.sh, which prepares parameters and submits a single batch job (qe-6.4.1.batch). The qe-6.4.1.batch file uses Slurm job arrays, which turn each submitted job into 3 separate computational jobs (this is more efficient than submitting 3 separate jobs).
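The actual contents of qe-6.4.1.batch are not reproduced here. As a rough sketch, a Slurm array job that repeats the same pw.x run 3 times from one submission could look like the following (all names and values are illustrative assumptions, not the real file):

```shell
#!/bin/bash -l
#SBATCH --job-name="qe-scaling"
#SBATCH --array=1-3                 # one submission -> 3 repetitions
#SBATCH --nodes=2
#SBATCH --ntasks=32
#SBATCH --time=2:00:00

module load mpi/openmpi-x86_64
# Each array task writes its own output file, indexed by SLURM_ARRAY_TASK_ID.
mpirun -np ${SLURM_NTASKS} pw.x -npool ${SLURM_NNODES} -inp Ti2N.in \
    > Ti2N_${SLURM_ARRAY_TASK_ID}.out
```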
The results below should be considered results of this specific computational task on this specific hardware (the TCR cluster), not an overall benchmark of the Quantum Espresso software suite.