How to use compute nodes
Compilers
Compilers | Version | Licenses |
---|---|---|
ifort, icc, icpc | Intel oneAPI 2021 | 2 floating |
mpiifort, mpiicc, mpiicpc | Intel oneAPI 2021 | |
ifort2020, icc2020, icpc2020 | Intel Parallel Studio 2020 | |
mpiifort2020, mpiicc2020, mpiicpc2020 | Intel Parallel Studio 2020 | |
ifort2017, icc2017, icpc2017 | Intel Parallel Studio 2017 | 5 floating |
mpiifort2017, mpiicc2017, mpiicpc2017 | Intel Parallel Studio 2017 | 2 floating |
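The versioned command names select an older toolchain explicitly. For example (prog.f90 is a placeholder source file), building with the 2020 or 2017 compilers instead of the default oneAPI 2021 compilers looks like:
$ ifort2020 -O3 prog.f90 | Serial, Intel Parallel Studio 2020 |
$ mpiifort2017 -O3 prog.f90 | MPI parallel, Intel Parallel Studio 2017 |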
Compile Options
(a) optimal for Fortran
$ ifort -ipo -ip -O3 -xCASCADELAKE prog.f90 | Serial |
$ mpiifort -ipo -ip -O3 -xCASCADELAKE prog.f90 | MPI parallel |
(b) optimal for C
$ icc -ipo -ip -O3 -xCASCADELAKE prog.c | Serial |
$ mpiicc -ipo -ip -O3 -xCASCADELAKE prog.c | MPI parallel |
(c) optimal for C++
$ icpc -ipo -ip -O3 -xCASCADELAKE prog.cpp | Serial |
$ mpiicpc -ipo -ip -O3 -xCASCADELAKE prog.cpp | MPI parallel |
(d) other options
-qopenmp | Use OpenMP |
-parallel | Use automatic parallelization |
-check | For debug |
-traceback | For debug |
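These options can be combined with the optimization flags from (a)-(c). As a sketch (prog.f90 is a placeholder source file):
$ mpiifort -ipo -ip -O3 -xCASCADELAKE -qopenmp prog.f90 | Hybrid MPI + OpenMP |
$ ifort -check -traceback prog.f90 | Debug build without optimization flags |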
Job Queues
Item \ Queue | IDLbatch | SHARED | NODEX1 | NODEX4 |
---|---|---|---|---|
Node usage | 2 IDL nodes | 2 Shared nodes | 12 Occupied nodes | 12 Occupied nodes |
Number of nodes | 1 | 1 | 1 | 2 - 4 |
Number of cores per node | 1 - 2 | 1 - 26 | 52 | 52 |
Default elapsed time | 24h | 24h | 24h | 24h |
Maximum elapsed time | 120h | 120h | 168h | 120h |
Default memory size per node | 4GB | 4GB | 350GB | 350GB |
Maximum memory size per node | 87GB | 175GB | 350GB | 350GB |
Number of submissions per user | 1 | 8 | 4 | 2 |
Number of executions per user | 1 | 4 | 2 | 1 |
Number of executions per system | 52 | unlimited | unlimited | 2 |
The NODEX1 and NODEX4 queues are reserved for the ISEE computational joint research program.
Job queuing system
(a) Submit a job
$ qsub <options> <job script file name>
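For example, assuming the job script is named run.sh (a placeholder name):
$ qsub run.sh | Submit run.sh with the resources requested in its #PBS lines |
$ qsub -q SHARED run.sh | Submit run.sh, selecting the SHARED queue on the command line |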
(b) Check status of jobs
$ qstat | List all your jobs |
$ qstat -J | Show all nodes used by your jobs |
$ qstat -S | Show status of all nodes |
$ qstat -Q | Show the number of jobs in queues |
$ qstat -f <request ID> | Show detailed information about a job |
(c) Delete a job
$ qdel <request ID>
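For instance, with 12345 standing in for the request ID that qsub and qstat actually report:
$ qstat -f 12345 | Inspect that job in detail |
$ qdel 12345 | Cancel it if it is no longer needed |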
Sample Job Scripts
(a) IDLbatch queue
#!/bin/bash | # shell : do not change |
#PBS -q IDLbatch | # name of queue : do not change |
#PBS -b 1 | # number of nodes : do not change |
#PBS -l elapstim_req=96:00:00 | # elapsed time : default 24:00:00, max 120:00:00 |
#PBS -l cpunum_lhost=1 | # number of cores : 1 - 2 |
#PBS -l memsz_lhost=8GB | # memory size : default 4GB, max 87GB |
#PBS -M user@isee.nagoya-u.ac.jp | # send email to user@isee.nagoya-u.ac.jp |
#PBS -m be | # send email at (b) "beginning" and (e) "end" |
ulimit -s unlimited | # do not change |
cd $PBS_O_WORKDIR | # do not change |
idl < prog.pro | |
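Assuming the script above is saved as idl_job.sh and prog.pro is the user's IDL procedure (both placeholder names), the job is submitted with:
$ qsub idl_job.sh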
(b) SHARED queue
#!/bin/bash | # shell : do not change |
#PBS -q SHARED | # name of queue : do not change |
#PBS -b 1 | # number of nodes : do not change |
#PBS -l elapstim_req=96:00:00 | # elapsed time : default 24:00:00, max 120:00:00 |
#PBS -l cpunum_lhost=26 | # number of cores : 1 - 26 |
#PBS -l memsz_lhost=120GB | # memory size : default 4GB, max 175GB |
#PBS -v OMP_STACKSIZE=512m | # for OpenMP : stack size |
#PBS -v OMP_NUM_THREADS=13 | # for OpenMP : number of threads |
#PBS -M user@isee.nagoya-u.ac.jp | # send email to user@isee.nagoya-u.ac.jp |
#PBS -m be | # send email at (b) "beginning" and (e) "end" |
ulimit -s unlimited | # do not change |
cd $PBS_O_WORKDIR | # do not change |
mpirun -n 2 ./a.out < input.txt | # use 2 processes (x 13 threads = 26 cores) |
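The script assumes ./a.out is a hybrid MPI + OpenMP executable, so that 2 MPI processes x 13 OpenMP threads fill the 26 requested cores. A matching build and submission, with shared_job.sh and prog.f90 as placeholder names, might be:
$ mpiifort -ipo -ip -O3 -xCASCADELAKE -qopenmp prog.f90 | Build the hybrid executable a.out |
$ qsub shared_job.sh | Submit to the SHARED queue |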
(c) NODEX1 queue
#!/bin/bash | # shell : do not change |
#PBS -q NODEX1 | # name of queue : do not change |
#PBS -b 1 | # number of nodes : do not change |
#PBS -l elapstim_req=96:00:00 | # elapsed time : default 24:00:00, max 168:00:00 |
#PBS -v OMP_STACKSIZE=512m | # for OpenMP : stack size |
#PBS -v OMP_NUM_THREADS=13 | # for OpenMP : number of threads |
#PBS -M user@isee.nagoya-u.ac.jp | # send email to user@isee.nagoya-u.ac.jp |
#PBS -m be | # send email at (b) "beginning" and (e) "end" |
ulimit -s unlimited | # do not change |
cd $PBS_O_WORKDIR | # do not change |
mpirun -n 4 ./a.out < input.txt | # use 4 processes (x 13 threads = 52 cores) |
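Here 4 MPI processes x 13 OpenMP threads occupy all 52 cores of the node. Keeping processes x threads = 52, other decompositions should also fit; for example, a pure-MPI variant (an assumption, not an officially documented configuration) would change only two lines:
#PBS -v OMP_NUM_THREADS=1 | # for OpenMP : number of threads |
mpirun -n 52 ./a.out < input.txt | # use 52 processes (x 1 thread = 52 cores) |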
(d) NODEX4 queue
#!/bin/bash | # shell : do not change |
#PBS -q NODEX4 | # name of queue : do not change |
#PBS -b 4 | # number of nodes : 2 - 4 |
#PBS -T intmpi | # do not change |
#PBS -l elapstim_req=96:00:00 | # elapsed time : default 24:00:00, max 120:00:00 |
#PBS -v OMP_STACKSIZE=512m | # for OpenMP : stack size |
#PBS -v OMP_NUM_THREADS=26 | # for OpenMP : number of threads |
#PBS -M user@isee.nagoya-u.ac.jp | # send email to user@isee.nagoya-u.ac.jp |
#PBS -m be | # send email at (b) "beginning" and (e) "end" |
ulimit -s unlimited | # do not change |
cd $PBS_O_WORKDIR | # do not change |
mpirun -n 8 -npp 2 ./a.out < input.txt | # use 8 processes, 2 per node (x 26 threads = 52 cores) |
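The node count in #PBS -b, the total process count in mpirun -n, and the per-node process count in -npp must stay consistent: 8 processes at 2 per node fill the 4 requested nodes, and 2 processes x 26 threads use the 52 cores on each node. A smaller 2-node run under the same assumptions would change only two lines:
#PBS -b 2 | # number of nodes : 2 - 4 |
mpirun -n 4 -npp 2 ./a.out < input.txt | # use 4 processes, 2 per node (x 26 threads = 52 cores) |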
In Publications
When submitting a publication based on results from the CIDAS system, users are asked to include the following line in the acknowledgements:
The computation was performed on the CIDAS computer system at the Institute for Space-Earth Environmental Research, Nagoya University.