
How to use compute nodes

Compilers

Compilers                              | Version                    | Licenses
ifort, icc, icpc                       | Intel oneAPI 2021          | 2 floating
mpiifort, mpiicc, mpiicpc              | Intel oneAPI 2021          |
ifort2020, icc2020, icpc2020           | Intel Parallel Studio 2020 |
mpiifort2020, mpiicc2020, mpiicpc2020  | Intel Parallel Studio 2020 |
ifort2017, icc2017, icpc2017           | Intel Parallel Studio 2017 | 5 floating
mpiifort2017, mpiicc2017, mpiicpc2017  | Intel Parallel Studio 2017 | 2 floating

Compile Options

(a) optimal for Fortran

$ ifort -ipo -ip -O3 -xCASCADELAKE prog.f90        (serial)
$ mpiifort -ipo -ip -O3 -xCASCADELAKE prog.f90     (MPI parallel)

(b) optimal for C

$ icc -ipo -ip -O3 -xCASCADELAKE prog.c            (serial)
$ mpiicc -ipo -ip -O3 -xCASCADELAKE prog.c         (MPI parallel)

(c) optimal for C++

$ icpc -ipo -ip -O3 -xCASCADELAKE prog.cpp         (serial)
$ mpiicpc -ipo -ip -O3 -xCASCADELAKE prog.cpp      (MPI parallel)

(d) other options

-qopenmp      Use OpenMP
-parallel     Use automatic parallelization
-check        For debugging
-traceback    For debugging
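
These options can be combined with the recommended flags above. For example (the source file name is only illustrative):

$ ifort -qopenmp -ipo -ip -O3 -xCASCADELAKE prog.f90      OpenMP-parallel build
$ mpiifort -qopenmp -ipo -ip -O3 -xCASCADELAKE prog.f90   hybrid MPI + OpenMP build
$ ifort -check -traceback prog.f90                        debug build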

Job Queues

Item \ Queue                     | IDLbatch    | SHARED         | NODEX1            | NODEX4
Node usage                       | 2 IDL nodes | 2 shared nodes | 12 occupied nodes | 12 occupied nodes
Number of nodes                  | 1           | 1              | 1                 | 2 - 4
Number of cores per node         | 1 - 2       | 1 - 26         | 52                | 52
Default elapsed time             | 24h         | 24h            | 24h               | 24h
Maximum elapsed time             | 120h        | 120h           | 168h              | 120h
Default memory size per node     | 4GB         | 4GB            | 350GB             | 350GB
Maximum memory size per node     | 87GB        | 175GB          | 350GB             | 350GB
Number of submissions per user   | 1           | 8              | 4                 | 2
Number of executions per user    | 1           | 4              | 2                 | 1
Number of executions per system  | 52          | unlimited      | unlimited         | 2

The NODEX1 and NODEX4 queues are available only to the ISEE computational joint research program.

Job Queuing System

(a) Submit a job

$ qsub <options> <job script file name>
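
For example, the SHARED-queue sample script shown below could be submitted with (the script file name is only illustrative):

$ qsub run_shared.sh

Because the resource requests are written as #PBS lines inside the script, no further options are normally needed on the command line.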

(b) Check status of jobs

$ qstat                     List all your jobs
$ qstat -J                  Show all nodes used by your jobs
$ qstat -S                  Show status of all nodes
$ qstat -Q                  Show the number of jobs in queues
$ qstat -f <request ID>     Show detailed information of a job

(c) Delete a job

$ qdel <request ID>
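
A typical cycle combines these commands (the script file name is illustrative; use the request ID that qstat reports for your job):

$ qsub run_shared.sh        submit the job
$ qstat                     check its status and note the request ID
$ qdel <request ID>         cancel it if it is no longer needed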

Sample Job Scripts

(a) IDLbatch queue

#!/bin/bash
#PBS -q IDLbatch
#PBS -b 1
#PBS -l elapstim_req=96:00:00
#PBS -l cpunum_lhost=1
#PBS -l memsz_lhost=8GB
#PBS -M user@isee.nagoya-u.ac.jp
#PBS -m be

ulimit -s unlimited
cd $PBS_O_WORKDIR
idl < prog.pro

Line-by-line notes:
  #!/bin/bash                       shell : do not change
  #PBS -q IDLbatch                  name of queue : do not change
  #PBS -b 1                         number of nodes : do not change
  #PBS -l elapstim_req=96:00:00     elapsed time : default 24:00:00, max 120:00:00
  #PBS -l cpunum_lhost=1            number of cores : 1 - 2
  #PBS -l memsz_lhost=8GB           memory size : default 4GB, max 87GB
  #PBS -M user@isee.nagoya-u.ac.jp  send email to user@isee.nagoya-u.ac.jp
  #PBS -m be                        send email at (b) "beginning" and (e) "end"
  ulimit -s unlimited               do not change
  cd $PBS_O_WORKDIR                 do not change
  idl < prog.pro                    run IDL on prog.pro

(b) SHARED queue

#!/bin/bash
#PBS -q SHARED
#PBS -b 1
#PBS -l elapstim_req=96:00:00
#PBS -l cpunum_lhost=26
#PBS -l memsz_lhost=120GB
#PBS -v OMP_STACKSIZE=512m
#PBS -v OMP_NUM_THREADS=13
#PBS -M user@isee.nagoya-u.ac.jp
#PBS -m be

ulimit -s unlimited
cd $PBS_O_WORKDIR
mpirun -n 2 ./a.out < input.txt 

Line-by-line notes:
  #!/bin/bash                       shell : do not change
  #PBS -q SHARED                    name of queue : do not change
  #PBS -b 1                         number of nodes : do not change
  #PBS -l elapstim_req=96:00:00     elapsed time : default 24:00:00, max 120:00:00
  #PBS -l cpunum_lhost=26           number of cores : 1 - 26
  #PBS -l memsz_lhost=120GB         memory size : default 4GB, max 175GB
  #PBS -v OMP_STACKSIZE=512m        for OpenMP : stack size
  #PBS -v OMP_NUM_THREADS=13        for OpenMP : number of threads
  #PBS -M user@isee.nagoya-u.ac.jp  send email to user@isee.nagoya-u.ac.jp
  #PBS -m be                        send email at (b) "beginning" and (e) "end"
  ulimit -s unlimited               do not change
  cd $PBS_O_WORKDIR                 do not change
  mpirun -n 2 ./a.out < input.txt   use 2 processes (x 13 threads = 26 cores)
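
The process and thread counts can be varied as long as their product does not exceed the requested number of cores. As a sketch (not a tested configuration), a pure-MPI run on the same 26 cores would change these two lines of the script above:

  #PBS -v OMP_NUM_THREADS=1            one thread per MPI process
  mpirun -n 26 ./a.out < input.txt     26 processes x 1 thread = 26 cores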

(c) NODEX1 queue

#!/bin/bash
#PBS -q NODEX1
#PBS -b 1
#PBS -l elapstim_req=96:00:00
#PBS -v OMP_STACKSIZE=512m
#PBS -v OMP_NUM_THREADS=13
#PBS -M user@isee.nagoya-u.ac.jp
#PBS -m be

ulimit -s unlimited
cd $PBS_O_WORKDIR
mpirun -n 4 ./a.out < input.txt 

Line-by-line notes:
  #!/bin/bash                       shell : do not change
  #PBS -q NODEX1                    name of queue : do not change
  #PBS -b 1                         number of nodes : do not change
  #PBS -l elapstim_req=96:00:00     elapsed time : default 24:00:00, max 168:00:00
  #PBS -v OMP_STACKSIZE=512m        for OpenMP : stack size
  #PBS -v OMP_NUM_THREADS=13        for OpenMP : number of threads
  #PBS -M user@isee.nagoya-u.ac.jp  send email to user@isee.nagoya-u.ac.jp
  #PBS -m be                        send email at (b) "beginning" and (e) "end"
  ulimit -s unlimited               do not change
  cd $PBS_O_WORKDIR                 do not change
  mpirun -n 4 ./a.out < input.txt   use 4 processes (x 13 threads = 52 cores)

(d) NODEX4 queue

#!/bin/bash
#PBS -q NODEX4
#PBS -b 4
#PBS -T intmpi
#PBS -l elapstim_req=96:00:00
#PBS -v OMP_STACKSIZE=512m
#PBS -v OMP_NUM_THREADS=26
#PBS -M user@isee.nagoya-u.ac.jp
#PBS -m be

ulimit -s unlimited
cd $PBS_O_WORKDIR
mpirun -n 8 -npp 2 ./a.out < input.txt 

Line-by-line notes:
  #!/bin/bash                              shell : do not change
  #PBS -q NODEX4                           name of queue : do not change
  #PBS -b 4                                number of nodes : 2 - 4
  #PBS -T intmpi                           do not change
  #PBS -l elapstim_req=96:00:00            elapsed time : default 24:00:00, max 120:00:00
  #PBS -v OMP_STACKSIZE=512m               for OpenMP : stack size
  #PBS -v OMP_NUM_THREADS=26               for OpenMP : number of threads
  #PBS -M user@isee.nagoya-u.ac.jp         send email to user@isee.nagoya-u.ac.jp
  #PBS -m be                               send email at (b) "beginning" and (e) "end"
  ulimit -s unlimited                      do not change
  cd $PBS_O_WORKDIR                        do not change
  mpirun -n 8 -npp 2 ./a.out < input.txt   use 8 processes and 2 processes per node (x 26 threads = 52 cores)
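
Putting the pieces together, a hybrid MPI + OpenMP code could be built with the recommended flags and then submitted to the NODEX4 queue roughly as follows (source and script file names are only illustrative):

$ mpiifort -qopenmp -ipo -ip -O3 -xCASCADELAKE prog.f90
$ qsub run_nodex4.sh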

In Publications

When submitting a publication based on results from the CIDAS system, users are asked to include the following line in the acknowledgements:

The computation was performed on the CIDAS computer system at the Institute for Space-Earth Environmental Research, Nagoya University.

