High Performance Computing (HPC)
Quick usage instructions
A summary of the steps necessary to get a job done:
- Write a script containing the commands of your computational task.
- Submit it to one of the user queues with qsub, specifying the resources it needs (number of nodes, processes per node, and execution time).
- Wait until the queue manager executes the job; its output is stored in files in the submission directory.
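As a minimal sketch of these steps (the script name, program, and resource values here are illustrative, not taken from the cluster documentation):

```shell
# example.sh -- a minimal PBS/TORQUE job script (illustrative values)
#PBS -l nodes=1:ppn=1,walltime=01:00:00   # 1 process on 1 node, 1 hour
cd "$PBS_O_WORKDIR"        # jobs start in $HOME; move to the submission directory
./my_program > result.txt  # hypothetical computational task
```

Submitted and monitored from the frontend:

```shell
ct$ qsub example.sh   # prints the job ID assigned by the queue manager
ct$ qstat             # shows the job state: Q (queued) or R (running)
```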
Introduction
High Performance Computing (HPC from now on) infrastructures offer CITIUS researchers a platform to solve problems with high computational requirements. A computational cluster is a set of nodes, interconnected by a dedicated network, that can act as a single computational element. This provides enormous computational power (allowing the execution of one big parallel job or several concurrent small ones) on a shared infrastructure.
A queue management system is a program that plans how and when jobs will be executed with the available computational resources. It allows for an efficient use of those resources in systems with multiple users. In our cluster we use PBS/TORQUE.
The way these systems work is:
- The user requests some resources to the queue manager for a computational task. This task is a set of instructions written in a script.
- The queue manager assigns the request to one of its queues.
- When the requested resources are available and depending on the priorities established by the system, the queue manager executes the task and stores the output.
It is important to note that the request and the execution of a given task are independent actions that are not resolved atomically. In fact, it is usual for a task to wait in one of the queues until the requested resources become available. Also, except through the interactive queue described below, interactive use is not possible.
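For example (job ID and file names illustrative), qsub returns immediately with a job identifier while the task itself may still be waiting in a queue; the captured output only appears once the job has run:

```shell
ct$ qsub script.sh
1234.ctcomp2       # job ID (illustrative); the prompt returns at once
ct$ qstat 1234     # state Q = still waiting in a queue, R = running
ct$ ls             # after completion, stdout/stderr are stored as:
script.sh.o1234  script.sh.e1234
```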
Hardware description
Ctcomp2 is a heterogeneous cluster, composed of 8 HP Proliant BL685c G7, 5 Dell PowerEdge M910 and 5 Dell PowerEdge M620 nodes.
- Each HP Proliant node has 4 AMD Opteron 6262 HE (16 cores) processors and 256 GB of RAM (except node1 and the master, which have 128 GB).
- Each Dell PowerEdge M910 node has 2 Intel Xeon L7555 (8 cores, 16 threads) processors and 64 GB RAM.
- Each Dell PowerEdge M620 node has 2 Intel Xeon E5-2650L (8 cores, 16 threads) processors and 64 GB RAM.
- The connection to the cluster is made through a 1 GbE link, but the nodes are interconnected by several 10 GbE networks.
Software description
Job management is handled by the PBS/TORQUE queue manager. To improve energy efficiency, an on-demand power on/off system called CLUES has been implemented.
User queues
There are four user queues and eight system queues. The user queues are routing queues that select, depending on the number of computational resources requested, the system queue in which each job will be executed. Users cannot send their jobs directly to the system queues; jobs have to be submitted to the user queues.
Independently of the queue used for job submission, a user can only specify the following parameters: number of nodes, number of processes per node, and execution time. The amount of memory assigned and the maximum execution time of a job are determined by the system queue into which the job is routed; jobs that exceed those limits during execution will be cancelled. Therefore, for jobs in which memory or execution time is critical, it is recommended to increase the number of processes requested (even if not all of them are used during execution) to guarantee that the job's needs are fulfilled. The system queue also determines the maximum number of jobs per user and their priority. Users are asked to specify the job execution time because a precise estimation of execution times allows the queue management system to use resources efficiently without disturbing the established priorities. In any case, it is advisable to set an execution time long enough to guarantee the correct execution of the job and avoid its cancellation. To execute jobs that do not fit the queue parameters, get in touch with the IT department.
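The three user-adjustable parameters map onto the -l option of qsub (the values below are illustrative):

```shell
# 2 nodes, 8 processes per node (16 processes total), 24 hours of wall time
ct$ qsub -l nodes=2:ppn=8,walltime=24:00:00 script.sh
```

Requesting 16 processes this way routes the job to the system queue with the matching memory and time limits, which is the mechanism the paragraph above suggests for jobs with critical memory or time needs.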
User queues are batch, short, bigmem and interactive.
- batch: the default queue. Accepts up to 10 jobs per user. Jobs sent to this queue can be executed by any system queue.
- short: designed to reduce the waiting time of jobs that do not need much computational time (12 hours maximum) and do not use many resources (fewer than 16 computational cores). It has higher priority than the batch queue and admits up to 40 jobs per user. Jobs sent to this queue can be executed by the system queues np16, np8, np4, np2 and np1. To send a job to this queue it is necessary to use the -q option of the qsub command explicitly:
ct$ qsub -q short script.sh
- bigmem: designed for jobs that need a lot of memory. This queue sets aside a full 64-core node for the job, so nodes=1:ppn=64 in the -l option of qsub is required. It has higher priority than the batch queue and is limited to two jobs per user. To send a job to this queue it is necessary to use the -q option of the qsub command explicitly:
ct$ qsub -q bigmem script.sh
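Since the text above states that nodes=1:ppn=64 is required for this queue, a complete submission combines both options:

```shell
ct$ qsub -q bigmem -l nodes=1:ppn=64 script.sh
```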
- interactive: the only queue that admits interactive sessions on the computational nodes. Only one job per user is allowed, with a maximum execution time of one hour and access to a single core of one node. Use of the interactive queue does not require a script, but it is necessary to indicate the interactivity of the job with the -I option:
ct$ qsub -q interactive -I
The system queues are np1, np2, np4, np8, np16, np32, np64 and parallel.
- np1: jobs that require 1 process and 1 node. Maximum memory is 1.99 GB and maximum execution time is 672 hours.
- np2: jobs that require 2 processes. Maximum memory is 3.75 GB and maximum execution time is 192 hours.
- np4: jobs that require 4 processes. Maximum memory is 7.5 GB and maximum execution time is 192 hours.
- np8: jobs that require 8 processes and up to 5 nodes. Maximum memory is 15 GB and maximum execution time is 192 hours.
- np16: jobs that require 16 processes and up to 5 nodes. Maximum memory is 31 GB and maximum execution time is 192 hours.
- np32: jobs that require 32 processes and up to 5 nodes. Maximum memory is 63 GB and maximum execution time is 288 hours.
- np64: jobs that require 64 processes and up to 5 nodes. Maximum memory is 127 GB and maximum execution time is 384 hours.
- parallel: jobs that require more than 32 processes on at least two separate nodes. Maximum memory is 64 GB and maximum execution time is 192 hours.
The following table summarizes the characteristics of the user and system queues:
| Queue | Processes | Nodes | Memory (GB) | Jobs/user | Maximum time (hours) | Priority |
|---|---|---|---|---|---|---|
| batch | 1-64 | - | - | 128 | - | 1 |
| short | 1-16 | - | - | 256 | - | 3 |
| bigmem | 64 | - | - | 8 | - | 2 |
| interactive | 1 | 1 | 2 | 1 | 1 | 7 |
| np1 | 1 | 1 | 1.99 | 120 | 672 | 6 |
| np2 | 2 | 2 | 3.75 | 120 | 192 | 5 |
| np4 | 4 | 4 | 7.5 | 60 | 192 | 4 |
| np8 | 8 | 5 | 15 | 60 | 192 | 4 |
| np16 | 16 | 5 | 31 | 15 | 192 | 3 |
| np32 | 32 | 5 | 63 | 15 | 288 | 2 |
| np64 | 64 | 5 | 127 | 3 | 384 | 1 |
| parallel | 32-160 | 5 | 64 | 15 | 192 | 3 |
- Processes: Maximum number of processes by job in this queue.
- Nodes: Maximum number of nodes on which the job will be executed.
- Memory: Maximum virtual memory concurrently used by all the job processes.
- Jobs/user: Maximum number of jobs per user regardless of their state.
- Maximum time (hours): Maximum real time during which the job can be in the execution state.
- Priority: Priority of the execution queue relative to the other queues. A higher value means more priority. Please note that, lacking other criteria, any job sent with qsub will by default be executed in np1, with its limits.
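To illustrate that default (the routing shown here follows the table above; script name illustrative):

```shell
# No resource request: treated as a 1-process job, so np1 limits apply
# (1.99 GB of memory, 672 hours)
ct$ qsub script.sh

# Requesting 8 processes routes the job to np8 instead (15 GB, 192 hours)
ct$ qsub -l nodes=1:ppn=8 script.sh
```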