The Frontend

The Frontend is the node you connect to remotely. Its primary function is to allow all users remote access to the calculation clusters and, in limited circumstances, to edit and compile source code. It must never be used to execute resource-intensive code: this slows down the work of other users, degrades cluster functionality, and can eventually block the entire infrastructure.

If an executable really must be tested on the Frontend, the responsible user must actively monitor it and make sure it does not run for more than a few seconds.
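
As a minimal sketch of such a test (the program name is only a placeholder), the run can be bounded in time with the standard timeout utility and monitored from a second terminal:

  timeout 30s ./my_test_program   # kill the test automatically after 30 seconds
  top -u $USER                    # from another terminal: watch your own processes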

Job Management

To execute serial or parallel code, it is necessary to use the Slurm Workload Manager, which allocates the required resources and manages the priority of requests. Below are some of the basic functions and operating instructions for submitting serial and parallel jobs via Slurm; please refer to the official documentation for further information.
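
Day-to-day interaction with Slurm goes through a handful of standard commands (a quick reference, not specific to this cluster):

  sbatch jobscript.sh     # submit a job script
  squeue -u $USER         # list your pending and running jobs
  scancel <jobNumber>     # cancel a job
  sinfo                   # show the state of partitions and nodes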

For each job, it is necessary to specify via a batch script the required resources (e.g. number of nodes, number of processors, memory, execution time) and, optionally, any further constraints or parameters (e.g. a specific group of nodes).
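
As a rough sketch, these resources translate into #SBATCH directives such as the following (the values are purely illustrative and must be adapted to the partition in use):

  #SBATCH --nodes=1            ## number of nodes
  #SBATCH --ntasks=8           ## number of processors (tasks)
  #SBATCH --mem-per-cpu=2G     ## memory per processor
  #SBATCH --time=02:00:00      ## maximum execution time
  #SBATCH --constraint=ib      ## optional constraint (e.g. Infiniband nodes)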

Submission via script

Although it is possible to pass job submission options to the workload manager via command-line parameters, it is normally preferable to collect them permanently in a bash script (the job script).
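
For completeness, the same options can also be given directly on the sbatch command line, where they take precedence over the corresponding #SBATCH directives in the script; the values below are only an example:

  sbatch --partition=m5 --ntasks=56 --mem-per-cpu=2G runParallel.sh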

The job script is ideally divided into three sections:

  • The header, consisting of commented text with information and notes useful to the user but ignored by the system (the syntax of the comments is #text-for-user…);
  • The Slurm settings, in which the instructions for launching the actual job are specified (the syntax of the directives is #SBATCH --option);
  • The module loading and code execution, whose structure varies according to the particular software each user is using.

Below is an example job script (runParallel.sh) for parallel computing:

#!/bin/bash
#------------------------------------------------------------------------------
#
# University | DIFA - Dept of Physics and Astrophysics
#     of     | Open Physics Hub
#  Bologna   | (https://site.unibo.it/openphysicshub/en)
#------------------------------------------------------------------------------
#
# License
#    This is free software: you can redistribute it and/or modify it
#    under the terms of the GNU General Public License as published by
#    the Free Software Foundation, either version 3 of the License, or
#    (at your option) any later version.
#
# Author
#    Carlo Cintolesi
#
# Application
#    slurm workload manager
#
# Usage
#    run a job:        sbatch run.sh
#    check processes:  slurmtop
#    delete a job:     scancel <jobNumber>
#
# Description
#    Run a job on the new cluster of OPH with SLURM
#
#------------------------------------------------------------------------------#
# SLURM setup
#------------------------------------------------------------------------------#

#- (1) Choose the partition where to launch the job,
#      and the account of your research group
#-
##SBATCH --partition=g1        ## GPU node
#SBATCH --partition=m5         ## Matrix nodes 23,24,25

#- (2) Select the nodes to work on (discouraged in Matrix),
#      the number of tasks to be used (or specify the number of nodes and tasks),
#      the Infiniband constraint (encouraged in Matrix),
#      the RAM memory available for each processor
#-
#SBATCH --constraint=ib        ## Infiniband, keep for all Matrix nodes
#SBATCH --ntasks=56            ## number of processors
#SBATCH --mem-per-cpu=2G       ## RAM per cpu (to be tuned)

#- (3) Set the name of the job, the log and error files,
#      define the email address for communications (UniBo addresses only)
#-
#SBATCH --job-name="jobName"   ## job name in the scheduler
#SBATCH --output=infoRun%j     ## log file
#SBATCH --error=err%j          ## error file
#SBATCH --mail-type=ALL        ## events for which an email is sent
#SBATCH --mail-user=nome.cognome@unibo.it

#------------------------------------------------------------------------------#
# Modules setup and applications run
#------------------------------------------------------------------------------#

#- (4) Modules to be loaded
#-
ADD MODULES YOU NEED

#- (5) Run the job
#-
mpirun --prefix $MPI_HOME -n 2 --mca pml ucx -x UCX_NET_DEVICES=mlx5_0:1 ./SCRIPT

#--------------------------------------------------------------------------end #
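
Once the script is ready, a typical workflow (assuming it is saved as runParallel.sh; the job ID below is illustrative) is to submit it and follow the log file defined by --output:

  sbatch runParallel.sh     # prints e.g. "Submitted batch job 12345"
  tail -f infoRun12345      # follow the log file while the job runs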
