====== Storage types ======

The DIFA-OPH cluster offers different storage areas with very different characteristics.
===== /home/ =====

This is a storage area available from every node, and it is the space that you access when you connect to the cluster frontend. It is meant to store non-reproducible data (e.g. source code) and is regularly backed up. It should be **used for source files, compiling code**, or for jobs that do not need a lot of space (see ''/…'' below).

The /home is the area where your home folders are stored, as well as other shared areas such as ''/…'':

  * **/…**
  * **/…**
    * This space is web-accessible at ''https://apps.difa.unibo.it/…''
    * Web access is read-only and it is not possible to create dynamic pages.
    * **Per-sector quota** of 1 TB (soft) / 2 TB (hard), except astro with 4.4 TB/…
    * Requires either an index.html or a .htaccess file with ''…''
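As a sketch of the last point: the exact directive the page requires in the .htaccess file is elided above, so ''Options +Indexes'' below is only an assumption (it is the usual Apache directive for enabling directory listings), and the per-user path in the usage comment is a guess at the layout.

```shell
#!/bin/sh
# Sketch: prepare a folder under the web area so it can be listed over HTTP.
# 'Options +Indexes' is an ASSUMPTION for the elided directive; check the
# cluster documentation for the directive actually required.
make_listable() {
    dir=$1
    mkdir -p "$dir"
    printf 'Options +Indexes\n' > "$dir/.htaccess"
}

# make_listable "/home/web/$USER"   # assumed per-user layout, adjust as needed
```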
==== Technical characteristics ====

  * NFS-mount via Ethernet (1 Gbps, which is not very fast but quite responsive);
  * Quota limit of 50 GB (100 GB as hard limit); check how much you are using with ''…''.
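The cluster's own quota-check command is elided above; as a generic stand-in, ''du'' can show how much of the 50 GB soft limit a directory tree is using. A minimal sketch, assuming only standard POSIX tools:

```shell
#!/bin/sh
# Sketch: compare a directory's disk usage against a soft quota given in GB.
# 'du' is a portable stand-in for the cluster's (elided) quota command.
check_quota() {
    dir=$1; soft_gb=$2
    used_kib=$(du -sk "$dir" | cut -f1)    # usage in KiB
    soft_kib=$((soft_gb * 1024 * 1024))    # GB -> KiB
    if [ "$used_kib" -gt "$soft_kib" ]; then
        echo "OVER"
    else
        echo "OK"
    fi
}

# check_quota "$HOME" 50    # "OK" while under the 50 GB soft limit
```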
===== /scratch/ =====

This is the fast input/output area, to be used for direct read/write operations from the compute nodes. There is no quota in this area, but an automatic cleaning procedure is enforced on all files older than 40 days to keep the disk space from being exhausted, since a full disk would make running jobs crash when trying to write their outputs. Therefore, once your jobs are finished you are recommended to archive the relevant data to ''/…''.

:!: Folders inside sector areas **must** use the account as their name, or you won't get important mails => possible data loss.

==== Technical characteristics ====

  * Parallel filesystem, for quick (SSD-backed) access to the data you are working on;
  * No quota, but **files older than 40 days will be automatically deleted** without further notice;
  * No cross-server redundancy, just local RAID. Hence, if (when) a server (or two disks in the same RAID) fails, all data becomes unavailable.
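Before the 40-day cleanup catches you by surprise, you can preview which of your files are already past the threshold. A small sketch, assuming a ''/scratch/$USER'' layout (the actual sector/account folder structure is not spelled out above):

```shell
#!/bin/sh
# Sketch: list files that the automatic cleaning procedure would delete.
# -mtime +40 matches files last modified more than 40 days ago, mirroring
# the cleanup policy described on this page.
list_expiring() {
    dir=$1
    find "$dir" -type f -mtime +40
}

# Usage on the cluster (path is an assumption):
#   list_expiring "/scratch/$USER"   # then archive what matters to /archive/
```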
===== /archive/ =====

This is the main archive area, to be used for large files, big datasets and archives.

:!: Folders inside sector areas **must** use the account as their name, or you won't get important mails => possible data loss.

==== Technical characteristics ====

  * **Not to be used to store a large number of small files**: this will compromise the functionality of the storage space, eventually blocking all reading/writing operations;
  * Max size for a single file is 8 TB. When archiving big datasets, please split them into sub-folders, …
  * Read-only access from compute nodes; read/write only from frontends;
  * Quota imposed (both on file size and number of files) per sector, with extra space allocated for specific projects (in /…);
  * Currently ACLs (setfacl) are not supported (cephfs exported via NFS-Ganesha …).
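Since /archive/ must not hold large numbers of small files, the usual workaround is to bundle a result directory into a single tarball before copying it over. A minimal sketch, with example paths (the real sector/project layout is an assumption):

```shell
#!/bin/sh
# Sketch: pack a directory of many small files into one compressed tarball,
# which is what /archive/ is designed to hold. Remember the 8 TB per-file
# limit: very large datasets should be packed as several smaller tarballs.
bundle() {
    src=$1; out=$2
    tar -czf "$out" -C "$(dirname "$src")" "$(basename "$src")"
}

# bundle "/scratch/$USER/run42" "/archive/…/run42.tar.gz"   # assumed paths
```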
==== Monitoring system of space usage ====

To allow users/…

In particular:

  * every sector/…
  * every sector/…
  * individual users will receive an email only if their sector/…
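The kind of threshold check behind such notification emails can be sketched as follows; the percentages and labels here are illustrative assumptions, not the cluster's actual thresholds:

```shell
#!/bin/sh
# Sketch: classify a sector's usage against its quota. The 90% warning
# threshold and the labels are ASSUMPTIONS for illustration only.
warn_level() {
    used=$1; quota=$2            # both in the same unit, e.g. GB
    pct=$((100 * used / quota))
    if [ "$pct" -ge 100 ]; then echo "over-quota"
    elif [ "$pct" -ge 90 ]; then echo "warning"
    else echo "ok"
    fi
}
```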
====== $TMPDIR local node space (advanced) ======

Every node does have some available space on local storage in ''…''.

==== Technical characteristics ====

  * Local space: not shared between multiple nodes, not even for a single multi-node job;
  * Quite fast;
  * Automatically cleaned when the job ends.
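The characteristics above suggest the classic staging pattern: copy inputs into ''$TMPDIR'', compute on the fast local disk, and copy results out before the job ends (and the space is cleaned). A sketch with placeholder paths and a placeholder compute step:

```shell
#!/bin/sh
# Sketch of the $TMPDIR staging pattern inside a batch job.
# $TMPDIR is set by the batch system; we fall back to /tmp when testing.
stage_and_run() {
    input=$1; output=$2
    work="${TMPDIR:-/tmp}/job_$$"    # private work dir on node-local storage
    mkdir -p "$work"
    cp "$input" "$work/"             # stage in
    # Placeholder "compute" step: real jobs run their program here.
    ( cd "$work" && tr 'a-z' 'A-Z' < "$(basename "$input")" > result.txt )
    cp "$work/result.txt" "$output"  # stage out BEFORE the job ends
    rm -rf "$work"                   # $TMPDIR is wiped anyway; be tidy
}
```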
oph/cluster/storage.1680793893.txt.gz · Last modified: 2023/04/06 15:11 by marco.baldi5@unibo.it