# HPC Platform
The HPC Platform (HPCP) provides compute, storage, and related services for the HPC community in Switzerland and abroad. Most compute cycles are allocated to the User Lab through peer-reviewed allocation schemes.
## Getting Started

### Getting access
Principal Investigators (PIs) and Deputy PIs can invite users to join their projects using the account and resource management tool.
Once invited to a project, you will receive an email with information on how to create an account and configure multi-factor authentication (MFA).
## Systems
- Daint is a large Grace-Hopper cluster for GPU-enabled workloads.
## File systems and storage
There are three main file systems mounted on the HPCP clusters.
| Type | Mount point | File system |
|---|---|---|
| Home | `/users/$USER` | VAST |
| Scratch | `/capstor/scratch/cscs/$USER` | Capstor |
| Store | `/capstor/store/cscs/<customer>/<project>` | Capstor |
### Home
Every user has a home path (`$HOME`) mounted at `/users/$USER` on the VAST file system.
Home directories have 50 GB of capacity and are intended for keeping configuration files, small software packages, and scripts.
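As a quick illustration of this intended use (the file and directory names below are placeholders, not part of the HPCP setup), keeping scripts under the home path might look like:

```bash
# $HOME is for small, persistent files such as configuration and scripts.
mkdir -p "$HOME/scripts"
cp job_template.sh "$HOME/scripts/"   # job_template.sh is a placeholder name
```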
### Scratch
The Scratch file system is a large, temporary storage system designed for high-performance I/O. It is not backed up.
See the Scratch documentation for more information.
The environment variable `$SCRATCH` points to `/capstor/scratch/cscs/$USER` and can be used as a shortcut to access your scratch folder.
**Scratch cleanup policy**
Files that have not been accessed in 30 days are automatically deleted.
Scratch is not intended for permanent storage: transfer files back to the Store after batch job completion.
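As a sketch of how this is typically put together, the batch script below does all of its heavy I/O under `$SCRATCH` and leaves the transfer to the Store for after the job has finished. It assumes the Slurm workload manager; the job name, time limit, and `./my_app` executable are placeholders:

```bash
#!/bin/bash
#SBATCH --job-name=scratch-io-example   # placeholder job name
#SBATCH --time=01:00:00                 # placeholder time limit

# Do all heavy I/O in a per-run directory on Scratch, which is
# designed for high-performance I/O.
RUN_DIR="$SCRATCH/example_run"
mkdir -p "$RUN_DIR"
cd "$RUN_DIR" || exit 1

srun ./my_app > output.log   # ./my_app is a placeholder executable
```

Results left in `$SCRATCH/example_run` should then be copied to the Store before the 30-day cleanup removes them.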
### Store
The Store (or Project) file system is provided as a space to store datasets, code, or configuration scripts that can be accessed from different clusters. The file system is backed up and there is no automated deletion policy.
The environment variable `$STORE` can be used as a shortcut to access the Store folder of your primary project.
Hard limits on the amount of data and the number of files (inodes) prevent you from writing to the Store once your quotas are exceeded.
You can check how much data and how many inodes you are consuming, along with the respective quotas, by running the `quota` command on a login node.
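For example, on a login node:

```bash
echo "$STORE"   # the Store path of your primary project
quota           # data and inode usage, with the corresponding quotas
```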
**Warning**
It is not recommended to write directly to the `$STORE` path from batch jobs.
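A minimal sketch of the recommended pattern instead (the paths are illustrative): write results to Scratch during the job, then copy them to the Store from a login node once the job has completed.

```bash
# Run on a login node after the batch job has finished; the source and
# destination directories are placeholders.
rsync -av "$SCRATCH/example_run/" "$STORE/example_run_archive/"
```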