
Eiger

Eiger is an Alps cluster that provides compute nodes and file systems designed to meet the needs of CPU-only workloads for the HPC Platform.

Note

This documentation is for the updated cluster Eiger.Alps, reachable at eiger.alps.cscs.ch, which replaced the former Eiger cluster on July 1, 2025.

Important changes from Eiger

The redeployment of eiger.cscs.ch as eiger.alps.cscs.ch has introduced changes that may affect some users.

Breaking changes

Sarus is replaced with the Container Engine

The Sarus container runtime is replaced with the Container Engine.

If you are using Sarus to run containers on Eiger, you will have to rebuild and adapt your containers for the Container Engine.

Cray modules and EasyBuild are no longer supported

The Cray Programming Environment (accessed via the cray module) and the software that CSCS provided through EasyBuild are no longer supported by CSCS.

The same versions of the Cray modules are still available, along with the software that was installed using them; however, they will no longer receive updates or support from CSCS.

You are strongly encouraged to start using uenv to access supported applications and to rebuild your own applications; a short example follows the list below.

  • The versions of the compilers, cray-mpich, Python and libraries provided in uenv are up to date.
  • The scientific application uenvs provide up-to-date versions of the supported applications.
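
As a minimal sketch of the migration path, supported programming environment images can be located and pulled with the uenv command line tool (the image name and version tag below are illustrative):

# search the registry for programming environment images
uenv image find prgenv-gnu

# pull a specific image into your local repository
uenv image pull prgenv-gnu/24.11:v1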

Minor changes

Slurm is updated from version 23.02.6 to 24.05.4.

Cluster specification

Compute nodes

Eiger consists of multicore AMD EPYC Rome compute nodes. Note that the total number of compute nodes available on the system may vary over time; see the Slurm documentation for how to check the current number of nodes.
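
For example, sinfo run from a login node summarizes how many nodes are currently available in each partition:

# per-partition counts of allocated/idle/other/total nodes
sinfo --summarize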

Additionally, there are four login nodes with host names eiger-ln00[1-4].

Storage and file systems

Eiger uses the HPC Platform (HPCP) file systems and storage policies.

Getting started

Logging into Eiger

To connect to Eiger via SSH, first refer to the ssh guide.

~/.ssh/config

Add the following to your SSH configuration to connect directly to Eiger with ssh eiger.alps.

Host eiger.alps
    HostName eiger.alps.cscs.ch
    ProxyJump ela
    User cscsusername
    IdentityFile ~/.ssh/cscs-key
    IdentitiesOnly yes
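
With this entry in place, the jump through ela happens automatically:

ssh eiger.alps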

Software

uenv

CSCS and the user community provide uenv software environments on Eiger.

  • Scientific Applications

    Provide the latest versions of scientific applications, tuned for Eiger, and the tools required to build your own version of the applications.

  • Programming Environments

    Provide compilers, MPI, Python, common libraries and tools used to build your own applications.
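As an illustrative example, a programming environment uenv that has already been pulled can be started interactively and its default view loaded (the image name and view below are examples, not a recommendation):

# start a shell with the uenv mounted and the default view enabled
uenv start prgenv-gnu/24.11:v1 --view=default

# the compilers and MPI provided by the uenv are now on the PATH
mpicc --version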

Containers

Eiger supports container workloads using the Container Engine.

To build images, see the guide to building container images on Alps.

Sarus is not available

A key change with the new Eiger deployment is that the Sarus container runtime is replaced with the Container Engine.

If you are using Sarus to run containers on Eiger, you will have to rebuild and adapt your containers for the Container Engine.
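
As a minimal sketch of the migration, the Container Engine runs images described by a TOML environment definition file (EDF); the image reference and file name below are illustrative:

# create an environment definition file (EDF)
mkdir -p ~/.edf
cat > ~/.edf/ubuntu.toml <<'EOF'
image = "ubuntu:24.04"
EOF

# run a command inside the containerized environment via Slurm
srun --environment=ubuntu cat /etc/os-release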

Cray Modules

Warning

The Cray Programming Environment (CPE), loaded using module load cray, is no longer supported by CSCS.

CSCS will continue to support and update uenv and the Container Engine, and users are encouraged to update their workflows to use these methods at the first opportunity.

The CPE is deprecated and will be removed completely at a future date.

Running jobs on Eiger

Slurm

Eiger uses Slurm as its workload manager to launch and monitor jobs on the compute nodes.

There are multiple Slurm partitions on the system:

  • the debug partition can be used to access a small allocation for up to 30 minutes, for debugging and testing purposes.
  • the prepost partition is meant for small, high-priority allocations of up to 30 minutes, for pre- and post-processing jobs.
  • the normal partition is for all production workloads.
  • the xfer partition is for internal data transfer.
  • the low partition is a low-priority partition, which may be enabled for specific projects at specific times.
Name      Max nodes per job   Time limit
debug     1                   30 minutes
prepost   1                   30 minutes
normal    -                   24 hours
xfer      1                   24 hours
low       -                   24 hours
  • nodes in the normal and debug partitions are not shared
  • nodes in the xfer partition can be shared

See the Slurm documentation for instructions on how to run jobs on the AMD CPU nodes.
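
As a minimal sketch, a batch job for the normal partition could look like the following (the job name, node count and application binary are placeholders; each Eiger node provides 128 cores across two AMD Rome CPUs):

#!/bin/bash
#SBATCH --job-name=example
#SBATCH --partition=normal
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128
#SBATCH --time=01:00:00

# launch one MPI rank per core on both nodes
srun ./my_app

Submit the script with sbatch and monitor it with squeue --me.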

JupyterHub

A JupyterHub service for Eiger is available at https://jupyter-eiger.cscs.ch.

FirecREST

Eiger can also be accessed using FirecREST at the https://api.cscs.ch/hpc/firecrest/v2 API endpoint.
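
As a rough sketch only, assuming a valid access token in $TOKEN and the v2 status endpoint (consult the FirecREST documentation for the authoritative paths and authentication flow):

# query the systems exposed through FirecREST v2 (illustrative path)
curl -H "Authorization: Bearer $TOKEN" \
  https://api.cscs.ch/hpc/firecrest/v2/status/systems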

The FirecREST v1 API is still available, but deprecated.

Maintenance and status

Scheduled maintenance

Wednesday mornings 8:00-12:00 CET are reserved for periodic updates, during which services may be unavailable. If the batch queues must be drained (for redeployment of node images, rebooting of compute nodes, etc.), a Slurm reservation will be in place to prevent jobs from running into the maintenance window.

Exceptional and non-disruptive updates may happen outside this time frame and will be announced via the users mailing list, the CSCS Status Page, and the #eiger channel of the CSCS User Slack.

Change log

2025-06-05 Early access phase

Early access phase is open

2025-05-23 Creation of Eiger on Alps

Eiger is deployed as a vServices-enabled cluster

Known issues