NHR-NORD@Göttingen

“Grete”, named after Grete Hermann (1901-1984), who as a doctoral student of Emmy Noether made fundamental contributions to computer algebra, is a High Performance Computing (HPC) cluster based on graphics processing units (GPUs) and extends the HPC system “Emmy”, particularly in the field of artificial intelligence (AI).

In Göttingen, a strategy of maximum energy efficiency is pursued for the operation of all HPC systems. In the design of the entire system, the direct liquid cooling (DLC) concept, which has already proven itself for the NHR system “Emmy”, is a critical factor in minimizing the power demand of the cooling equipment.

See the Quickstart Guide for all necessary information on how to get access to the system and get started. If you use our systems for your research, please also refer to the corresponding usage guidelines.

System Overview

8 Racks

The nodes of phase 1 are housed together with nodes from phase 2 in 2 racks; the remaining phase 2 nodes are operated in the other 5 racks.

106 Compute Nodes

3 compute nodes belong to phase 1 and are equipped with Intel Xeon Gold 6148 (Skylake-SP) CPUs and Nvidia V100 GPUs with 32 GB VRAM. The other 103 compute nodes of phase 2 are equipped with AMD EPYC 7662 (Zen2) or EPYC 7513 (Zen3) CPUs and Nvidia A100 GPUs with 40 GB or 80 GB VRAM.
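Because the GPU model and VRAM size differ between node variants, it can be useful to check at run time which hardware a job has landed on. The following is a minimal sketch using the standard CUDA runtime API; it is an illustrative example, not part of the official Grete documentation.

```c
/* Minimal sketch: list the GPUs visible to a job and their VRAM size,
 * using the standard CUDA runtime API. Compile with nvcc. */
#include <cuda_runtime.h>
#include <stdio.h>

int main(void) {
    int count = 0;
    if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
        fprintf(stderr, "No CUDA-capable device found\n");
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        /* Prints e.g. the device name and total VRAM, which distinguishes
         * a 40 GB A100 from an 80 GB A100. */
        printf("GPU %d: %s, %.1f GiB VRAM, %d SMs\n",
               i, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0),
               prop.multiProcessorCount);
    }
    return 0;
}
```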

6,840 CPU Cores

These are distributed over phase 1 and phase 2 with 120 and 6,720 cores, respectively.

100 Gbit/s Interconnect and 2 x 200 Gbit/s Interconnect

2 x 200 Gbit/s InfiniBand (Grete phase 2) and 100 Gbit/s InfiniBand (Grete phase 1) interconnects provide low-latency, high-bandwidth connectivity in a fat-tree topology with an 8:1 blocking factor (16 nodes per leaf), supporting transfers between GPUs without passing through host RAM.
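As an illustration of GPU-to-GPU communication that bypasses host memory, the sketch below passes device pointers directly to MPI. It assumes a CUDA-aware MPI installation (the exact modules and build available on Grete may differ); with GPUDirect RDMA, the InfiniBand adapter can then read and write GPU memory directly.

```c
/* Minimal sketch: send data from one GPU to another through a CUDA-aware MPI
 * library. Device pointers are handed straight to MPI_Send/MPI_Recv, so with
 * GPUDirect RDMA the transfer does not need to be staged through host RAM.
 * Run with two ranks, e.g.: mpirun -np 2 ./gpu_sendrecv */
#include <mpi.h>
#include <cuda_runtime.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const size_t n = 1 << 20;                /* 1 Mi floats = 4 MiB */
    float *buf = NULL;
    cudaMalloc((void **)&buf, n * sizeof(float));
    cudaMemset(buf, 0, n * sizeof(float));

    if (rank == 0) {
        MPI_Send(buf, (int)n, MPI_FLOAT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(buf, (int)n, MPI_FLOAT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %zu floats directly into GPU memory\n", n);
    }

    cudaFree(buf);
    MPI_Finalize();
    return 0;
}
```

Whether the direct path is actually used depends on the MPI build and its GPUDirect support; with a non-CUDA-aware MPI, the buffer would first have to be copied to host memory.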

5.46 PetaFlop/s

This result was achieved in the LINPACK benchmark, putting Grete in 142nd place on the TOP500 list and 16th place on the GREEN500 list (both as of November 2023). At launch, Grete was Germany's most energy-efficient supercomputer.

49.7 TB RAM

A total of 49.7 TB of main memory is available across all 106 nodes.

27.5 TB VRAM

A total of 27.5 TB of GPU memory (VRAM) is available across all 106 nodes.

8.4 PiB Storage

A total of 8.4 PiB of storage capacity is available on global parallel file systems, divided into 340 TiB for the GPFS-based Home file system and 8.1 PiB for the Lustre-based Work file system.

6.4 PiB Tape Storage

To archive their results, users have a total of 6.4 PiB of tape storage with a 120 TiB hard disk cache at their disposal.


Node Architectures

Further information about the hardware can be found in the HPC documentation.