Sabine Cluster

Sabine is the latest addition to the RCDC shared campus resource pool, housing public CPU and GPU nodes with shared access to storage resources. It hosts a total of 5,704 CPU cores across 169 compute nodes and 12 GPU nodes.

Sabine is operated under the new RCDC policy. It is housed in the Research Computing Data Core (RCDC) and went into production in mid-January 2018; a second phase was added in September 2018. If you plan to use this system, please make sure your PI has been granted an allocation and that you have requested an account on the system. Please refer to the User Guide for specifics of running jobs, selecting resources, and available software on this cluster.
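The User Guide is the authority on scheduler details; as a rough illustration only, the sketch below assumes the cluster uses Slurm (this page does not name the scheduler), with a hypothetical `batch` partition and a hypothetical executable:

```bash
#!/bin/bash
# Minimal Slurm batch script sketch. Assumes Slurm is the scheduler and
# that a partition named "batch" exists; check the User Guide for the
# actual partition names and any required account/allocation flags.
#SBATCH --job-name=example
#SBATCH --partition=batch
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28   # matches the 28-core E5-2680v4 compute nodes
#SBATCH --time=01:00:00

./my_program                   # hypothetical executable
```

A script like this would be submitted with `sbatch script.sh` and its queue status checked with `squeue`.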

Theoretical peak performance is approximately 600 TFlops.

| Node Type | Model | CPU / GPU Type | Socket Count | Total Cores | Memory | Disk Space | Node Count |
|---|---|---|---|---|---|---|---|
| Login | HPE DL380 | Intel Xeon E5-2680v4 | 2 | 28 | 128 GB | 36 TB | 1 |
| Compute | HPE ProLiant XL170r | Intel Xeon E5-2680v4 | 2 | 28 | 256 GB | 1 TB | 68 |
| Compute | HPE ProLiant XL170r | Intel Xeon E5-2680v4 | 2 | 28 | 128 GB | 1 TB | 48 |
| GPU Accelerator | HPE ProLiant XL190r | Intel Xeon E5-2680v4 / Nvidia P100 | CPU: 2, GPU: 2 | CPU: 28, GPU: 7,168 | CPU: 256 GB, GPU: 32 GB | 1 TB | 8 |
| Compute | HPE ProLiant XL170r | Intel Xeon Gold 6148 | 2 | 40 | 192 GB | 1 TB | 52 |
| GPU Accelerator | HPE ProLiant XL270d | Intel Xeon E5-2680v4 / Nvidia V100 | CPU: 2, GPU: 8 | CPU: 28, GPU: 40,960 | CPU: 256 GB, GPU: 128 GB | 4.8 TB | 4 |
| Large Memory | HPE DL360 | Intel Xeon Gold 6148 | 2 | 40 | 768 GB | 4 TB | 1 |
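
To target the GPU nodes in the table above, schedulers typically require the GPU count in the resource request. The following is a sketch assuming Slurm's generic-resource (GRES) syntax and a hypothetical `gpu` partition; the actual partition and GRES names are site-defined and listed in the User Guide.

```bash
#!/bin/bash
# Sketch of a GPU job request, assuming Slurm GRES syntax and a
# hypothetical "gpu" partition; actual names are site-defined.
#SBATCH --job-name=gpu-example
#SBATCH --partition=gpu
#SBATCH --nodes=1
#SBATCH --gres=gpu:2           # e.g. both P100s on an XL190r node
#SBATCH --time=01:00:00

nvidia-smi                     # report the GPUs allocated to the job
```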

Interconnect: Sabine nodes are connected via an Intel Omni-Path switch with a 100 Gb/s line rate.
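
A fabric at this line rate is what multi-node MPI jobs run over. The sketch below again assumes Slurm, plus an MPI library whose launcher integrates with `srun` (the module name is hypothetical), and spreads one job across four of the 28-core compute nodes:

```bash
#!/bin/bash
# Sketch of a multi-node MPI job over the Omni-Path fabric.
# Assumes Slurm and an srun-integrated MPI; module name is hypothetical.
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=28
#SBATCH --time=02:00:00

module load openmpi            # hypothetical module name
srun ./my_mpi_program          # hypothetical executable
```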

Storage: Sabine has approximately 725 TB of shared NFS storage.