Hardware Overview

Partitions

intel partition

This is the default partition, selected when the -p/--partition argument is not provided to srun or sbatch, and it currently holds the majority of our CPU-only nodes. There are no GPU nodes in the intel partition.
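
For example, this minimal batch script (the job name and time limit are placeholders) runs on the intel partition without naming it; `sbatch --partition=intel job.sh` is equivalent:

```bash
#!/bin/bash
# No --partition flag: Slurm uses the default "intel" partition.
#SBATCH --job-name=cpu-test
#SBATCH --ntasks=1
#SBATCH --time=00:10:00

hostname    # prints the node the job landed on
```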

The CPUs in this partition vary: some nodes have E5-2660s, some E5-2660 v2s, and some E5-2670 v2s. All of the E5-2660 v2 and E5-2670 v2 nodes are connected by a low-latency, high-throughput FDR InfiniBand fabric. The openmpi modules on the cluster are built with UCX and communicate over the InfiniBand fabric when it is available.
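
As a minimal sketch of an MPI job on the InfiniBand-connected nodes (the exact openmpi module name, node/task counts, and program name are placeholders; check `module avail openmpi` for what is installed):

```bash
#!/bin/bash
# Two-node MPI job on the intel partition. Open MPI's UCX layer
# selects the InfiniBand transport automatically when present.
#SBATCH --partition=intel
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=20
#SBATCH --time=01:00:00

module load openmpi      # placeholder: actual module name may differ
srun ./my_mpi_app        # srun launches the MPI ranks under Slurm
```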

These CPUs are aging, but workable throughput can still be achieved by using Job Arrays or MPI to run work in parallel.
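
As a sketch of the Job Array approach (the input naming scheme and program are hypothetical), the script below runs 16 independent copies of the same job:

```bash
#!/bin/bash
# Job array: 16 independent single-core tasks, one per input file.
#SBATCH --partition=intel
#SBATCH --array=1-16
#SBATCH --ntasks=1
#SBATCH --time=01:00:00

# Slurm sets SLURM_ARRAY_TASK_ID to 1..16, one value per task.
./my_program input_${SLURM_ARRAY_TASK_ID}.dat
```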

| Partition | Nodes | Location | CPU | Sockets | Cores | RAM | GPUs/Count | Connectivity |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| intel | 25 | ACF R1 | Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz | 2 | 20 | 125GiB | none/0 | 1 Gigabit Ethernet, FDR InfiniBand (56Gb/s) |
| intel | 20 | ACF R29 | Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz | 2 | 20 | 125GiB | none/0 | 1 Gigabit Ethernet, FDR InfiniBand (56Gb/s) |
| intel | 16 | ACF R4 | Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz | 2 | 20 | 125GiB | none/0 | QDR InfiniBand (40Gb/s), 10 Gigabit Ethernet |
| intel | 11 | ACF R3 | Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz | 2 | 16 | 125GiB | none/0 | 1 Gigabit Ethernet |
| intel | 6 | Upstairs R1 | Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz | 2 | 16 | 62GiB | none/0 | 10 Gigabit Ethernet |
| intel | 6 | Upstairs R1 | Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz | 2 | 16 | 31GiB | none/0 | 10 Gigabit Ethernet |
| intel | 4 | ACF R4 | Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz | 2 | 16 | 62GiB | none/0 | 10 Gigabit Ethernet |
| intel | 4 | ACF R29 | Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz | 2 | 20 | 251GiB | none/0 | 1 Gigabit Ethernet, FDR InfiniBand (56Gb/s) |
| intel | 4 | ACF R3 | Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz | 2 | 16 | 251GiB | none/0 | 1 Gigabit Ethernet |
| intel | 3 | ACF R1 | Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz | 2 | 20 | 251GiB | none/0 | 1 Gigabit Ethernet, FDR InfiniBand (56Gb/s) |
| intel | 2 | Upstairs R1 | Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz | 2 | 16 | 251GiB | none/0 | 10 Gigabit Ethernet |
| intel | 1 | Upstairs R1 | Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz | 2 | 16 | 125GiB | none/0 | 10 Gigabit Ethernet |
| intel | 1 | Upstairs R1 | Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz | 2 | 20 | 125GiB | none/0 | 10 Gigabit Ethernet |

gpu partition

This is where nodes with GPUs reside. Visit Slurm GPU Jobs for more information on how to use GPU nodes. If using OnDemand, visit OnDemand Desktop first.
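
As a minimal sketch (the generic `--gres=gpu:1` form requests any one GPU; type-specific request strings, if used, must match what the scheduler advertises):

```bash
#!/bin/bash
# Request a single GPU on the gpu partition; the program is a placeholder.
#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
#SBATCH --ntasks=1
#SBATCH --time=02:00:00

nvidia-smi       # show which GPU was allocated to the job
./my_gpu_app
```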

| Partition | Nodes | Location | CPU | Sockets | Cores | RAM | GPUs/Count | Connectivity |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| gpu | 2 | ACF R28 | Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz | 1 | 6 | 62GiB | NVIDIA TITAN Xp/2 | 1 Gigabit Ethernet |
| gpu | 2 | ACF R28 | Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 2 | 24 | 125GiB | Tesla P100-PCIE-16GB/4 | 1 Gigabit Ethernet |
| gpu | 1 | ACF R28 | Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz | 1 | 6 | 94GiB | NVIDIA TITAN Xp/1, NVIDIA TITAN RTX/1 | 1 Gigabit Ethernet |
| gpu | 1 | ACF R28 | Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz | 2 | 16 | 125GiB | Tesla V100S-PCIE-32GB/2 | 10 Gigabit Ethernet |
| gpu | 1 | ACF R26 | Intel(R) Xeon(R) Platinum 8260 CPU @ 2.40GHz | 4 | 96 | 1510GiB | NVIDIA A100-PCIE-40GB/2 | 10 Gigabit Ethernet, FDR InfiniBand (56Gb/s) |

bigm partition

This partition contains nodes with 512GB of memory or more. All of these nodes have E5-2670 v2 CPUs and FDR InfiniBand networking.
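
A minimal sketch for a large-memory job (the 400G request is an arbitrary example that fits under the 503GiB node total, and the program is a placeholder):

```bash
#!/bin/bash
# Large-memory job on the bigm partition.
#SBATCH --partition=bigm
#SBATCH --mem=400G
#SBATCH --ntasks=1
#SBATCH --time=04:00:00

./my_large_memory_app    # placeholder program
```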

| Partition | Nodes | Location | CPU | Sockets | Cores | RAM | GPUs/Count | Connectivity |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| bigm | 4 | ACF R1 | Intel(R) Xeon(R) CPU E5-2670 v2 @ 2.50GHz | 2 | 20 | 503GiB | none/0 | 1 Gigabit Ethernet, FDR InfiniBand (56Gb/s) |

ampere partition

This partition contains two Ampere Altra servers, each with a single 80-core Arm Neoverse N1 CPU.
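
Note that these are aarch64 (Arm) nodes, so executables built for the x86-64 partitions will not run here. A minimal sketch:

```bash
#!/bin/bash
# Job on the Arm-based ampere partition; binaries must target aarch64.
#SBATCH --partition=ampere
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=80   # the single Neoverse N1 CPU has 80 cores
#SBATCH --time=01:00:00

uname -m      # prints "aarch64" on these nodes
```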

| Partition | Nodes | Location | CPU | Sockets | Cores | RAM | GPUs/Count | Connectivity |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ampere | 2 | ACF R26 | Neoverse-N1 | 1 | 80 | 125GiB | none/0 | 1 Gigabit Ethernet |

mmicc partition

This partition contains a node with two NVIDIA L40S GPUs and is reserved exclusively for researchers in MMICC.

| Partition | Nodes | Location | CPU | Sockets | Cores | RAM | GPUs/Count | Connectivity |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| mmicc | 1 | ACF R25 | Intel(R) Xeon(R) Gold 6426Y | 2 | 32 | 251GiB | NVIDIA L40S/2 | 10 Gigabit Ethernet, FDR InfiniBand (56Gb/s) |

Using the nodes

Visit OnDemand Desktop or Slurm to learn how to submit a job.
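
As a quick reference (resource values are placeholders), a batch submission and an interactive session look like this:

```bash
# Submit a batch script (see the partition examples above).
sbatch job.sh

# Start an interactive shell on one intel-partition node.
srun --partition=intel --ntasks=1 --time=00:30:00 --pty bash

# Check the status of your jobs.
squeue -u $USER
```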