2016-11-08 (b72d2a9)

9 Hardware

CloudLab can dispatch experiments to any one of several clusters: three that belong to CloudLab itself, plus several more that belong to federated projects.

Additional hardware expansions are planned, and descriptions of them can be found at https://www.cloudlab.us/hardware.php

9.1 CloudLab Utah

The CloudLab cluster at the University of Utah is being built in partnership with HP. It consists of 315 64-bit ARM servers and 270 Intel Xeon-D servers. Each server has 8 cores, for a total of 4,680 cores. The servers are built on HP’s Moonshot platform, and the cluster is housed in the University of Utah’s Downtown Data Center in Salt Lake City.

More technical details can be found at https://www.cloudlab.us/hardware.php#utah

m400: 315 nodes (64-bit ARM)
  CPU:  Eight 64-bit ARMv8 (Atlas/A57) cores at 2.4 GHz (APM X-GENE)
  RAM:  64 GB ECC memory (8x 8 GB DDR3-1600 SO-DIMMs)
  Disk: 120 GB of flash (SATA3 / M.2, Micron M500)
  NIC:  Dual-port Mellanox ConnectX-3 10 Gb NIC (PCIe v3.0, 8 lanes)

m510: 270 nodes (Intel Xeon-D)
  CPU:  Eight-core Intel Xeon D-1548 at 2.0 GHz
  RAM:  64 GB ECC memory (4x 16 GB DDR4-2133 SO-DIMMs)
  Disk: 512 GB NVMe flash storage
  NIC:  Dual-port Mellanox ConnectX-3 10 Gb NIC (PCIe v3.0, 8 lanes)
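Node types like m400 and m510 are what an experimenter names when requesting hardware. As a sketch of how a type ends up in a request, the fragment below hand-builds a minimal GENI v3 request RSpec with only the Python standard library; the client id "node1" and the "raw-pc" sliver type are illustrative assumptions, not values from this chapter:

```python
import xml.etree.ElementTree as ET

NS = "http://www.geni.net/resources/rspec/3"  # GENI RSpec v3 namespace
ET.register_namespace("", NS)

# Request one m400 (64-bit ARM) node; "node1" and "raw-pc" are
# illustrative placeholders, not values taken from this chapter.
rspec = ET.Element("{%s}rspec" % NS, type="request")
node = ET.SubElement(rspec, "{%s}node" % NS, client_id="node1")
ET.SubElement(node, "{%s}hardware_type" % NS, name="m400")
ET.SubElement(node, "{%s}sliver_type" % NS, name="raw-pc")

print(ET.tostring(rspec, encoding="unicode"))
```

In practice the CloudLab portal generates this XML for you; the point is only that the type names in these tables are the identifiers that appear in the request.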

There are 45 nodes in a chassis, and this cluster consists of thirteen chassis. Each chassis has two 45XGc switches; each node is connected to both switches, and each chassis switch has four 40Gbps uplinks, for a total of 320Gbps of uplink capacity from each chassis. One switch is used for control traffic, connecting to the Internet, etc. The other is used to build experiment topologies, and should be used for most experimental purposes.

All chassis are interconnected through a large HP FlexFabric 12910 switch which has full bisection bandwidth internally.
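The uplink arithmetic works out as follows (all figures from the description above; the 13-chassis aggregate is derived, not stated in the text):

```python
# Per-chassis uplink capacity at CloudLab Utah (figures from the text).
switches_per_chassis = 2   # two 45XGc switches per chassis
uplinks_per_switch = 4     # four 40 Gbps uplinks per switch
uplink_gbps = 40

per_chassis = switches_per_chassis * uplinks_per_switch * uplink_gbps
print(per_chassis)                   # 320 (Gbps per chassis)

chassis_count = 13                   # thirteen chassis in the cluster
print(per_chassis * chassis_count)   # 4160 (Gbps, derived aggregate)
```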

We have plans to enable some users to allocate entire chassis; when allocated in this mode, it will be possible to have complete administrator control over the switches in addition to the nodes.

9.2 CloudLab Wisconsin

The CloudLab cluster at the University of Wisconsin is built in partnership with Cisco, Seagate, and HP. The cluster, which is in Madison, Wisconsin, has 270 servers with a total of 5,000 cores connected in a CLOS topology with full bisection bandwidth. It has 1,070 TB of storage, including SSDs on every node.

Note: As of the time of writing, Wisconsin is working on upgrading nodes, and some may still be listed under their old “type”.

More technical details can be found at https://www.cloudlab.us/hardware.php#wisconsin

c220g1: 90 nodes (Haswell, 16 core, 3 disks)
  CPU:  Two Intel E5-2630 v3 8-core CPUs at 2.40 GHz (Haswell w/ EM64T)
  RAM:  128 GB ECC memory (8x 16 GB DDR4 1866 MHz dual rank RDIMMs)
  Disk: Two 1.2 TB 10K RPM 6G SAS SFF HDDs
  Disk: One Intel DC S3500 480 GB 6G SATA SSD
  NIC:  Dual-port Intel X520-DA2 10 Gb NIC (PCIe v3.0, 8 lanes)
  NIC:  Onboard Intel i350 1 Gb

c240g1: 10 nodes (Haswell, 16 core, 14 disks)
  CPU:  Two Intel E5-2630 v3 8-core CPUs at 2.40 GHz (Haswell w/ EM64T)
  RAM:  128 GB ECC memory (8x 16 GB DDR4 1866 MHz dual rank RDIMMs)
  Disk: Two Intel DC S3500 480 GB 6G SATA SSDs
  Disk: Twelve 3 TB 3.5" HDDs donated by Seagate
  NIC:  Dual-port Intel X520-DA2 10 Gb NIC (PCIe v3.0, 8 lanes)
  NIC:  Onboard Intel i350 1 Gb

c220g2: 163 nodes (Haswell, 20 core, 3 disks)
  CPU:  Two Intel E5-2660 v3 10-core CPUs at 2.60 GHz (Haswell EP)
  RAM:  160 GB ECC memory (10x 16 GB DDR4 2133 MHz dual rank RDIMMs)
  Disk: One Intel DC S3500 480 GB 6G SATA SSD
  Disk: Two 1.2 TB 10K RPM 6G SAS SFF HDDs
  NIC:  Dual-port Intel X520 10 Gb NIC (PCIe v3.0, 8 lanes)
  NIC:  Onboard Intel i350 1 Gb

c240g2: 7 nodes (Haswell, 20 core, 14 disks)
  CPU:  Two Intel E5-2660 v3 10-core CPUs at 2.60 GHz (Haswell EP)
  RAM:  160 GB ECC memory (10x 16 GB DDR4 2133 MHz dual rank RDIMMs)
  Disk: Two Intel DC S3500 480 GB 6G SATA SSDs
  Disk: Twelve 3 TB 3.5" HDDs donated by Seagate
  NIC:  Dual-port Intel X520 10 Gb NIC (PCIe v3.0, 8 lanes)
  NIC:  Onboard Intel i350 1 Gb
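The 5,000-core figure in the section introduction can be checked against the per-type counts above:

```python
# (nodes, cores per node) for each Wisconsin type, from the tables above.
types = {
    "c220g1": (90, 16),
    "c240g1": (10, 16),
    "c220g2": (163, 20),
    "c240g2": (7, 20),
}
total_nodes = sum(n for n, _ in types.values())
total_cores = sum(n * c for n, c in types.values())
print(total_nodes, total_cores)  # 270 5000
```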

All nodes are connected to two networks.

The experiment network at Wisconsin is transitioning to HP switches in order to provide OpenFlow 1.3 support.

9.3 CloudLab Clemson

The CloudLab cluster at Clemson University has been built in partnership with Dell. The cluster so far has 186 servers with a total of 4,400 cores, 596 TB of disk space, and 48 TB of RAM. All nodes have 10 Gb Ethernet and QDR InfiniBand. It is located in Clemson, South Carolina.

More technical details can be found at https://www.cloudlab.us/hardware.php#clemson

c8220: 96 nodes (Ivy Bridge, 20 core)
  CPU:  Two Intel E5-2660 v2 10-core CPUs at 2.20 GHz (Ivy Bridge)
  RAM:  256 GB ECC memory (16x 16 GB DDR3 1600 MT/s dual rank RDIMMs)
  Disk: Two 1 TB 7.2K RPM 3G SATA HDDs
  NIC:  Dual-port Intel 10 GbE NIC (PCIe v3.0, 8 lanes)
  NIC:  QLogic QLE 7340 40 Gb/s InfiniBand HCA (PCIe v3.0, 8 lanes)

c8220x: 4 nodes (Ivy Bridge, 20 core, 20 disks)
  CPU:  Two Intel E5-2660 v2 10-core CPUs at 2.20 GHz (Ivy Bridge)
  RAM:  256 GB ECC memory (16x 16 GB DDR3 1600 MT/s dual rank RDIMMs)
  Disk: Eight 1 TB 7.2K RPM 3G SATA HDDs
  Disk: Twelve 4 TB 7.2K RPM 3G SATA HDDs
  NIC:  Dual-port Intel 10 GbE NIC (PCIe v3.0, 8 lanes)
  NIC:  QLogic QLE 7340 40 Gb/s InfiniBand HCA (PCIe v3.0, 8 lanes)

c6320: 84 nodes (Haswell, 28 core)
  CPU:  Two Intel E5-2683 v3 14-core CPUs at 2.00 GHz (Haswell)
  RAM:  256 GB ECC memory
  Disk: Two 1 TB 7.2K RPM 3G SATA HDDs
  NIC:  Dual-port Intel 10 GbE NIC (X520)
  NIC:  QLogic QLE 7340 40 Gb/s InfiniBand HCA (PCIe v3.0, 8 lanes)

c4130: 2 nodes (Haswell, 24 core, two GPUs)
  CPU:  Two Intel E5-2680 v3 12-core processors at 2.50 GHz (Haswell)
  RAM:  256 GB ECC memory
  Disk: Two 1 TB 7.2K RPM 3G SATA HDDs
  GPU:  Two Tesla K40m GPUs
  NIC:  Dual-port Intel 1 GbE NIC (i350)
  NIC:  Dual-port Intel 10 GbE NIC (X710)
  NIC:  QLogic QLE 7340 40 Gb/s InfiniBand HCA (PCIe v3.0, 8 lanes)
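As a cross-check, the per-type counts above reproduce the section’s totals, using 24 cores for the c4130 (two 12-core E5-2680 v3 processors):

```python
# (nodes, cores per node) for each Clemson type, from the tables above.
types = {
    "c8220":  (96, 20),
    "c8220x": (4, 20),
    "c6320":  (84, 28),
    "c4130":  (2, 24),   # two 12-core E5-2680 v3 processors
}
total_nodes = sum(n for n, _ in types.values())
total_cores = sum(n * c for n, c in types.values())
print(total_nodes, total_cores)  # 186 4400
```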

There are three networks at the Clemson site.

9.4 Apt Cluster

This cluster is not owned by CloudLab, but is federated and available to CloudLab users.

The main Apt cluster is housed in the University of Utah’s Downtown Data Center in Salt Lake City, Utah. It contains two classes of nodes:

r320: 128 nodes (Sandy Bridge, 8 cores)
  CPU:   1x Xeon E5-2450 processor (8 cores, 2.1 GHz)
  RAM:   16 GB memory (4 x 2 GB RDIMMs, 1.6 GHz)
  Disks: 4 x 500 GB 7.2K SATA drives (RAID5)
  NIC:   1 GbE dual-port embedded NIC (Broadcom)
  NIC:   1 x Mellanox MX354A dual-port FDR CX3 adapter w/ 1 x QSA adapter

c6220: 64 nodes (Ivy Bridge, 16 cores)
  CPU:   2 x Xeon E5-2650v2 processors (8 cores each, 2.6 GHz)
  RAM:   64 GB memory (8 x 8 GB DDR3 RDIMMs, 1.86 GHz)
  Disks: 2 x 1 TB SATA 3.5" 7.2K RPM hard drives
  NIC:   4 x 1 GbE embedded Ethernet ports (Broadcom)
  NIC:   1 x Intel X520 PCIe dual-port 10 Gb Ethernet NIC
  NIC:   1 x Mellanox FDR CX3 single-port mezzanine card

All nodes are connected to three networks with one interface each.

9.5 IG-DDC Cluster

This cluster is not owned by CloudLab, but is federated and available to CloudLab users.

This small cluster is an InstaGENI Rack housed in the University of Utah’s Downtown Data Center. It has nodes of only a single type:

dl360: 33 nodes (Sandy Bridge, 16 cores)
  CPU:  2x Xeon E5-2450 processors (8 cores each, 2.1 GHz)
  RAM:  48 GB memory (6 x 8 GB RDIMMs, 1.6 GHz)
  Disk: 1 x 1 TB 7.2K SATA drive
  NIC:  1 GbE 4-port embedded NIC

It has two network fabrics.