Tier 2 HPC

Andy Turner, EPCC
26 March 2017
a.turner@epcc.ed.ac.uk

https://aturner-epcc.github.io/hpc-sheffield-2018

Clone/fork this presentation at: https://github.com/aturner-epcc/hpc-sheffield-2018

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License

Slide content is available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

This means you are free to copy and redistribute the material, and to adapt and build on the material, under the following terms: you must give appropriate credit, provide a link to the license and indicate if changes were made. If you adapt or build on the material, you must distribute your work under the same license as the original.
Note that this presentation contains images owned by others. Please seek their permission before reusing these images.

Built using reveal.js

reveal.js is available under the MIT licence

Tier 2 Facilities

Tier 2 national HPC logo EPSRC logo

Nationally accessible, located across the UK

Logos of Tier 2 HPC hosting organisations
Map of Tier 2 HPC hosting organisations

Intel Xeon Clusters

Cirrus@EPCC
10,080 core Broadwell, FDR InfiniBand Hypercube
HPC Midlands+
14,336 core Broadwell, 3:1 blocking EDR InfiniBand (756 core non-blocking)
10 POWER8 nodes, each with 1 TB memory
Cambridge CSD3
24,576 core Skylake, Intel Omnipath
Materials and Molecular Modelling Hub
17,280 core Broadwell, 3:1 blocking Intel Omnipath (864 core non-blocking)

Other architectures

JADE
22 Nvidia DGX-1: 2× Xeon + 8× Nvidia P100 GPGPU
Cambridge CSD3
90 nodes: Xeon + 4× Nvidia P100 GPGPU, EDR InfiniBand
384 Xeon Phi nodes (96 GiB memory per node), Intel Omnipath
50 node Hadoop Cluster
Isambard@GW4
10,000+ ARMv8 cores, Cray Aries Interconnect
Due mid-2018
Cirrus@EPCC
http://www.cirrus.ac.uk
HPC Midlands+
http://www.hpc-midlands-plus.ac.uk
Cambridge CSD3
http://www.csd3.cam.ac.uk
JADE
http://www.jade.ac.uk
Isambard@GW4
http://gw4.ac.uk/isambard
Materials and Molecular Modelling Hub
https://mmmhub.ac.uk
HPC-UK logo

Information on facilities and how to access them

http://www.hpc-uk.ac.uk

Open Source, community-developed resource


https://hpc-uk.github.io/facilities-presentation/

https://youtu.be/YmL5zLw6Sx8

Comparative Benchmarking

System comparison

System        Processor     Cores per node  Total cores  Interconnect
ARCHER        Ivy Bridge    24 (2× 12)      118,080      Aries Dragonfly
Cirrus        Broadwell     36 (2× 18)      10,080       FDR Hypercube
Athena        Broadwell     28 (2× 14)      14,336       EDR Tree
Thomas        Broadwell     24 (2× 12)      17,280       OPA Tree
CSD3-Skylake  Skylake Gold  32 (2× 16)      24,576       OPA Tree

Node comparison

System        Cores@GHz  FP / Gflop/s  …to ARCHER  Memory Channels  Memory BW / GB/s  …to ARCHER
ARCHER        24@2.7     518.4         1.0         8                119.4             1.0
Cirrus        36@2.1     1209.6        2.3         8                153.6             1.3
Athena        28@2.4     1075.2        2.1         8                153.6             1.3
Thomas        24@2.1     806.4         1.6         8                153.6             1.3
CSD3-Skylake  32@2.6     2662.4        5.1         12               238.4             2.0
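The peak figures in this table follow from simple per-node arithmetic: Gflop/s = cores × clock × double-precision flops per cycle, and memory bandwidth = channels × DDR transfer rate × 8 bytes. The sketch below reproduces them; the flops-per-cycle and DDR transfer-rate values are assumptions about these processor generations, not taken from the slides (with DDR4-2400 the CSD3-Skylake bandwidth comes out at 230.4 GB/s rather than the 238.4 GB/s quoted above).

```python
# Sketch of the per-node peak arithmetic behind the table above.
# Assumed double-precision flops per cycle per core:
#   Ivy Bridge (AVX) = 8, Broadwell (AVX2 FMA) = 16, Skylake Gold (AVX-512) = 32.
# Assumed memory: DDR3-1866 for ARCHER, DDR4-2400 for the other systems.
nodes = {
    # name:         (cores, GHz, flops/cycle, channels, MT/s)
    "ARCHER":       (24, 2.7,  8,  8, 1866),
    "Cirrus":       (36, 2.1, 16,  8, 2400),
    "Athena":       (28, 2.4, 16,  8, 2400),
    "Thomas":       (24, 2.1, 16,  8, 2400),
    "CSD3-Skylake": (32, 2.6, 32, 12, 2400),
}

archer_flops = 24 * 2.7 * 8            # ARCHER peak DP Gflop/s per node
archer_bw = 8 * 1866 * 8 / 1000.0      # ARCHER memory bandwidth in GB/s

for name, (cores, ghz, fpc, chans, mts) in nodes.items():
    gflops = cores * ghz * fpc         # peak DP Gflop/s per node
    bw = chans * mts * 8 / 1000.0      # memory bandwidth in GB/s per node
    print(f"{name:13s} {gflops:7.1f} Gflop/s ({gflops / archer_flops:.1f}x)  "
          f"{bw:6.1f} GB/s ({bw / archer_bw:.1f}x)")
```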

CASTEP - Medium

CASTEP medium benchmark performance plot

CASTEP - Medium (1 node)

System        SCF cycle time / s  …to ARCHER  …to ARCHER (FP)  …to ARCHER (Mem)
ARCHER        184.2               1.0         1.0              1.0
Cirrus        102.4               1.8         2.3              1.3
Athena        100.6               1.8         2.1              1.3
Thomas        123.3               1.5         1.6              1.3
CSD3-Skylake  61.3                3.0         5.1              2.0
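As a quick check on the relative-performance column, the values are just time ratios with ARCHER as the baseline (a minimal sketch, assuming the SCF cycle figures are times in seconds, so lower is better):

```python
# Sketch: relative performance = ARCHER SCF cycle time / system SCF cycle time.
scf_time_s = {
    "ARCHER": 184.2,
    "Cirrus": 102.4,
    "Athena": 100.6,
    "Thomas": 123.3,
    "CSD3-Skylake": 61.3,
}

for name, t in scf_time_s.items():
    print(f"{name:13s} {scf_time_s['ARCHER'] / t:.1f}x ARCHER")
```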

GROMACS

GROMACS benchmark performance plot

OpenSBLI (CFD)

OpenSBLI medium benchmark performance plot

Further work required to understand this

HPCC RandomRing

HPCC RandomRing benchmark performance plot

CASTEP - Large

CASTEP large benchmark performance plot

Summary

  • Wide range of HPC resources available
  • Initial benchmarking of Xeon systems undertaken
  • More recent processors generally give improved performance…
  • …but amount of improvement is application-dependent
  • Synthetic benchmark results do not always reflect real performance

Questions?

https://github.com/ARCHER-CSE/archer-benchmarks

UK National HPC Benchmarks (ARCHER White Paper)