Cirrus RSE Update
- Supporting user requests: software installations, technical help, etc.
- Comparative benchmarking across ARCHER and Tier2 systems
- Initial focus on CPU-based systems
- Synthetic benchmarks: HPC Challenge, parallel file system performance (metadata and bandwidth)
- HPC application benchmarks: CASTEP, GROMACS, OpenSBLI (CFD)
- Installation and configuration of ISV software and licences
Benchmarks
- Applications:
  - CASTEP: Al Slab (medium) and DNA (large)
  - CP2K: Hybrid functional, LiH
  - GROMACS: Large biological system
  - OpenSBLI: Taylor-Green vortex
  - OASIS: Coupled Met Office UM and NEMO
- Synthetic:
  - HPC Challenge (floating point, memory, interconnect)
  - benchio (parallel file system bandwidth)
  - mdtest (parallel file system metadata)
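As a rough illustration of how the cross-system comparison of these benchmarks might be normalised, the sketch below divides an ARCHER baseline runtime by each system's runtime for a single benchmark case at a fixed node count. The timings are invented placeholders, not measured results.

```python
# Sketch: normalising benchmark runtimes against an ARCHER baseline.
# The runtimes below are placeholders, not measured results; the real
# comparison uses timings collected from each system's batch runs.

baseline_system = "ARCHER"

# Hypothetical wall-clock times (seconds) for one benchmark case at a
# fixed node count on each system.
runtimes = {
    "ARCHER": 100.0,
    "Cirrus": 90.0,
    "Thomas": 95.0,
    "Athena": 85.0,
    "CSD3": 70.0,
}

baseline = runtimes[baseline_system]
for system, t in sorted(runtimes.items(), key=lambda kv: kv[1]):
    # Performance relative to ARCHER: values above 1.0 mean faster than the baseline.
    relative_perf = baseline / t
    print(f"{system:8s}  runtime = {t:6.1f} s  perf vs {baseline_system} = {relative_perf:4.2f}x")
```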
Systems Included
| System          | Processor                     | Memory         | Interconnect  |
|-----------------|-------------------------------|----------------|---------------|
| ARCHER          | 12-core Xeon v2, 2.7 GHz      | 4-channel DDR3 | Cray Aries    |
| Cirrus@EPCC     | 18-core Xeon v4, 2.1 GHz      | 4-channel DDR4 | FDR hypercube |
| Thomas@MMMHub   | 12-core Xeon v4, 2.1 GHz      | 4-channel DDR4 | OPA           |
| Athena@HPC-Mid+ | 14-core Xeon v4, 2.4 GHz      | 4-channel DDR4 | EDR           |
| CSD3@Cambridge  | 16-core Xeon Skylake, 2.6 GHz | 6-channel DDR4 | OPA           |
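One sanity check on the table above is the theoretical peak memory bandwidth implied by the channel counts. The sketch below does that arithmetic; the DIMM transfer rates are assumptions chosen for illustration, not confirmed configurations of the listed systems.

```python
# Sketch: theoretical peak memory bandwidth per socket from the table above.
# Peak bandwidth = channels * transfer rate (MT/s) * 8 bytes per transfer.
# The DIMM speeds below are assumed for illustration only.

systems = {
    # name: (memory channels per socket, assumed DIMM speed in MT/s)
    "ARCHER (DDR3)":       (4, 1866),
    "Cirrus (DDR4)":       (4, 2133),
    "Thomas (DDR4)":       (4, 2400),
    "Athena (DDR4)":       (4, 2400),
    "CSD3 Skylake (DDR4)": (6, 2666),
}

for name, (channels, mts) in systems.items():
    peak_gbs = channels * mts * 8 / 1000  # GB/s per socket
    print(f"{name:21s} {channels} channels @ {mts} MT/s -> ~{peak_gbs:.0f} GB/s peak per socket")
```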
Benchmark results (performance plots): CASTEP Al Slab (al3x3), GROMACS large, OpenSBLI Taylor-Green vortex
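The result plots named above compare performance across the systems at a range of node counts. A minimal sketch of how such a comparison chart might be produced is given below, using invented placeholder numbers rather than the measured data behind the actual plots.

```python
# Sketch: drawing a per-benchmark, cross-system comparison plot.
# Node counts and performance figures are invented placeholders.
import matplotlib.pyplot as plt

nodes = [1, 2, 4, 8, 16]
# Hypothetical performance (higher is better), one series per system.
perf = {
    "ARCHER": [1.0, 1.9, 3.6, 6.8, 12.5],
    "Cirrus": [1.2, 2.3, 4.3, 8.0, 14.1],
    "CSD3":   [1.6, 3.1, 5.9, 10.8, 18.9],
}

for system, values in perf.items():
    plt.plot(nodes, values, marker="o", label=system)

plt.xlabel("Nodes")
plt.ylabel("Performance (relative to 1 ARCHER node)")
plt.title("GROMACS: large (illustrative data)")
plt.legend()
plt.savefig("gromacs_large_comparison.png")
```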
Future Work
- Include non-CPU systems (GPGPU: JADE, CSD3-GPU; KNL: CSD3-KNL)
- Include ML/DL benchmarks
- Profile benchmarks to gain a better understanding of performance differences
Tier2 RSE Coordination
- EPCC have initiated a regular series of meetings between the Tier2 RSE teams and the ARCHER CSE team
- Initial focus:
- Sharing experiences and approaches
- Sharing introductory HPC training material
- Future plans:
- Group involvement in RSE Leaders workshop submission for RSE2018
- Sharing best practice and experience with DiRAC RSE group
  - Identify opportunities for cross-site collaborations
- Face-to-face day-long workshop (potentially with DiRAC RSE group) for late 2018