add: client nodes specs, authored Jul 10, 2019 by Linh Vu
HPC-IO500-2019-07-08.md
...
@@ -31,6 +31,18 @@ RHEL 7.6, kernel-lt elrepo 4.4.135-1.el7.elrepo.x86_64, Mellanox OFED 4.3-3.0.2.
* On 10 of the 16 SSD OSD nodes
* Each node has 1x NVMe (Optane 900p 480GB) partitioned into 4, each becoming an OSD (so 40 NVMe OSDs in total); one way to provision this is sketched below
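The exact provisioning commands aren't shown on this page; the snippet below is only a minimal sketch of one way to split each Optane into four OSDs with `parted` and `ceph-volume` (the device name and equal partition sizes are assumptions, not taken from this deployment).

```bash
# Hedged sketch: split one Optane 900p (assumed device name /dev/nvme0n1)
# into 4 equal GPT partitions and create one Ceph OSD per partition.
DEV=/dev/nvme0n1

parted -s "$DEV" mklabel gpt
parted -s "$DEV" mkpart osd0 0% 25%
parted -s "$DEV" mkpart osd1 25% 50%
parted -s "$DEV" mkpart osd2 50% 75%
parted -s "$DEV" mkpart osd3 75% 100%

# One OSD per partition; repeated on each of the 10 NVMe OSD nodes
# this gives the 40 NVMe OSDs mentioned above.
for part in "${DEV}"p{1..4}; do
    ceph-volume lvm create --data "$part"
done
```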
### Client nodes:
We use 2, 4, 10 and 32 nodes from our gpgpu cluster (spartan-gpgpu). Each node has the following specs:
* 2x 12-core Xeon v4 2.2GHz
* 128GB of RAM
* 1x 100GbE Mellanox
* 4x Tesla P100s (not actually used in the IO500 benchmarks)
Nodes are spread across as many racks (one switch per rack) as possible, up to 6 racks.
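The job scripts themselves aren't reproduced here; the following is a hedged sketch of what a 10-node client allocation could look like (the partition name, time limit and module names are assumptions, and the spreading across racks would be done by choosing node lists rather than by a Slurm flag).

```bash
#!/bin/bash
# Sketch of a 10-node client run; partition name, time limit and module
# names are assumed, not copied from the real job script.
#SBATCH --partition=gpgpu
#SBATCH --nodes=10
#SBATCH --ntasks-per-node=24   # 2x 12-core Xeon v4 per node
#SBATCH --exclusive
#SBATCH --time=04:00:00

module load gcc openmpi        # assumed module names

cd io-500-dev
./io500.sh                     # the io-500-dev wrapper launches MPI itself
                               # via its io500_mpirun / io500_mpiargs settings
```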
## Compiling IO500
We run IO500 via Slurm on Spartan and compile it using Spartan modules, roughly as sketched below.
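As a rough sketch of that build, assuming the io-500-dev distribution that was current in mid-2019 and generic module names (the real Spartan module names may differ):

```bash
# Hedged sketch of the build: load a toolchain, then let the io-500-dev
# helper fetch and compile the benchmark components.
module load gcc openmpi        # assumed module names

git clone https://github.com/VI4IO/io-500-dev.git
cd io-500-dev
./utilities/prepare.sh         # fetches and builds ior, mdtest and pfind
```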
...