# HPC IO500 2019-07-08 · Changes (resplat-public / devops wiki)
add: client nodes specs (authored 5 years ago by Linh Vu)
Showing 1 changed file: HPC-IO500-2019-07-08.md (12 additions, 0 deletions) @ 419cb21c
@@ -31,6 +31,18 @@ RHEL 7.6, kernel-lt elrepo 4.4.135-1.el7.elrepo.x86_64, Mellanox OFED 4.3-3.0.2.
* On 10 of the 16 SSD OSD nodes
* Each node has 1x NVMe (Optane 900p 480GB) partitioned into 4, each becoming an OSD (so 40 NVMe OSDs in total); see the sketch after this list
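The page doesn't record how the NVMe split was done. As one hedged sketch, ceph-volume's batch mode can carve a single device into multiple OSDs; the device path below is an assumption, not the deployment's actual tooling:

```bash
# Hypothetical example: split one Optane NVMe into 4 Ceph OSDs.
# /dev/nvme0n1 is an assumed device path; the actual deployment
# method for these OSDs isn't documented on this page.
ceph-volume lvm batch --osds-per-device 4 /dev/nvme0n1
```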
### Client nodes:
We use 2, 4, 10 and 32 nodes from our gpgpu cluster (spartan-gpgpu). Each node has the following specs:

* 2x 12-core Xeon v4 2.2GHz
* 128GB of RAM
* 1x 100GbE Mellanox NIC
* 4x Tesla P100s (not actually used in the IO500 benchmarks)

Nodes are spread across as many racks (one switch per rack) as possible, up to 6 racks.
## Compiling IO500
We run IO500 via Spartan's Slurm scheduler and compile it with Spartan's environment modules.
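As an illustrative sketch only: the module names, Slurm parameters, and the 2019-era io-500-dev repository layout below are assumptions, not a record of the actual job scripts.

```bash
# Build IO500 on a login node (module names are assumed,
# not Spartan's actual module tree).
module load gcc openmpi
git clone https://github.com/VI4IO/io-500-dev.git   # 2019-era IO500 repository
cd io-500-dev
./utilities/prepare.sh        # fetches and builds ior, mdtest and pfind

# Submit a run through Slurm; node count varied across
# 2, 4, 10 and 32 in the runs described above.
cat > io500.slurm <<'EOF'
#!/bin/bash
#SBATCH --job-name=io500
#SBATCH --nodes=10
#SBATCH --ntasks-per-node=24   # matches the 2x 12-core client nodes
#SBATCH --time=04:00:00
module load gcc openmpi
./io500.sh                     # configured per the io-500-dev instructions
EOF
sbatch io500.slurm
```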