@@ -128,6 +128,29 @@ We run each test twice, once on NLSAS, once on SSD. The tests are:
| mdtest hard files per task | 500K | 62.5K | 100K | 500K |
| mdtest hard files total | 1000K | 2000K | 16000K | 16000K |
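
For reference, a small sketch (not part of the benchmark scripts; the config names and counts are simply read off the table above) showing how the per-task file counts multiply out to the totals, assuming a name like 10n16t means 10 nodes with 16 MPI tasks per node:

```python
# Sketch only: checks that mdtest-hard "files per task" x MPI ranks
# matches the "files total" row, assuming "<N>n<T>t" = N nodes x T tasks/node.
configs = {
    # name: (nodes, tasks_per_node, mdtest_hard_files_per_task)
    "2n1t":   (2,  1,  500_000),
    "4n8t":   (4,  8,   62_500),
    "10n16t": (10, 16, 100_000),
    "32n1t":  (32, 1,  500_000),
}

for name, (nodes, tpn, per_task) in configs.items():
    ranks = nodes * tpn
    total = ranks * per_task
    print(f"{name}: {ranks} ranks x {per_task:,} files/task = {total:,} files total")
```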
Why these tests?
* 2n1t was chosen to fit a simple MPI job.
* 4n8t was requested by the HPC team to match one of their bigger jobs.
* 10n16t matches certain known IO500 submissions from vendors.
* 32n1t matches some non-IO500 benchmarks run by vendors.
* (1) Hit a Ceph MDS hiccup with a "client failing to release caps" error; the Slurm job was killed because it was taking too long.
* (2) 32n1t on SSD put too high a load on the SSD pool, perhaps because that pool has too few storage nodes relative to the clients, compounded by the mismatch in network speed (100G on the clients vs 25G on storage), and crashed 2 storage nodes. Did not have time to run 32n1t on NLSAS.