@@ -165,8 +165,6 @@ During testing, we observed and also monitored the system loads. Here are some h
The IO easy writes put a lot of load on our NLSAS OSDs and created a storm of slow requests. At worst, every single NLSAS OSD was affected, and the requests piled up like this: `90817 slow requests are blocked > 32 sec`. They cleared up as soon as the test neared its end, though, and caused no lasting harm.
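If you want to keep an eye on this while a benchmark is running, a small polling loop is enough. Here is a minimal sketch, assuming the `ceph` CLI is available on the monitoring host and that slow requests surface in `ceph health detail` with wording like the message above (it varies between releases); the interval and filtering are arbitrary choices:

```python
#!/usr/bin/env python3
"""Log slow-request warnings while a benchmark is running (sketch)."""
import subprocess
import time
from datetime import datetime

INTERVAL = 30  # seconds between samples (arbitrary choice)

while True:
    # `ceph health detail` lists every active health check, including
    # per-OSD slow/blocked request warnings.
    health = subprocess.run(
        ["ceph", "health", "detail"],
        capture_output=True, text=True, check=True,
    ).stdout
    stamp = datetime.now().isoformat(timespec="seconds")
    # Keep only the lines that mention slow or blocked requests.
    for line in health.splitlines():
        if "slow request" in line or "blocked" in line:
            print(f"{stamp} {line}", flush=True)
    time.sleep(INTERVAL)
```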
Although the cluster and the NLSAS pool coped fine, we still do not recommend running big jobs directly on the NLSAS pool: it should be reserved for long-term storage only. Compute jobs belong on faster scratch storage, e.g. SSD or NVMe.
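One practical way to make that split on CephFS is file layouts: once a faster data pool has been added to the filesystem (`ceph fs add_data_pool`), a scratch directory can be pointed at it through the `ceph.dir.layout.pool` xattr. A minimal sketch, with hypothetical pool and directory names:

```python
import os

# Hypothetical names: adjust to your cluster and mount point.
FAST_POOL = "cephfs_nvme_data"          # assumed SSD/NVMe-backed data pool
SCRATCH_DIR = "/cephfs/scratch/job001"  # assumed per-job scratch directory

os.makedirs(SCRATCH_DIR, exist_ok=True)
# New files created below SCRATCH_DIR will be written to FAST_POOL instead
# of the default (NLSAS) data pool; existing files keep their old layout.
os.setxattr(SCRATCH_DIR, "ceph.dir.layout.pool", FAST_POOL.encode())
```

Only files created after the xattr is set land in the fast pool, so the layout has to be in place before the job starts writing.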
### MDS requests
Our MDS nodes were hit really hard during the metadata tests. The 10n16t benchmarks put the biggest load we have ever seen on them, for example:
...
...
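If you want to reproduce this kind of measurement, one way to see the request rate on a single MDS is to sample its perf counters over the admin socket. A minimal sketch, meant to run on the MDS host itself; the daemon name and the exact counter layout are assumptions and can differ between Ceph releases:

```python
#!/usr/bin/env python3
"""Rough requests-per-second sampler for one MDS (sketch)."""
import json
import subprocess
import time

MDS_NAME = "mds.mds01"  # hypothetical daemon name, adjust to your host
INTERVAL = 5            # seconds between samples

def total_requests() -> int:
    # Dump the daemon's perf counters over the local admin socket and
    # pull out the cumulative request count.
    dump = subprocess.run(
        ["ceph", "daemon", MDS_NAME, "perf", "dump"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(dump)["mds"]["request"]

prev = total_requests()
while True:
    time.sleep(INTERVAL)
    cur = total_requests()
    print(f"~{(cur - prev) / INTERVAL:.0f} MDS requests/s", flush=True)
    prev = cur
```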
@@ -180,4 +178,46 @@ Our MDS nodes got hit really hard during the metadata tests. The 10n16t benchmar