Commit 053e6f17 authored by Research Platforms

Update partition specs

parent dfc317f1
Pipeline #2160 failed
-Last Updated - 2018-02-26
+Last Updated - 2019-09-02
This is a collection of sample test jobs for Spartan, written with SLURM for job submission, covering various applications, along with small sample MPI and OpenMP programs.
@@ -11,15 +11,15 @@ Functionally, this works by creating a new user session on a compute node (or in
The compute nodes on Spartan are divided into partitions based on their intended use. This also influences the type of hardware we employ. Each node in a partition has the same number of CPUs and the same amount of RAM. Configurations are as follows:
-cloud - 8 CPU - 64GB RAM - Intended for single node calculations of any type, or loosely coupled multi-node applications. Cloud nodes, 1:1 (non-oversubscribed) vCPUs. No fast interconnect, standard TCP networking.
+cloud - 12 CPU - 100GB RAM - Intended for single node calculations of any type, or loosely coupled multi-node applications. Cloud nodes, 1:1 (non-oversubscribed) vCPUs. No fast interconnect, standard TCP networking.
-physical - 12 CPU - 256GB RAM - Intended for MPI multi-node calculations. Fast interconnect.
+physical - 12-72 CPU - 256GB-1.5TB RAM - Intended for MPI multi-node calculations. Fast interconnect.
bigmem - 32 CPU - 1.5TB RAM - Intended for single node operations that require a large amount of RAM but cannot easily be distributed over multiple nodes.
Note there are several queues that are accessible only to specific project groups:
-gpgpu - 24 CPU - 128GB RAM - 4xP100 GPU - LIEF GPGPU, accessible to specified GPGPU projects only. Please see the GPU directory for details.
+gpgpu - 24 CPU - 127GB RAM - 4xP100 GPU - LIEF GPGPU, accessible to specified GPGPU projects only. Please see the GPU directory for details.
deeplearn - 28 CPU - 256GB RAM - 4xV100 GPU - Intended for use by MSE and associated groups, for neural network development.
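
For illustration, a minimal sbatch script targeting one of the partitions listed above might look like the sketch below; the resource requests, walltime, and program name are placeholder assumptions rather than values taken from this repository.

    #!/bin/bash
    # Hypothetical example: request the cloud partition for a single-node job.
    #SBATCH --partition=cloud
    #SBATCH --nodes=1
    #SBATCH --ntasks=8
    #SBATCH --time=01:00:00

    # ./my_program is a placeholder for your own executable.
    srun ./my_program

Such a script would be submitted with sbatch, and its status in the queue checked with squeue.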