Commit 0026b2b1 authored by root

2020/07/22 update

parent 7584a2f4
#!/bin/bash
# This is an abstract example of things that could go wrong!
#SBATCH --output=/home/example/data/output_%j.out
for file in /home/example/data/*
do
sbatch application ${file}
done
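A common alternative to a submission loop like the one above is a Slurm job array: one script, one sbatch call, many array tasks. A minimal sketch, assuming roughly one hundred input files and that `application` takes a single file argument:
#!/bin/bash
#SBATCH --output=/home/example/data/output_%A_%a.out
# Assumption: about 100 input files; adjust the range to match the actual count.
#SBATCH --array=0-99
FILES=(/home/example/data/*)
application "${FILES[$SLURM_ARRAY_TASK_ID]}"
Submitted with a single sbatch call, this runs one array task per index instead of one separate job submission per file.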
......@@ -15,6 +15,8 @@
# SBATCH --mail-type=ALL
# Load the environment variables
module purge
source /usr/local/module/spartan_old.sh
module load HTSlib/1.9-intel-2018.u4
# Start the tabix binary from htslib
......
......@@ -4,10 +4,14 @@
sinteractive --nodes=1 --ntasks-per-node=2 --time=0:10:0
# Example interactive job that specifies cloud partition with X-windows forwarding, after logging in with secure X-windows forwarding. Note that X-windows forwarding is not highly recommended; try to do compute on Spartan and visualisation locally. However, if one absolutely has to visualise from Spartan, the following can be used.
# Example multi-threaded application. Read file for instructions. Run it single-threaded and multi-threaded.
iterate.c
# Example interactive job with X-windows forwarding, after logging in with secure X-windows forwarding. Note that X-windows forwarding is not highly recommended; try to do compute on Spartan and visualisation locally. However, if one absolutely has to visualise from Spartan, the following can be used.
ssh <username>@spartan.hpc.unimelb.edu.au -X
sinteractive -p cloud --x11=first
sinteractive --x11=first
xclock
# If you are running interactive jobs on GPU partitions you have to include the appropriate QOS commands or account.
......@@ -18,7 +22,6 @@ sinteractive --x11=first --partition=deeplearn --qos=gpgpudeeplearn --gres=gpu:v
sinteractive --partition=gpgpu --account=hpcadmingpgpu --gres=gpu:2
# If the user is not using a Linux local machine they will need to install an X-windows server, such as Xming for MS-Windows or X11 on Mac OSX from the XQuartz project.
# If you need to download files whilst on an interactive job you must use the University proxy.
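A hedged sketch of going through a proxy from inside an interactive job; the proxy host and port below are placeholders, not Spartan's actual values:
# Substitute the University proxy host and port for the placeholders.
export http_proxy=http://proxy.example.edu.au:8000
export https_proxy=$http_proxy
wget https://example.org/dataset.tar.gz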
......
The file `gattaca.txt` is used for diff examples in the Introductory course and for regular expressions in the Intermediate course.
The file `default.slurm` uses all the default values for slurm on this system; cloud partition, one node, one task, one cpu-per-task, no mail, jobid as job name, ten minute walltime, etc.
The file `default.slurm` uses all the default values for slurm on this system; physical partition, one node, one task, one cpu-per-task, no mail, jobid as job name, ten minute walltime, etc. It has no specific Slurm directives other than the default!
The file `specific.slurm` runs on a specific node. The list may be specified as a comma-separated list of hosts, a range of hosts (host[1-5,7,...] for example), or a filename.
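As a hedged illustration of those three forms (the node names and file path are placeholders):
#SBATCH --nodelist=spartan-bm001,spartan-bm003     # comma-separated list of hosts
#SBATCH --nodelist=spartan-bm[001-005,007]         # range of hosts
#SBATCH --nodelist=/home/example/nodes.txt         # file containing a list of node names
Only one of these would be used in a given script.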
......
#!/bin/bash
#SBATCH --partition=physical
# SBATCH --partition=physical
#SBATCH --constraint=physg4
#SBATCH --ntasks=72
# Load modules, show commands etc
......
......@@ -11,7 +11,7 @@ touch * # What are you thinking?!
rm * # Really?! You want to remove all files in your directory?
rm '*' # Safer, but shouldn't have been created in the first place.
# Best to keep to plain, old fashioned, alphanumerics. CamelCase is helpful.
# Best to keep to plain, old fashioned, alphanumerics. Snake_case or CamelCase is helpful.
touch "This_is_a_long_filename"
touch "ThisIsALongFilename
......@@ -2,11 +2,14 @@ The following are some sinfo examples that you might find useful on Spartan.
`sinfo -s`
Provides summary information about the system's partitions: the partition name, whether the partition is available, walltime limits, node information (allocated, idle, out, total), and the nodelist.
Provides summary information about the system's partitions: the partition name, whether the partition is available, walltime limits,
node information (allocated, idle, out, total), and the nodelist.
`sinfo -p $partition`
Provides information about the particular partition specified. Breaks the sinfo output for that partition into node states (drain, drng, mix, alloc, idle) and the nodes in that state. `Drain` means that the node is marked for maintenance, and whilst existing jobs will run, it will not accept new jobs.
Provides information about the particular partition specified. Breaks the sinfo output for that partition into node states (drain, drng,
mix, alloc, idle) and the nodes in that state. `Drain` means that the node is marked for maintenance, and whilst existing jobs will
run, it will not accept new jobs.
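For instance, using partition names that appear elsewhere in these examples:
sinfo -p physical
sinfo -p cloud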
`sinfo -a`
......@@ -14,6 +17,7 @@ Similar to `sinfo -p` but for all partitions.
`sinfo -n $nodes -p $partition`
Print information only for specified nodes in specified partition; can use comma-separated values or range expression e.g., `sinfo -n spartan-rc[001-010] -p cloud`.
Print information only for specified nodes in specified partition; can use comma-separated values or range expression e.g., `sinfo
-n spartan-bm[001-010]`.
#!/bin/bash
#SBATCH --partition=cloud
#SBATCH --ntasks=1
#SBATCH --nodelist=spartan-rc005
#SBATCH --nodelist=spartan-bm005
# Alternative to exclude specific nodes.
# SBATCH --exclude=spartan-rc005
# SBATCH --exclude=spartan-bm005
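# Record the node, job name, and job id so it is easy to see where each job ran.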
echo $(hostname) $SLURM_JOB_NAME running $SLURM_JOBID >> hostname.txt
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=JAGS-test.slurm
#SBATCH -p cloud
# Run on four CPUs
#SBATCH --ntasks=4
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 1:00:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu.au
# SBATCH --mail-type=ALL
# Load the environment variables
module load JAGS/4.3.0-intel-2017.u2
# Extract the classic BUGS examples
tar xzvf classic-bugs.tar.gz
sleep 240
cd classic-bugs/vol1
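# Run the checks with four parallel make jobs, matching the four tasks requested above.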
make -j4 check
cd ../vol2
make -j4 check
#!/bin/bash
#SBATCH -p cloud
#SBATCH --ntasks=1
module load Julia/0.6.0-binary
julia simple.jl
......@@ -21,8 +21,10 @@ function quadratic2(a::Float64, b::Float64, c::Float64)
end
vol = sphere_vol(3)
# @printf allows number formatting but does not automatically append the \n to statements, see below
@printf "volume = %0.3f\n" vol
# @printf "volume = %0.3f\n" vol
# @printf deprecated, removed from example, 202007LL
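# A possible replacement (an assumption, not from the original example):
# println("volume = ", round(vol; digits=3))  # Julia >= 0.7 syntax; on Julia 0.6 use round(vol, 3)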
quad1, quad2 = quadratic2(2.0, -2.0, -12.0)
println("result 1: ", quad1)
......
......@@ -8,7 +8,7 @@ unset I_MPI_PMI_LIBRARY
In order to use LAMMPS with the GPU module enabled you need to use the -sf and -pk flags, as per the following command:
mpiexec -np 2 lmp_mpi -sf gpu -pk gpu 1 -in <in.input>
srun -n 2 lmp_mpi -sf gpu -pk gpu 1 -in <in.input>
The number after the -pk flag indicates the number of GPU instances you are requesting, and it should line up with the number requested in your `--gres=gpu` request. For example, a slurm script with the following line:
......
......@@ -2,8 +2,8 @@ It is not highly recommended, but if a user wants to do X-Windows forwarding wit
If the user is not using a Linux local machine they will need to install an X-windows server, such as Xming for MS-Windows or X11 on Mac OSX from the XQuartz project.
ssh <username>@spartan.hpc.unimelb.edu.au -Y
sinteractive -p cloud --x11=first
ssh <username>@spartan.hpc.unimelb.edu.au -X
sinteractive --x11=first
module load MATLAB/2017a
matlab
......
#!/bin/bash
#SBATCH -p physicaltest
#SBATCH --ntasks=1
module load MATLAB
module purge
source /usr/local/module/spartan_old.sh
module load MATLAB/2016a
matlab -nodesktop -nodisplay -nosplash < mypi.m
#!/bin/bash
#SBATCH -p physical
#SBATCH --ntasks=8
module load MATLAB
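# Compare wall-clock times of the serial (tictoc.m, plain for loop) and parallel (tictoc-p.m, parfor) versions.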
time matlab -nodesktop -nodisplay -nosplash < tictoc.m
time matlab -nodesktop -nodisplay -nosplash < tictoc-p.m
#!/bin/bash
#SBATCH -p cloud
#SBATCH --ntasks=1
module load MATLAB/2016a
matlab -nodesktop -nodisplay -nosplash < polar-plot.m
tic
n = 200;
A = 500;
n = 400;
A = 1000;
a = zeros(n);
parfor i = 1:n
a(i) = max(abs(eig(rand(A))));
......
tic
n = 200;
A = 500;
n = 400;
A = 1000;
a = zeros(n);
for i = 1:n
a(i) = max(abs(eig(rand(A))));
......
#!/bin/bash
# Name and partition
#SBATCH --job-name=Mathematica-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu.au
# SBATCH --mail-type=ALL
# Load the environment variables
module load Mathematica/12.0.0
# Read and evaluate the .m file
# Example derived from: https://pages.uoregon.edu/noeckel/Mathematica.html
math -noprompt -run "<<test.m" > output.txt
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=MrBayes-test.slurm
#SBATCH -p cloud
# Run on 1 core
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu.au
# SBATCH --mail-type=ALL
# Load the environment variables
module load MrBayes/3.2.6-intel-2016.u3
mb Dengue4.env.xml