Current Error
=============
so: undefined symbol: __intel_avx_rep_memcpy
This symbol comes from the Intel compiler runtime; the error usually means a shared library built with the Intel compilers was loaded without the matching intel toolchain module, so the runtime library is not on the library path.
Instructions
============
Abaqus example modified from Lev Lafayette, "Supercomputing with Linux", Victorian Partnership for Advanced Computing, 2015.
The Abaqus FEA suite is commonly used for automotive engineering problems, combining a common model data structure with integrated solver technology. As licensed software it requires a number of license tokens based on the number of cores requested, which can be calculated by the simple formula int(5 x N^0.422), where N is the number of cores. Device Analytics offers an online calculator at http://deviceanalytics.com/abaqus-token-calculator .
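The token formula is easy to sanity-check from the shell; a small illustrative sketch using awk (not part of any job script):

```shell
# Abaqus license tokens: int(5 * N^0.422), N = number of cores.
for N in 1 2 4 8 16; do
    tokens=$(awk -v n="$N" 'BEGIN { printf "%d", 5 * n^0.422 }')
    echo "cores=$N tokens=$tokens"
done
# e.g. 8 cores require 12 tokens
```

Note the sub-linear growth: sixteen cores need only about three times the tokens of one core.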
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=0:05:00
#SBATCH --gres=abaqus+5
module load ABAQUS/6.14.2-linux-x86_64
# Run the job 'Door'
abaqus job=Door
#!/bin/bash
#SBATCH --partition=physical
#SBATCH --time=1:00:00
#SBATCH --ntasks=8
module load ABINIT/8.0.8b-intel-2016.u3
abinit < tbase1_x.files >& log
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=ABRicate-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load ABRicate/0.8.7-spartan_intel-2017.u2
# The command to actually run the job
abricate ecoli_rel606.fasta
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=ABySS-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load ABySS/2.0.2-goolf-2015a
# Assemble a small synthetic data set
tar xzvf test-data.tar.gz
sleep 20
abyss-pe k=25 name=test in='test-data/reads1.fastq test-data/reads2.fastq'
# Calculate assembly contiguity statistics
abyss-fac test-unitigs.fa
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=ADMIXTURE-test.slurm
#SBATCH -p cloud
# Run with two threads
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load ADMIXTURE/1.3.0
# Untar sample files, run application
# See admixture --help for options.
tar xvf hapmap3-files.tar.gz
admixture -j2 hapmap3.bed 1
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=AFNI-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load AFNI/linux_openmp_64-spartan_intel-2017.u2-20190219
# Untar dataset and run script
tar xvf ARzs_data.tgz
./@ARzs_analyze
#!/bin/bash
# SBATCH --account=punim0396
# SBATCH --partition=punim0396
#SBATCH --job-name="ANSYS test"
#SBATCH --partition=physical-cx4
#SBATCH --ntasks=1
#SBATCH --time=0-00:10:00
#SBATCH --gres=aa_r+1%aa_r_hpc+12
module load ansys/19.0-intel-2017.u2
ansys19 -b < OscillatingPlate.inp > OscillatingPlate.db
#!/bin/bash
# Job name and partition
#SBATCH --job-name=ARAGORN-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load ARAGORN/1.2.36-GCC-4.9.2
# Run the application
aragorn -o results sample.fa
#!/bin/bash
# Add your project account details here.
# SBATCH --account=XXXX
#SBATCH --partition=gpgpu
#SBATCH --ntasks=4
#SBATCH --time=1:00:00
module load Amber/16-gompi-2017b-CUDA-mpi
mpiexec /usr/local/easybuild/software/Amber/16-gompi-2017b-CUDA-mpi/amber16/bin/pmemd.cuda_DPFP.MPI -O -i mdin -o mdout -inf mdinfo -x mdcrd -r restrt
#!/bin/bash
# Job name and partition
#SBATCH --job-name=BAMM-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Speciation-extinction analyses
# You must have an ultrametric phylogenetic tree.
# Load the environment variables
module load BAMM/2.5.0-spartan_intel-2017.u2
# Example from: `http://bamm-project.org/quickstart.html`
# To run bamm you must specify a control file.
# The following is for diversification.
# You may wish to use traits instead
# bamm -c template_trait.txt
bamm -c template_diversification.txt
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=BBMap-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load BBMap/36.62-intel-2016.u3-Java-1.8.0_71
# See examples at:
# http://seqanswers.com/forums/showthread.php?t=58221
reformat.sh in=sample1.fq out=processed.fq
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=BEDTools-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load BEDTools/2.28.0-spartan_intel-2017.u2
# BEDTools has an extensive test suite, but the tests assume the wrong
# location for the application!
# So all these tests need to be modified to include:
# BT=$(which bedtools)
cp -r /usr/local/easybuild/software/BEDTools/2.27.1-intel-2017.u2/test/* .
find ./ -type f -exec sed -i -e 's/${BT-..\/..\/bin\/bedtools}/$(which bedtools)/g' {} \;
sh test.sh
# Specific example commands available here:
# https://bedtools.readthedocs.io/en/latest/content/example-usage.html#bedtools-intersect
#!/bin/bash
# Set the partition
#SBATCH -p cloud
# Set the number of processors that will be used.
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
# Set the walltime (10 hrs)
#SBATCH --time=10:00:00
# Load the environment variables
module load BLAST/2.2.26-Linux_x86_64
# Run the job
blastall -i ./rat-ests/rn_est -d ./dbs/rat.1.rna.fna -p blastn -e 0.05 -v 5 -b 5 -T F -m 9 -o rat_blast_tab.txt -a 8
#!/bin/bash
#SBATCH --partition=cloud
#SBATCH --time=2:00:00
#SBATCH --ntasks=1
mkdir -p data/ref_genome
curl -L -o data/ref_genome/ecoli_rel606.fasta.gz ftp://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/000/017/985/GCA_000017985.1_ASM1798v1/GCA_000017985.1_ASM1798v1_genomic.fna.gz
sleep 30
gunzip data/ref_genome/ecoli_rel606.fasta.gz
curl -L -o sub.tar.gz https://downloader.figshare.com/files/14418248
sleep 60
tar xvf sub.tar.gz
mv sub/ data/trimmed_fastq_small
mkdir -p results/sam results/bam results/bcf results/vcf
module load BWA/0.7.17-intel-2017.u2
bwa index data/ref_genome/ecoli_rel606.fasta
bwa mem data/ref_genome/ecoli_rel606.fasta data/trimmed_fastq_small/SRR2584866_1.trim.sub.fastq data/trimmed_fastq_small/SRR2584866_2.trim.sub.fastq > results/sam/SRR2584866.aligned.sam
samtools view -S -b results/sam/SRR2584866.aligned.sam > results/bam/SRR2584866.aligned.bam
samtools sort -o results/bam/SRR2584866.aligned.sorted.bam results/bam/SRR2584866.aligned.bam
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=BEAST-test.slurm
#SBATCH -p cloud
# Run on 4 cores
#SBATCH --ntasks=4
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load Beast/2.3.1-intel-2016.u3
beast Dengue4.env.xml
#!/bin/bash
#SBATCH --time=24:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
# You might need an external license file
# export LM_LICENSE_FILE=port@licenseserver
module load COMSOL/5.2
# Example batch command from csiro.org.au
comsol batch -inputfile mymodel.mph -outputfile mymodelresult.mph -batchlog mybatch.log -j b1 -np 8 -mpmode owner
#!/bin/bash
# Name and Partition
#SBATCH --job-name=CPMD-test.slurm
#SBATCH -p cloud
# Run on two cores
#SBATCH --ntasks=2
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load CPMD/4.3-intel-2018.u4
# Example taken from Axek Kohlmeyer's classic tutorial
# http://www.theochem.ruhr-uni-bochum.de/~legacy.akohlmey/cpmd-tutor/index.html
mpiexec -np 2 cpmd.x 1-h2-wave.inp > 1-h2-wave.out
#!/bin/bash
#SBATCH --job-name=Cufflinks-test.slurm
#SBATCH -p cloud
# Multicore: one task with two threads
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 1:00:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module purge
module load Cufflinks/2.2.1-GCC-4.9.2
# Set the Cufflinks environment
CUFFLINKS_OUTPUT="${PWD}"
cufflinks --quiet --num-threads ${SLURM_CPUS_PER_TASK:-2} --output-dir $CUFFLINKS_OUTPUT sample.bam
#!/bin/bash
# Name and Partition
#SBATCH --job-name=Delft3D-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 1:00:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# Load the environment variables
module purge
source /usr/local/module/spartan_old.sh
module load Delft3D/7545-intel-2016.u3
./run_all_examples.sh
#!/bin/bash
#SBATCH --job-name FDS_example_job
#How many nodes/cores? FDS is MPI enabled and can operate across multiple nodes
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#What is the maximum time this job is expected to take? (Walltime)
#Format: Days-Hours:Minutes:Seconds
#SBATCH --time=2-00:00:00
module load FDS
# FDS takes a single input file; output file names are derived from the
# CHID set in the input file.
srun fds inputfile.fds
#!/bin/bash
# Name and partition
#SBATCH --job-name=FFTW-test.slurm
#SBATCH -p cloud
# Run on a single CPU. FFTW can also be run with MPI if you have a big problem.
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load FFTW/3.3.6-gompi-2017b
# Compile and execute
g++ fftw_example.c -o fftw_example -lfftw3
./fftw_example > results.txt
# Example from : https://github.com/undees/fftw-example
#!/bin/bash
#SBATCH -p cloud
#SBATCH --ntasks=1
#SBATCH -t 0:15:00
module load FSL/5.0.9-centos6_64
# FSL needs to be sourced
source $FSLDIR/etc/fslconf/fsl.sh
srun bet /usr/local/common/FSL/intro/structural.nii.gz test1FSL -f 0.1
#!/bin/bash
#SBATCH -p cloud
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH -t 0:00:05
module load FSL/5.0.9-centos6_64
# FSL needs to be sourced
source $FSLDIR/etc/fslconf/fsl.sh
time bet /usr/local/common/FSL/intro/structural.nii.gz test1FSL -f 0.1
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=FreePascal-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# SBATCH --mail-type=ALL
# Load the environment variables
module purge
source /usr/local/module/spartan_old.sh
module load fpc/3.0.4
We have a FreePascal compiler on Spartan!
module purge
source /usr/local/module/spartan_old.sh
module load fpc/3.0.4
However, you will also need an fpc.cfg and an fp.cfg for the command-line and GUI IDEs respectively. These include PATHs to the various units etc.
These are all included in this directory, along with a simple "Hello World" program.
Compile with
`fpc hello.pas`
#!/bin/bash
#SBATCH -p cloud
#SBATCH --ntasks=1
#SBATCH -t 00:15:00
We don't have the complete set of libraries installed (yet) on the compute nodes.
The following is an example session for visualisation.
[lev@cricetomys HPCshells]$ ssh spartan -X
..
[lev@spartan ~]$ module load FreeSurfer/6.0.0-GCC-4.9.2-centos6_x86_64
[lev@spartan ~]$ module load X11/20160819-GCC-4.9.2
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=GAMESS-test.slurm
#SBATCH -p cloud
# Run on 1 core
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# GAMESS likes memory!
#SBATCH --mem=64G
# Load the environment variables
module load GAMESS-US/20160708-GCC-4.9.2
rungms exam01.inp
# This is a directory for memory and program debugging and profiling.
# Launch an interactive job for these examples!
sinteractive --partition=physical --ntasks=2 --time=1:00:00
# Valgrind
# The test program valgrindtest.c is from Punit Guron. In this example the memory allocated to the pointer 'ptr' is never freed in the program.
# Load the module and compile with debugging symbols.
module load Valgrind/3.13.0-goolf-2015a
gcc -Wall -g valgrindtest.c -o valgrindtest
valgrind --leak-check=full ./valgrindtest 2> valgrind.out
# GDB
# Compile with debugging symbols. A good compiler will give a warning here. Then run the program.
gcc -Wall -g gdbtest.c -o gdbtest
$ ./gdbtest
Enter the number: 3
The factorial of 3 is 0
# Load the GDB module e.g.,
module load GDB/7.8.2-goolf-2015a
# Launch GDB, set up a break point in the code, and execute
gdb gdbtest
..
(gdb) break 10
(gdb) run
(gdb) print j
# Basic commands in GDB
# run = run the program until the end, a signal, or a breakpoint. Use Ctrl-C to stop
# break = set a breakpoint, by line number, function name etc. (shortcut b)
# list = list the code above and below where the program stopped (shortcut l)
# continue = resume execution of the program where it stopped (shortcut c)
# print = print a variable (shortcut p)
# next, step = after a signal or breakpoint, use next and step to
# continue the program line-by-line.
# NB: next will go 'over' a function call to the next line of code,
# step will go 'into' the function call (shortcut s)
#
# Variables can be temporarily modified with the `set` command
# e.g., set j=1
# The code will hit the breakpoint where you can interrogate the variables.
# Testing the variable 'j' will show it has not been initialised.
# Create a new file, initialise j to 1, and test again.
cp gdbtest.c gdbtest2.c
gcc -Wall -g gdbtest2.c -o gdbtest2
$ ./gdbtest2
# There is still another bug! Can you find it? Use GDB to help.
# Once you have fixed the second bug, use diff and patch to fix the original.
# The -u option provides unified content for both files.
diff -u gdbtest.c gdbtest2.c > gdbpatch.patch
# The patch command applies the changes recorded in the patch file
# to the original source. Test the original again!
patch gdbtest.c gdbpatch.patch
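The diff/patch round trip above can be tried on any pair of files; a minimal self-contained sketch (file names hypothetical):

```shell
# Two versions of a source line: the original and the fixed copy.
printf 'int j;\n'     > orig.c
printf 'int j = 1;\n' > fixed.c
# Record the difference in unified format; diff exits 1 when files differ.
diff -u orig.c fixed.c > fix.patch || true
# Apply the recorded changes to the original.
patch orig.c fix.patch
cmp -s orig.c fixed.c && echo "files now identical"
```

After patching, orig.c matches fixed.c exactly, which is why testing the original again is a sensible last step.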
# For Gprof, instrumentation code is inserted with the `-pg` option when
# compiled.
#
# GPROF output consists of two parts; the flat profile and the call graph.
# The flat profile gives the total execution time spent in each function.
# The textual call graph, shows for each function;
# (a) who called it (parent) and (b) who it called (child subroutines).
#
# Sample program from Himanshu Arora, published on The Geek Stuff.
# Compile, run the executable.
# Run the gprof tool. Various output options are available.
gcc -Wall -pg test_gprof.c test_gprof_new.c -o test_gprof
./test_gprof
gprof test_gprof gmon.out > analysis.txt
# For parallel applications each parallel process can be given its own
# output file by setting the undocumented environment variable GMON_OUT_PREFIX.
# Then run the parallel application as normal.
# Each process will create its own gmon.out.<pid> profile.
# View all the gmon.out files as one:
export GMON_OUT_PREFIX=gmon.out
mpicc -Wall -pg mpi-debug.c -o mpi-debug
srun -n2 mpi-debug
gprof mpi-debug gmon.out.*
# Last update 20190416 LL
#!/bin/bash
#SBATCH --job-name="Gaussian Test"
#SBATCH --ntasks=1