Commit 0026b2b1 authored by root

2020/07/22 update

parent 7584a2f4
Current Error
=============
so: undefined symbol: __intel_avx_rep_memcpy
Instructions
============
Abaqus example modified from Lev Lafayette, "Supercomputing with Linux", Victorian Partnership for Advanced Computing, 2015
The Abaqus FEA suite is commonly used in automotive engineering problems, using a common model data structure and integrated solver technology. As licensed software it requires a number of license tokens based on the number of cores requested, which can be calculated by the simple formula int(5 x N^0.422), where N is the number of cores. Device Analytics offers an online calculator at http://deviceanalytics.com/abaqus-token-calculator
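The token formula can be sanity-checked from the shell; a quick sketch using awk (the core counts below are illustrative):

```shell
#!/bin/bash
# Tokens required for N cores under the formula int(5 * N^0.422).
for cores in 1 4 8 16; do
    tokens=$(awk -v n="$cores" 'BEGIN { printf "%d", int(5 * n^0.422) }')
    echo "${cores} cores -> ${tokens} tokens"
done
```

For example, an 8-core job needs 12 tokens under this formula.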
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --time=0:05:00
#SBATCH --gres=abaqus+5
module load ABAQUS/6.14.2-linux-x86_64
# Run the job 'Door'
abaqus job=Door
#!/bin/bash
#SBATCH --partition=physical
#SBATCH --time=1:00:00
#SBATCH --ntasks=8
module load ABINIT/8.0.8b-intel-2016.u3
abinit < tbase1_x.files >& log
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=ABRicate-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load ABRicate/0.8.7-spartan_intel-2017.u2
# The command to actually run the job
abricate ecoli_rel606.fasta
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=ABySS-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load ABySS/2.0.2-goolf-2015a
# Assemble a small synthetic data set
tar xzvf test-data.tar.gz
sleep 20
abyss-pe k=25 name=test in='test-data/reads1.fastq test-data/reads2.fastq'
# Calculate assembly contiguity statistics
abyss-fac test-unitigs.fa
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=ADMIXTURE-test.slurm
#SBATCH -p cloud
# Run with two threads
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load ADMIXTURE/1.3.0
# Untar sample files, run application
# See admixture --help for options.
tar xvf hapmap3-files.tar.gz
admixture -j2 hapmap3.bed 1
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=AFNI-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load AFNI/linux_openmp_64-spartan_intel-2017.u2-20190219
# Untar dataset and run script
tar xvf ARzs_data.tgz
./@ARzs_analyze
#!/bin/bash
# SBATCH --account=punim0396
# SBATCH --partition=punim0396
#SBATCH --job-name="ANSYS test"
#SBATCH --partition=physical-cx4
#SBATCH --ntasks=1
#SBATCH --time=0-00:10:00
#SBATCH --gres=aa_r+1%aa_r_hpc+12
module load ansys/19.0-intel-2017.u2
ansys19 -b < OscillatingPlate.inp > OscillatingPlate.db
#!/bin/bash
# Job name and partition
#SBATCH --job-name=ARAGORN-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load ARAGORN/1.2.36-GCC-4.9.2
# Run the application
aragorn -o results sample.fa
#!/bin/bash
# Add your project account details here.
# SBATCH --account=XXXX
#SBATCH --partition=gpgpu
#SBATCH --ntasks=4
#SBATCH --time=1:00:00
module load Amber/16-gompi-2017b-CUDA-mpi
mpiexec /usr/local/easybuild/software/Amber/16-gompi-2017b-CUDA-mpi/amber16/bin/pmemd.cuda_DPFP.MPI -O -i mdin -o mdout -inf mdinfo -x mdcrd -r restrt
#!/bin/bash
# Job name and partition
#SBATCH --job-name=BAMM-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Speciation-extinction analyses
# You must have an ultrametric phylogenetic tree.
# Load the environment variables
module load BAMM/2.5.0-spartan_intel-2017.u2
# Example from: `http://bamm-project.org/quickstart.html`
# To run bamm you must specify a control file.
# The following is for diversification.
# You may wish to use traits instead
# bamm -c template_trait.txt
bamm -c template_diversification.txt
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=BBMap-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load BBMap/36.62-intel-2016.u3-Java-1.8.0_71
# See examples at:
# http://seqanswers.com/forums/showthread.php?t=58221
reformat.sh in=sample1.fq out=processed.fq
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=BEDTools-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load BEDTools/2.28.0-spartan_intel-2017.u2
# BEDTools has an extensive test suite, but the tests assume the wrong
# location for the application!
# So all these tests need to be modified to include:
# BT=$(which bedtools)
cp -r /usr/local/easybuild/software/BEDTools/2.27.1-intel-2017.u2/test/* .
find ./ -type f -exec sed -i -e 's/${BT-..\/..\/bin\/bedtools}/$(which bedtools)/g' {} \;
sh test.sh
# Specific example commands available here:
# https://bedtools.readthedocs.io/en/latest/content/example-usage.html#bedtools-intersect
#!/bin/bash
# Set the partition
#SBATCH -p cloud
# Set the number of processors that will be used.
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
# Set the walltime (10 hrs)
#SBATCH --time=10:00:00
# Load the environment variables
module load BLAST/2.2.26-Linux_x86_64
# Run the job
blastall -i ./rat-ests/rn_est -d ./dbs/rat.1.rna.fna -p blastn -e 0.05 -v 5 -b 5 -T F -m 9 -o rat_blast_tab.txt -a 8
#!/bin/bash
#SBATCH --partition=cloud
#SBATCH --time=2:00:00
#SBATCH --ntasks=1
mkdir -p data/ref_genome
curl -L -o data/ref_genome/ecoli_rel606.fasta.gz ftp://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/000/017/985/GCA_000017985.1_ASM1798v1/GCA_000017985.1_ASM1798v1_genomic.fna.gz
sleep 30
gunzip data/ref_genome/ecoli_rel606.fasta.gz
curl -L -o sub.tar.gz https://downloader.figshare.com/files/14418248
sleep 60
tar xvf sub.tar.gz
mv sub/ data/trimmed_fastq_small
mkdir -p results/sam results/bam results/bcf results/vcf
module load BWA/0.7.17-intel-2017.u2
bwa index data/ref_genome/ecoli_rel606.fasta
bwa mem data/ref_genome/ecoli_rel606.fasta data/trimmed_fastq_small/SRR2584866_1.trim.sub.fastq data/trimmed_fastq_small/SRR2584866_2.trim.sub.fastq > results/sam/SRR2584866.aligned.sam
samtools view -S -b results/sam/SRR2584866.aligned.sam > results/bam/SRR2584866.aligned.bam
samtools sort -o results/bam/SRR2584866.aligned.sorted.bam results/bam/SRR2584866.aligned.bam
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=BEAST-test.slurm
#SBATCH -p cloud
# Run on 4 cores
#SBATCH --ntasks=4
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load Beast/2.3.1-intel-2016.u3
beast Dengue4.env.xml
#!/bin/bash
#SBATCH --time=24:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
# You might need an external license file
# export LM_LICENSE_FILE=port@licenseserver
module load COMSOL/5.2
# Example batch command from csiro.org.au
comsol batch -inputfile mymodel.mph -outputfile mymodelresult.mph -batchlog mybatch.log -j b1 -np 8 -mpmode owner
#!/bin/bash
# Name and Partition
#SBATCH --job-name=CPMD-test.slurm
#SBATCH -p cloud
# Run on two cores
#SBATCH --ntasks=2
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load CPMD/4.3-intel-2018.u4
# Example taken from Axel Kohlmeyer's classic tutorial
# http://www.theochem.ruhr-uni-bochum.de/~legacy.akohlmey/cpmd-tutor/index.html
mpiexec -np 2 cpmd.x 1-h2-wave.inp > 1-h2-wave.out
#!/bin/bash
#SBATCH --job-name=Cufflinks-test.slurm
#SBATCH -p cloud
# One task, two threads
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=2
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 1:00:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module purge
module load Cufflinks/2.2.1-GCC-4.9.2
# Set the Cufflinks environment
CUFFLINKS_OUTPUT="${PWD}"
cufflinks --quiet --num-threads $SLURM_CPUS_PER_TASK --output-dir $CUFFLINKS_OUTPUT sample.bam
#!/bin/bash
# Name and Partition
#SBATCH --job-name=Delft3D-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 1:00:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module purge
source /usr/local/module/spartan_old.sh
module load Delft3D/7545-intel-2016.u3
./run_all_examples.sh
#!/bin/bash
#SBATCH --job-name FDS_example_job
#How many nodes/cores? FDS is MPI enabled and can operate across multiple nodes
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#What is the maximum time this job is expected to take? (Walltime)
#Format: Days-Hours:Minutes:Seconds
#SBATCH --time=2-00:00:00
module load FDS
fds inputfile.fds outputfile.fds
#!/bin/bash
# Name and partition
#SBATCH --job-name=FFTW-test.slurm
#SBATCH -p cloud
# Run on single CPU. This can also run with MPI if you have a big problem.
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load FFTW/3.3.6-gompi-2017b
# Compile and execute
g++ fftw_example.c -lfftw3 -o fftw_example
./fftw_example > results.txt
# Example from : https://github.com/undees/fftw-example
#!/bin/bash
#SBATCH -p cloud
#SBATCH --ntasks=1
#SBATCH -t 0:15:00
module load FSL/5.0.9-centos6_64
# FSL needs to be sourced
source $FSLDIR/etc/fslconf/fsl.sh
srun bet /usr/local/common/FSL/intro/structural.nii.gz test1FSL -f 0.1
#!/bin/bash
#SBATCH -p cloud
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH -t 0:05:00
module load FSL/5.0.9-centos6_64
# FSL needs to be sourced
source $FSLDIR/etc/fslconf/fsl.sh
time bet /usr/local/common/FSL/intro/structural.nii.gz test1FSL -f 0.1
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=FreePascal-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# SBATCH --mail-type=ALL
# Load the environment variables
module purge
source /usr/local/module/spartan_old.sh
module load fpc/3.0.4
We have a FreePascal compiler on Spartan!

module purge
source /usr/local/module/spartan_old.sh
module load fpc/3.0.4

However, you will also need an fpc.cfg and an fp.cfg, for the command-line and GUI IDEs respectively. These include PATHs to the various units, etc.
They are all included in this directory, along with a simple "Hello World" program.

Compile with:

`fpc hello.pas`
#!/bin/bash
#SBATCH -p cloud
#SBATCH --ntasks=1
#SBATCH -t 00:15:00
We don't have the complete set of libraries installed (yet) on the compute nodes.
The following is an example session for visualisation.
[lev@cricetomys HPCshells]$ ssh spartan -X
..
[lev@spartan ~]$ module load FreeSurfer/6.0.0-GCC-4.9.2-centos6_x86_64
[lev@spartan ~]$ module load X11/20160819-GCC-4.9.2
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=GAMESS-test.slurm
#SBATCH -p cloud
# Run on 1 core
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# GAMESS likes memory!
#SBATCH --mem=64G
# Load the environment variables
module load GAMESS-US/20160708-GCC-4.9.2
rungms exam01.inp
# This is a directory for memory and program debugging and profiling.
# Launch an interactive job for these examples!
sinteractive --partition=physical --ntasks=2 --time=1:00:00
# Valgrind
# The test program valgrindtest.c is from Punit Guron. In this example the memory allocated to the pointer 'ptr' is never freed in the program.
# Load the module and compile with debugging symbols.
module load Valgrind/3.13.0-goolf-2015a
gcc -Wall -g valgrindtest.c -o valgrindtest
valgrind --leak-check=full ./valgrindtest 2> valgrind.out
# GDB
# Compile with debugging symbols. A good compiler will give a warning here; then run the program.
gcc -Wall -g gdbtest.c -o gdbtest
$ ./gdbtest
Enter the number: 3
The factorial of 3 is 0
# Load the GDB module e.g.,
module load GDB/7.8.2-goolf-2015a
# Launch GDB, set up a break point in the code, and execute
gdb gdbtest
..
(gdb) break 10
(gdb) run
(gdb) print j
# Basic commands in GDB
# run = run a program until the end, a SIGINT, or a breakpoint. Use Ctrl-C to stop
# break = set a breakpoint, by line number, function, etc. (shortcut b)
# list = list the code above and below where the program stopped (shortcut l)
# continue = resume execution of the program from where it stopped (shortcut c)
# print = print a variable (shortcut p)
# next, step = after a signal or breakpoint, use next and step to
# continue a program line-by-line.
# NB: next will go 'over' a function call to the next line of code,
# step will go 'into' the function call (shortcut s)
#
# Variables can be temporarily modified with the `set` command
# e.g., set j=1
# The code will hit the breakpoint where you can interrogate the variables.
# Testing the variable 'j' will show it has not been initialised.
# Create a new file, initialise j to 1, and test again.
cp gdbtest.c gdbtest2.c
gcc -Wall -g gdbtest2.c -o gdbtest2
$ ./gdbtest2
# There is still another bug! Can you find it? Use GDB to help.
# Once you have fixed the second bug, use diff and patch to fix the original.
# The -u option provides unified content for both files.
diff -u gdbtest.c gdbtest2.c > gdbpatch.patch
# The patch command will overwrite the source with the modifications
# specified in the destination. Test the original again!
patch gdbtest.c gdbpatch.patch
# For Gprof, instrumentation code is inserted with the `-pg` option when
# compiled.
#
# GPROF output consists of two parts: the flat profile and the call graph.
# The flat profile gives the total execution time spent in each function.
# The textual call graph shows, for each function:
# (a) who called it (parent) and (b) who it called (child subroutines).
#
# Sample program from Himanshu Arora, published on The Geek Stuff
# Compile, run the executable.
# Run the gprof tool. Various output options are available.
gcc -Wall -pg test_gprof.c test_gprof_new.c -o test_gprof
./test_gprof
gprof test_gprof gmon.out > analysis.txt
# For parallel applications each parallel process can be given its own
# output file, using the undocumented environment variable GMON_OUT_PREFIX.
# Then run the parallel application as normal.
# Each process will write its own binary gmon.out.<pid> profile.
# View the gmon.out files as one:
export GMON_OUT_PREFIX=gmon.out
mpicc -Wall -pg mpi-debug.c -o mpi-debug
srun -n2 mpi-debug
gprof mpi-debug gmon.out.*
# Last update 20190416 LL
#!/bin/bash
#SBATCH --job-name="Gaussian Test"
#SBATCH --ntasks=1
#SBATCH --time=0-0:10:00
# Change these as appropriate
INPUT_FILE="test0001.com"
OUTPUT_FILE="test0001.log"
module load Gaussian/g09
g09 < $INPUT_FILE > $OUTPUT_FILE
#!/bin/bash
# This script generates slurm scripts for the standard Gaussian tests.
# To submit the jobs use the following loop:
# for test in {0001..1044}; do sbatch job${test}.slurm; done
# Enjoy submitting 1044 Gaussian test jobs!
# Lev Lafayette, 2017
for test in {0001..1044}
do
cat <<- EOF > job${test}.slurm
#!/bin/bash
#SBATCH --job-name="Gaussian Test ${test}"
#SBATCH --partition=cloud
#SBATCH --ntasks=1
#SBATCH --time=12:00:00
module load Gaussian/g09TEST
g09 < test${test}.com > test${test}.log
EOF
done
We have GnuCOBOL on Spartan!
GnuCOBOL is a free COBOL compiler. Best of all, it's a transpiler, translating COBOL into C.
Which means parallel COBOL!
Various example programs from Lev Lafayette's talk to Linux Users of Victoria,
GnuCOBOL: A Gnu Life for an Old Workhorse, July 2016
http://levlafayette.com/files/2016cobol.pdf
Here are some tests:
`module load gnucobol/3.0-rc1-GCC-6.2.0`
`cobc -Wall -x -free hello.cob -o hello-world`
./hello-world
cobc -Wall -m -free hello.cob
cobc -Wall -C -free hello.cob
cobc -x shortest.cob
./shortest
cobc -x hello-trad.cob
./hello-trad
cobc -Wall -x -free luv.cob
./luv
cobc -Wall -free -x literals.cob
./literals
cobc -Wall -free -x posmov1.cob
./posmov1
cobc -Wall -free -x posmov2.cob
./posmov2
cobc -Wall -free -x redefines.cob
./redefines
cobc -Wall -free -x renames.cob
./renames
cobc -Wall -free -x posmov3.cob
./posmov3
cobc -Wall -free -x posmov4.cob
./posmov4
cobc -Wall -free -x class.cob
./class
cobc -Wall -free -x evaluate.cob
./evaluate
#!/bin/bash
#SBATCH -p cloud
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
module load Gurobi/7.0.1
export GRB_LICENSE_FILE=/usr/local/easybuild/software/Gurobi/gurobi.lic
time gurobi_cl misc07.mps
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=HMMER-test.slurm
#SBATCH -p cloud
# One task, multi-threaded by default
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module purge
module load HMMER/3.2.1-foss-2017b
# Build a profile from a basic Stockholm alignment file
hmmbuild globins4.hmm globins4.sto
# Search a profile against a sequence database.
hmmsearch globins4.hmm globins45.fa > searchresults.txt
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/.local/bin:$HOME/bin
export PATH
alias ls='ls -F'
alias cp='cp -i'
alias ll='ls -laxp'
alias lo='exit'
# Undocumented feature which sets the size to "unlimited".
# http://stackoverflow.com/questions/9457233/unlimited-bash-history
export HISTFILESIZE=
export HISTSIZE=
export HISTTIMEFORMAT="[%F %T] "
# Change the file location because certain bash sessions truncate .bash_history file upon close.
# http://superuser.com/questions/575479/bash-history-truncated-to-500-lines-on-each-login
export HISTFILE=~/.bash_eternal_history
# Force prompt to write history after every command.
# http://superuser.com/questions/20900/bash-history-loss
PROMPT_COMMAND="history -a; $PROMPT_COMMAND"
AWK examples, from Supercomputing with Linux, Lev Lafayette, VPAC, 2015
awk '$7=="A" { ++count } END { print count }' simple1.txt
awk '{sum+=$7} END {print sum}' simple2.txt
awk '{ for(i=1; i<=NF;i++) j+=$i; print j; j=0 }' simple3.txt
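A self-contained check of the first two one-liners, using made-up whitespace-separated data (the file names here are illustrative stand-ins for the simple*.txt data files, which are not shown):

```shell
#!/bin/bash
# Count rows whose seventh column is "A".
printf '%s\n' "x x x x x x A" "x x x x x x B" "x x x x x x A" > sample1.txt
awk '$7=="A" { ++count } END { print count }' sample1.txt
# Sum the seventh column.
printf '%s\n' "a b c d e f 10" "a b c d e f 32" > sample2.txt
awk '{sum+=$7} END {print sum}' sample2.txt
```

With this data the first command prints 2 and the second prints 42.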
#!/bin/bash
tar cvfz homeuser.tgz /home/lev/
#!/bin/bash
BU=homeuser$(date +%Y%m%d).tgz
tar cvfz $BU $(pwd)
#!/bin/bash
LIMIT=19 # Upper limit
echo
echo "Printing Numbers 1 through 20 (but breaks loop at 3)."
count=0
while [ "$count" -le "$LIMIT" ]
do
count=$(($count+1))
if [ "$count" -gt 2 ]
then
break # Skip entire rest of loop.
fi
echo -n "$count "
done
echo; echo; echo
exit 0
#!/bin/bash
# Handy Extract Program
if [[ -f $1 ]]; then
case $1 in
*.tar.bz2) tar xvjf $1 ;;
*.tar.gz) tar xvzf $1 ;;
*.bz2) bunzip2 $1 ;;
*.rar) unrar x $1 ;;
*.gz) gunzip $1 ;;
*.tar) tar xvf $1 ;;
*.tbz2) tar xvjf $1 ;;
*.tgz) tar xvzf $1 ;;
*.zip) unzip $1 ;;
*.Z) uncompress $1 ;;
*.7z) 7z x $1 ;;
*) echo "'$1' cannot be extracted via >extract<" ;;
esac
else
echo "'$1' is not a valid file!"
fi
#!/bin/bash
if [ "$1" == "-n" ];then
NAGIOS=1
fi
SQUEUE=/usr/local/slurm/latest/bin/squeue
#SACCT=/usr/local/slurm/latest/bin/sacct
if [ \! -x ${SQUEUE} ]; then
if [ "${NAGIOS}" ]; then
echo -n "WARNING: "
fi
echo "ERROR: no squeue - wrong machine?"
exit 1
fi
# If run by a normal user.
if [ $(id -u) != 0 ]
then
SQUEUE="$SQUEUE -u `whoami`"
fi
# If we have no jobs queued, abort
if [ -z "$(${SQUEUE} -h)" ]; then
if [ "${NAGIOS}" ]; then
echo -n "OK: "
fi
#echo "No jobs queued"
exit 0
fi
ACTIVEJOBS=$(${SQUEUE} -t R -ho "%.15i %D" | fgrep -vw 1 | awk '{print $1}' | tr '\n' ',' | sed -e 's/,$/\n/')
if [ -z ${ACTIVEJOBS} ]; then
if [ "${NAGIOS}" ]; then
echo -n "OK: "
fi
#echo "No bad jobs found"
exit 0
fi
JOBLIST=$(${SQUEUE} -o %i -hs -j ${ACTIVEJOBS} | awk -F. '{print $1}' | uniq -c | fgrep -w 2 | awk '{print $2}' | tr '\n' ',' | sed -e 's/,$//')
export SQUEUE_FORMAT='%.10i %.9P %.15u %.8D %.8C %N'
if [ -z "$JOBLIST" ]; then
if [ "${NAGIOS}" ]; then
echo -n "OK: "
fi
#echo "No bad jobs found"
exit 0;
fi
if [ "${NAGIOS}" ]; then
echo "CRITICAL: bad jobs found ${JOBLIST}"
exit 2
fi
echo ""
echo "The following candidate jobs were found:"
echo "----------------------------------------"
${SQUEUE} -o "${SQUEUE_FORMAT}" -j ${JOBLIST}
echo ""
#!/bin/bash
LIMIT=19 # Upper limit
echo
echo "Printing Numbers 1 through 20 (but not 3 and 11)."
count=0
while [ $count -le "$LIMIT" ]
do
count=$(($count+1))
if [ "$count" -eq 3 ] || [ "$count" -eq 11 ] # Excludes 3 and 11.
then
continue # Skip rest of this particular loop iteration.
fi
echo -n "$count " # This will not execute for 3 and 11.
done
echo; echo; echo
exit 0
#! /bin/bash
# Tests whether a specified file exists or not; illustrates if/then/else.
file=$1
if [ -e $file ]
then
echo -e "File $file exists"
else
echo -e "File $file doesn't exist"
fi
exit 0
#!/bin/bash
# Search for email addresses in file, extract, turn into csv with designated file name
# Constants
INPUT=${1}
OUTPUT=${2}
# Filecheck Subroutine
filecheck() {
if [ ! -f "$INPUT" ] || [ -z "$OUTPUT" ]; then
echo "Input file not found, or output file not specified. Exiting script."
exit 1
fi
}
# Search and Sort Subroutine
searchsort() {
grep --only-matching -E '[.[:alnum:]]+@[.[:alnum:]]+' $INPUT > $OUTPUT
sed -i 's/$/,/g' $OUTPUT
sort -u $OUTPUT -o $OUTPUT
sed -i '{:q;N;s/\n/ /g;t q}' $OUTPUT
}
# View and Print Subroutine
viewprint() {
echo "Data file extracted to" $OUTPUT
read -t5 -n1 -r -p "Press any key to see the list, sorted and with unique records"
if [ $? -eq 0 ]; then
echo A key was pressed
else
echo No key was pressed
exit
fi
less $OUTPUT |
# Output file piped through sort and uniq.
# Show that line extension still works with comments.
sort | uniq
}
main() {
filecheck
searchsort
viewprint
}
# Main function
main
exit
#!/bin/bash
subroutineA() {
codeblock
}
subroutineB() {
codeblock
}
main() {
subroutineA
subroutineB
}
main
exit
#!/bin/bash
# Enter two names when invoking script
# Define your function here
# Firstname and Surname are first two parameters.
Hello () {
echo "Hello World $Firstname $Surname"
return $(bc -l <<< ${#1}+${#2})
}
# Invoke your function
Hello ${1} ${2}
# Capture value returned by last command
echo The name has this many characters $?
exit
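A caveat on the `return` trick above: exit statuses are a single byte, so combined lengths over 255 wrap around. A sketch of the more robust stdout-capture pattern (the function name is invented for illustration):

```shell
#!/bin/bash
# Emit the combined length on stdout rather than in the exit status.
name_length() {
    echo $(( ${#1} + ${#2} ))
}
len=$(name_length "Ada" "Lovelace")
echo "The name has ${len} characters"
```

Here `$(...)` captures the echoed value, so any length can be returned safely.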
#!/bin/bash
for a in {1..99}
do
cat <<- EOF > job${a}
#!/bin/bash
#SBATCH -p cloud
#SBATCH --job-name=job${a}
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#echo $(pwd) >> results.txt
EOF
done
Talking chamber foxtrot@example.com as shewing an it minutes. Trees fully of blind do. Exquisite favourite at do extensive listening. Improve up
musical welcome he. Gay attended vicinity prepared now diverted. Esteems it ye sending reached lima@example.com as. Longer lively her design settle
tastes advice mrs off who.indigo@example.com kilo@example.com May indulgence difficulty ham can put especially. Bringing remember echo@example.com for
supplied her why was confined. Middleton principle did she procuring extensive believing add. Weather adapted prepare oh is calling. bravo@example.com
Far advanced settling say finished raillery. Offered chiefly farther of my no colonel shyness. hotel@example.com juliet@example.com Inhabit hearing
perhaps on ye do no. It maids decay as there he. Smallest on suitable disposed do although blessing he juvenile in. Society or if excited forbade.
Here name off yet delta@example.com she long sold easy whom. Differed oh cheerful procured pleasure securing suitable in. Hold rich on an he oh fine.
Chapter ability shyness alpha@example.com Inquietude simplicity terminated she compliment remarkably few her nay. The weeks are ham mike@.... asked
jokes. Neglected perceived shy nay concluded. Not mile draw plan snug charlie@example.com ext all. Houses latter an valley be indeed wished mere
golf@example.com In my. Money doubt oh drawn every or an china
# The following are simple examples of a "for" loop.
# You may require specific software installed e.g., ffmpeg, ImageMagick, LibreOffice, Calibre.
# Note the use of command substitution with $(command); sometimes you will see backticks instead (e.g., for i in * ; do mv $i `echo $i | tr "A-Z" "a-z"` ; done); this is not recommended.
for item in ./*.mp3 ; do ffmpeg -i "${item}" "${item/%mp3/ogg}" ; done
for item in ./*.jpeg ; do convert "$item" "${item%.*}.png" ; done
for item in ./*; do convert "$item" -define jpeg:extent=512kb "${item%.*}.jpg" ; done
for item in ./*.doc ; do /usr/bin/soffice --headless --convert-to-pdf "$item" ; done
for item in ./*.pdf ; do ebook-convert "$item" "${item}.mobi" ; done
# Loops can be applied in a step-wise manner.
$ cd ~/Genomics/shell_data
$ for filename in *.fastq
> do
> head -n 2 ${filename} >> seq_info.txt
> done
# Basename in a loop.
# Basename is removing a uniform part of a name from a list of files.
# In this case remove the .fastq extension and echo the output.
$ for filename in *.fastq
> do
> name=$(basename ${filename} .fastq)
> echo ${name}
> done
# What would happen if backticks were used instead of $() for shell substitution? What if someone mistook the backticks for single quotes?
for item in ./* ; do mv $item $(echo $item | tr "A-Z" "a-z") ; done
# What's wrong with spaces in filenames?
touch "This is a long file name"
for item in $(ls ./*); do echo ${item}; done
# The following example removes spaces from filenames. The script is designed to prevent expansion from the wildcard, but remember that a `mv` command will overwrite existing files that have the same name.
for item in ./*; do mv "$item" "$(echo "$item" | tr -d " ")"; done
# Finally a few simple examples of loops with conditional tests.
x=1; while [ $x -le 5 ]; do echo "While-do count up $x"; x=$(( $x + 1 )); done
x=5; until [ $x -le 0 ]; do echo "Until-do count down $x"; x=$(( $x - 1 )); done
x=1; until [ $x = 6 ]; do echo "Until-do count up $x"; x=$(( $x + 1 )); done
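Bash also offers a C-style arithmetic for loop that covers the same counting patterns; a minimal sketch:

```shell
#!/bin/bash
# C-style for: initialise, test, and increment inside (( )).
for (( x=1; x<=5; x++ )); do
    echo "For count up ${x}"
done
```

This avoids the explicit `$(( x + 1 ))` reassignment used in the while/until forms.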
# A while loop that reads in data from a file and runs a command on that data.
# This is what we used to originally set quotas on home and project directories.
# The 'read' command reads one line from standard input or a specified file.
while read line; do sleep 5; ./setquota.sh $line; done < quotalist.txt
# when searching for lines that contain a particular sequence in a file (e.g., from grep), reading those lines for processing can be accomplished with the something like the following:
grep sequence datafile.dat | while read -r line ; do
echo "Processing $line"
# Processing code #
done
# Curly braces are used to encapsulate statements or variables with {} or ${}
var=value # Set a variable
echo $var # Invoke the variable
echo ${var}bar # Invoke the variable, append "bar".
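Braces also enable parameter expansion, which the conversion loops earlier rely on for stripping suffixes; a few common forms:

```shell
#!/bin/bash
file="archive.tar.gz"
echo "${file%.gz}"    # shortest matching suffix removed: archive.tar
echo "${file%%.*}"    # longest matching suffix removed: archive
echo "${file#*.}"     # shortest matching prefix removed: tar.gz
echo "${missing:-fallback}"   # default value when a variable is unset
```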
# Example of determining jobs running on a set of nodes.
for host in "spartan-rc"{001..10}; do squeue -w $host; done
#!/bin/bash
# Converts ("moves") all files given as arguments to lowercase.
for i
do
mv $i $(echo $i | tr "A-Z" "a-z")
done
#!/bin/bash
# Illustrates the difference between various types (and lack of) quoting.
SAMPLE="The quick brown fox jumps over the lazy dog"
echo "Double quotes gives you $SAMPLE"
echo 'Single quotes gives you $SAMPLE'
exit
#!/bin/bash
OPTIONS="Sedimentary Igneous Metamorphic Quit"
select opt in $OPTIONS; do
if [ "$opt" = "Quit" ]; then
echo done
exit
elif [ "$opt" = "Sedimentary" ]; then
echo "Sedimentary rocks are formed by sedimentation of particles at or near the Earth's surface and within bodies of water."
elif [ "$opt" = "Igneous" ]; then
echo "Igneous rock forms through the cooling and solidification of magma or lava."
elif [ "$opt" = "Metamorphic" ]; then
echo "Metamorphic rocks are formed by subjecting any rock type (sedimentary, igneous, or an older metamorphic rock) to temperature and pressure conditions different from those in which the original rock was formed."
else
echo "Select again; 1, 2, 3 or 4"
fi
done
A C C T A G T
C A A A G T A
C A T T A C C
A G T A C A A
1 2 3 4 5 6 7 8 9
2 3 4 5 6 7 8 9 10
3 4 5 6 7 8 9 11 12
1 2 3 4 5
6 7 8 9 10
11 12 13 14 15
#! /bin/bash
# Access to a system is tested with ping every few minutes until a connection is made whereupon it opens an SSH session.
read -p "Enter Hostname:" nethost
echo $nethost
until ping -c 1 $nethost
do
sleep 180;
done
ssh $nethost
#!/bin/bash
# Prevents use of Control-C to prematurely end important script.
# User can override if they're really, really sure.
ctrlc_count=0
function test_ctrlc()
{
let ctrlc_count++
echo
if [[ $ctrlc_count == 1 ]]; then
echo "Ctrl-C prevented unless you're sure."
elif [[ $ctrlc_count == 2 ]]; then
echo "Really sure?"
elif [[ $ctrlc_count == 3 ]]; then
echo "Really, really sure?"
else
echo "OK, you're really, really sure.."
exit
fi
}
trap test_ctrlc SIGINT
while true
do
echo "This is a sleeping loop. The loop that keeps on sleeping on."
sleep 2
done
exit
#!/bin/bash
# This is an abstract example of things that could go wrong!
#SBATCH --output=/home/example/data/output_%j.out
for for file in /home/example/data/*
do
sbatch application ${file}
done
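A safer version of the same loop can be tried outside the cluster; this is only a sketch, with `echo` standing in for `sbatch` and a hypothetical `data/` directory:

```shell
#!/bin/bash
# Sketch of a safer submission loop: quote expansions and skip
# anything that is not a regular file. echo stands in for sbatch
# so the loop can be run anywhere.
mkdir -p data
touch data/a.dat data/b.dat
for file in data/*
do
    [ -f "$file" ] || continue
    echo sbatch application "$file"
done
```

Quoting `"$file"` keeps filenames with spaces intact, and the `[ -f ... ]` guard skips subdirectories and an unmatched glob.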
......@@ -15,6 +15,8 @@
# SBATCH --mail-type=ALL
# Load the environment variables
module purge
source /usr/local/module/spartan_old.sh
module load HTSlib/1.9-intel-2018.u4
# Start the tabix binary from htslib
......
......@@ -4,10 +4,14 @@
sinteractive --nodes=1 --ntasks-per-node=2 --time=0:10:0
# Example interactive job that specifies cloud partition with X-windows forwarding, after logging in with secure X-windows forwarding. Note that X-windows forwarding is not highly recommended; try to do compute on Spartan and visualisation locally. However if one absolutely has to visualise from Spartan, the following can be used.
# Example multi-threaded application. Read file for instructions. Run it single-threaded and multi-threaded.
iterate.c
# Example interactive job with X-windows forwarding, after logging in with secure X-windows forwarding. Note that X-windows forwarding is not highly recommended; try to do compute on Spartan and visualisation locally. However if one absolutely has to visualise from Spartan, the following can be used.
ssh <username>@spartan.hpc.unimelb.edu.au -X
sinteractive -p cloud --x11=first
sinteractive --x11=first
xclock
# If you are running interactive jobs on GPU partitions you have to include the appropriate QOS commands or account.
......@@ -18,7 +22,6 @@ sinteractive --x11=first --partition=deeplearn --qos=gpgpudeeplearn --gres=gpu:v
sinteractive --partition=gpgpu --account=hpcadmingpgpu --gres=gpu:2
# If the user is not using a Linux local machine they will need to install an X-windows client, such as Xming for MS-Windows or X11 on Mac OSX from the XQuartz project.
# If you need to download files whilst on an interactive job you must use the University proxy.
......
The file `gattaca.txt` is used for diff examples in the Introductory course and for regular expressions in the Intermediate course.
The file `default.slurm` uses all the default values for slurm on this system; cloud partition, one node, one task, one cpu-per-task, no mail, jobid as job name, ten minute walltime, etc.
The file `default.slurm` uses all the default values for slurm on this system; physical partition, one node, one task, one cpu-per-task, no mail, jobid as job name, ten minute walltime, etc. It has no specific Slurm directives other than the default!
The file `specific.slurm` runs on a specific node. The list may be specified as a comma-separated list of hosts, a range of hosts (host[1-5,7,...] for example), or a filename.
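The diff exercise that `gattaca.txt` supports can be sketched locally with two small sequence files (illustrative data only, not the actual course file):

```shell
#!/bin/bash
# Create two small sequence files in the style of gattaca.txt
# (hypothetical data) and compare them with diff.
cat > seq_a.txt <<'EOF'
A C C T A G T
C A A A G T A
EOF
cat > seq_b.txt <<'EOF'
A C C T A G T
C A T A G T A
EOF
# diff prints the changed lines; exit status 1 means the files differ.
diff seq_a.txt seq_b.txt || true
```

The `|| true` keeps the script from aborting under `set -e`, since `diff` exits with status 1 whenever the files differ.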
......
#!/bin/bash
#SBATCH --partition=physical
# SBATCH --partition=physical
#SBATCH --constraint=physg4
#SBATCH --ntasks=72
# Load modules, show commands etc
......
......@@ -11,7 +11,7 @@ touch * # What are you thinking?!
rm * # Really?! You want to remove all files in your directory?
rm '*' # Safer, but shouldn't have been created in the first place.
# Best to keep to plain, old fashioned, alphanumerics. CamelCase is helpful.
# Best to keep to plain, old fashioned, alphanumerics. Snake_case or CamelCase is helpful.
touch "This_is_a_long_filename"
touch "ThisIsALongFilename"
......@@ -2,11 +2,14 @@ The following as some sinfo examples that you might find useful on Spartan.
`sinfo -s`
Provides summary information about the system's partitions, including the partition name, whether the partition is available, walltime limits, node information (allocated, idle, out, total), and the nodelist.
Provides summary information about the system's partitions, including the partition name, whether the partition is available, walltime limits, node information (allocated, idle, out, total), and the nodelist.
`sinfo -p $partition`
Provides information about the particular partition specified. Breaks sinfo up for that partition into node states (drain, drng, mix, alloc, idle) and the nodes in that state. `Drain` means that the node is marked for maintenance; whilst existing jobs will run, it will not accept new jobs.
Provides information about the particular partition specified. Breaks sinfo up for that partition into node states (drain, drng, mix, alloc, idle) and the nodes in that state. `Drain` means that the node is marked for maintenance; whilst existing jobs will run, it will not accept new jobs.
`sinfo -a`
......@@ -14,6 +17,7 @@ Similar to `sinfo -p` but for all partitions.
`sinfo -n $nodes -p $partition`
Print information only for specified nodes in specified partition; can use comma-separated values or range expression e.g., `sinfo -n spartan-rc[001-010] -p cloud`.
Print information only for specified nodes in specified partition; can use comma-separated values or range expression e.g., `sinfo -n spartan-bm[001-010]`.
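To see which hostnames a bracketed range such as `spartan-bm[001-010]` covers, bash brace expansion gives a quick local illustration (sinfo parses the range itself; this is only a sketch, and the node names are taken from the example above):

```shell
#!/bin/bash
# Expand the equivalent of the hostlist range spartan-bm[001-010]
# with brace expansion; zero-padding is preserved (bash 4+).
nodes=(spartan-bm{001..010})
echo "${nodes[@]}"
echo "count: ${#nodes[@]}"
```

This expands to `spartan-bm001` through `spartan-bm010`, which is handy for sanity-checking a range before passing it to `sinfo -n`.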
#!/bin/bash
#SBATCH --partition=cloud
#SBATCH --ntasks=1
#SBATCH --nodelist=spartan-rc005
#SBATCH --nodelist=spartan-bm005
# Alternative to exclude specific nodes.
# SBATCH --exclude=spartan-rc005
# SBATCH --exclude=spartan-bm005
echo $(hostname ) $SLURM_JOB_NAME running $SLURM_JOBID >> hostname.txt
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=JAGS-test.slurm
#SBATCH -p cloud
# Run on four CPUs
#SBATCH --ntasks=4
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 1:00:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youreamiladdress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load JAGS/4.3.0-intel-2017.u2
# Extract the classic BUGS examples
tar xzvf classic-bugs.tar.gz
sleep 240
cd classic-bugs/vol1
make -j4 check
cd ../vol2
make -j4 check
#!/bin/bash
#SBATCH -p cloud
#SBATCH --ntasks=1
module load Julia/0.6.0-binary
julia simple.jl
......@@ -21,8 +21,10 @@ function quadratic2(a::Float64, b::Float64, c::Float64)
end
vol = sphere_vol(3)
# @printf allows number formatting but does not automatically append the \n to statements, see below
@printf "volume = %0.3f\n" vol
# @printf "volume = %0.3f\n" vol
# @printf deprecated, removed from example, 202007LL
quad1, quad2 = quadratic2(2.0, -2.0, -12.0)
println("result 1: ", quad1)
......
......@@ -8,7 +8,7 @@ unset I_MPI_PMI_LIBRARY
In order to use LAMMPS with the GPU module enabled you need to use the -sf and -pk flags, as per the following command:
mpiexec -np 2 lmp_mpi -sf gpu -pk gpu 1 -in <in.input>
srun -n 2 lmp_mpi -sf gpu -pk gpu 1 -in <in.input>
the number after the -pk flag indicates the number of gpu instances you are requesting, and it should line up with the number requested in your gres gpu request. For example, a slurm script with the following line:
......
......@@ -2,8 +2,8 @@ It is not highly recommended, but if a user wants to do X-Windows forwarding wit
If the user is not using a Linux local machine they will need to install an X-windows client, such as Xming for MS-Windows or X11 on Mac OSX from the XQuartz project.
ssh <username>@spartan.hpc.unimelb.edu.au -Y
sinteractive -p cloud --x11=first
ssh <username>@spartan.hpc.unimelb.edu.au -X
sinteractive --x11=first
module load MATLAB/2017a
matlab
......
#!/bin/bash
#SBATCH -p physicaltest
#SBATCH --ntasks=1
module load MATLAB
module purge
source /usr/local/module/spartan_old.sh
module load MATLAB/2016a
matlab -nodesktop -nodisplay -nosplash < mypi.m
#!/bin/bash
#SBATCH -p physical
#SBATCH --ntasks=8
module load MATLAB
time matlab -nodesktop -nodisplay -nosplash < tictoc.m
time matlab -nodesktop -nodisplay -nosplash < tictoc-p.m
#!/bin/bash
#SBATCH -p cloud
#SBATCH --ntasks=1
module load MATLAB/2016a
matlab -nodesktop -nodisplay -nosplash < polar-plot.m
tic
n = 200;
A = 500;
n = 400;
A = 1000;
a = zeros(n);
parfor i = 1:n
a(i) = max(abs(eig(rand(A))));
......
tic
n = 200;
A = 500;
n = 400;
A = 1000;
a = zeros(n);
for i = 1:n
a(i) = max(abs(eig(rand(A))));
......
#!/bin/bash
# Name and partition
#SBATCH --job-name=Mathematica-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youreamiladdress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load Mathematica/12.0.0
# Read and evaluate the .m file
# Example derived from: https://pages.uoregon.edu/noeckel/Mathematica.html
math -noprompt -run "<<test.m" > output.txt
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=MrBayes-test.slurm
#SBATCH -p cloud
# Run on one core
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youreamiladdress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load MrBayes/3.2.6-intel-2016.u3
mb Dengue4.env.xml
# VMD psfgen example:
# this script generates a psf and pdb file of a given
# structure in preparation of a namd simulation.
#
#
# usage: at the command line type:
# vmd -dispdev text -e build_example.pgn
package require psfgen
topology top_all27_prot_na.rtf
# Alias residue names
alias residue HIS HSE
alias atom ILE CD1 CD
# Build protein segment
segment A {pdb 1ubq_chainA.pdb}
# Patch protein segment: for adding disulphide bonds etc: segment MF {pdb nad.pdb}
# patch DISU A:97 A:104
# patch TP2 A:199
# regenerate angles dihedrals
coordpdb 1ubq_chainA.pdb A
guesscoord
# Write structure and coordinate files
writepsf model_ubq_x.psf
writepdb model_ubq_x.pdb
# Still need to solvate and ionize!
# do that with vmd modules
exit
#!/bin/bash
# SLURM job script, Lev Lafayette (October 2016)
# Which partition?
#SBATCH --partition=cloud
# Job name:
#SBATCH --job-name ubiquitin2
# How many cores ?
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=8
# How long to run the job? (hours:minutes:seconds)
#SBATCH --time=00:15:00
# Environment variables to make it work:
# NAMD/2.10-intel-2016.u3-mpi
module load NAMD/2.10-gompi-2015a-mpi
# Launching the job!
mpiexec namd2 namd_1ubq_example.conf
##############################################################################
## Namd configuration file.
##############################################################################
# Simple NAMD script m.kuiper April 2012
#
# -this is an example configuration file to demonstrate a simple
# simulation of a molecular system within namd.
#
##############################################################################
## Input Parameters:
##############################################################################
structure 1ubq_example.psf
coordinates 1ubq_example.pdb
outputName 1ubq_output
firsttimestep 0
set Temp 310
temperature $Temp
##############################################################################
## Simulation Parameters:
##############################################################################
## Parameters ----------------------------------------------------------------
## -make sure to use the right parameter set!
paraTypeCharmm on
parameters par_all27_prot_na.prm
## Additional constraints: --------------------------------------------------
# -can use this section to constrain various parts of the simulation, -for
# example protein backbone, with either harmonic constraints or fixed atoms.
# - make sure to assign non-zero values to the B column of the pdb file and uncomment the
# appropriate section.
# constraints on
# consexp 2
# consref InputFiles/change_me.pdb
# conskfile InputFiles/change_me.pdb
# conskcol B
# constraintScaling 0.5
# fixedAtoms on
# fixedAtomsFile InputFiles/change_me.pdb
# fixedAtomsCol B
## Example of interactive molecular dynamics: (Uncomment next 4 lines)
# IMDon on
# IMDport 5678
# IMDfreq 10
# IMDwait no
## parameter settings:-------------------------------------------------------
# Force-Field Parameters
exclude scaled1-4
1-4scaling 1.0
cutoff 12
switching on
switchdist 10
pairlistdist 14
# Integrator Parameters
timestep 2
rigidBonds all
nonbondedFreq 1
fullElectFrequency 2
stepspercycle 10
# Constant Temperature Control
langevin on
langevinDamping 5
langevinTemp $Temp
langevinHydrogen off
## Periodic Boundary Conditions: ----------------------------------------------
# make sure to check that the cell dimensions match your input files!
cellBasisVector1 48. 0. 0.
cellBasisVector2 0. 48. 0.
cellBasisVector3 0. 0. 48.
cellOrigin 0 0 0
wrapAll on
wrapWater on
## PME (for full-system periodic electrostatics) -------------------------------
PME yes
PMEGridSpacing 1.0
## Constant Pressure Control (variable volume) ---------------------------------
useGroupPressure yes
useFlexibleCell yes
useConstantArea yes
langevinPiston on
langevinPistonTarget 1.01325
langevinPistonPeriod 100.
langevinPistonDecay 50.
langevinPistonTemp $Temp
## Output settings: -----------------------------------------------------------
## ** Note! ** this file is for a short example run. dcdfreq is *very* small
## which means data will be written out very frequently leading to HUGE files.
## Make sure to change for production runs.
restartfreq 5000
dcdfreq 100
xstFreq 100
outputEnergies 100
outputPressure 100
outputTiming 100
###############################################################################
## Execution script:
###############################################################################
minimize 500
run 10000
##############################################################################
## Namd configuration file.
##############################################################################
# Simple NAMD restart script m.kuiper April 2012
#
# -this is an example configuration file to demonstrate a simple
# simulation restart of a molecular system within namd.
#
##############################################################################
## Input Parameters:
##############################################################################
structure 1ubq_example.psf
coordinates 1ubq_example.pdb
outputName 1ubq_restart_output_run_1
# firsttimestep 0
set Temp 310
# temperature $Temp <- temperature not needed in restart file!
##############################################################################
## Simulation Parameters:
##############################################################################
## Parameters ----------------------------------------------------------------
## -make sure to use the right parameter set!
paraTypeCharmm on
parameters par_all27_prot_na.prm
set inputname 1ubq_output.restart
binCoordinates $inputname.coor ;# coordinates from last run (binary)
binVelocities $inputname.vel ;# velocities from last run (binary)
extendedSystem $inputname.xsc ;# cell dimensions from last run
firsttimestep 0 ;# last step of previous run
numsteps 50000 ;# run stops when this step is reached
## Additional constraints: --------------------------------------------------
# -can use this section to constrain various parts of the simulation, -for
# example protein backbone, with either harmonic constraints or fixed atoms.
# - make sure to assign non-zero values to the B column of the pdb file and uncomment the
# appropriate section.
# constraints on
# consexp 2
# consref InputFiles/change_me.pdb
# conskfile InputFiles/change_me.pdb
# conskcol B
# constraintScaling 0.5
# fixedAtoms on
# fixedAtomsFile InputFiles/change_me.pdb
# fixedAtomsCol B
## Example of interactive molecular dynamics: (Uncomment next 4 lines)
# IMDon on
# IMDport 5678
# IMDfreq 10
# IMDwait no
## parameter settings:-------------------------------------------------------
# Force-Field Parameters
exclude scaled1-4
1-4scaling 1.0
cutoff 12
switching on
switchdist 10
pairlistdist 14
# Integrator Parameters
timestep 2
rigidBonds all
nonbondedFreq 1
fullElectFrequency 2
stepspercycle 10
# Constant Temperature Control
langevin on
langevinDamping 5
langevinTemp $Temp
langevinHydrogen off
## Periodic Boundary Conditions: ----------------------------------------------
# make sure to check that the cell dimensions match your input files!
#cellBasisVector1 48. 0. 0. <- not needed in restart file!
#cellBasisVector2 0. 48. 0.
#cellBasisVector3 0. 0. 48.
#cellOrigin 0 0 0
wrapAll on
wrapWater on
## PME (for full-system periodic electrostatics) -------------------------------
PME yes
PMEGridSpacing 1.0
## Constant Pressure Control (variable volume) ---------------------------------
useGroupPressure yes
useFlexibleCell yes
useConstantArea yes
langevinPiston on
langevinPistonTarget 1.01325
langevinPistonPeriod 100.
langevinPistonDecay 50.
langevinPistonTemp $Temp
## Output settings: -----------------------------------------------------------
## ** Note! ** this file is for a short example run. dcdfreq is *very* small
## which means data will be written out very frequently leading to HUGE files.
## Make sure to change for production runs.
restartfreq 50000
dcdfreq 1000
xstFreq 1000
outputEnergies 1000
outputPressure 1000
outputTiming 1000
###############################################################################
## Execution script:
###############################################################################
# minimize 500 <- not needed for restart file
# run 10000
#!/bin/bash
## sbatch launching script March 2014 m.kuiper
## - A script to run a simple NAMD job on the vlsci BlueGene/Q cluster, Avoca.
## - using a restart script
#SBATCH --nodes=4
ntpn=8 # number of tasks per node:
ppn=8 # processors per node:
## 60 minute walltime:
#SBATCH --time=1:0:0
# add your account number here if you have multiple accounts.
##SBATCH --account=VLSCI
module load namd-xl-pami-smp/2.9
# Note: newer versions of namd may exist. Alter module accordingly.
# -Submit the job: -----------------------
srun --ntasks-per-node $ntpn namd2 +ppn $ppn namd_1ubq_restart_example.conf > Namd_1ubq_restart_example_output.txt
#!/bin/bash
## sbatch launching script March 2014 m.kuiper
## - A script to run a simple NAMD job on the vlsci BlueGene/Q cluster, Avoca.
#SBATCH --nodes=4
ntpn=8 # number of tasks per node:
ppn=8 # processors per node:
## 30 minute walltime:
#SBATCH --time=0:30:0
# add your account number here if you have multiple accounts.
##SBATCH --account=VLSCI
module load namd-xl-pami-smp/2.9
# Note: newer versions of namd may exist. Alter module accordingly.
# -Submit the job: -----------------------
srun --ntasks-per-node $ntpn namd2 +ppn $ppn namd_1ubq_example.conf > Namd_1ubq_example_output.txt
#############################################################
## JOB DESCRIPTION ##
#############################################################
#
# Example configuration file for a NAMD simulation
# of ubiquitin. (In vacuum, for quick demonstration purposes!)
# by Mike Feb 2008
#
# Some features have been commented out, -uncomment sections
# to enable!
##############################################################
## INPUT FILES
#############################################################
structure 1ubq_example.psf
coordinates 1ubq_example.pdb
outputName 1ubq_example_output_01
firsttimestep 0
## set simulation temperature (in Kelvin):
set temp 310
temperature $temp
## SIMULATION PARAMETERS
###########################################################
## Parameter file:
paraTypeCharmm on
parameters par_all27_prot_na.inp
## Force-Field Parameters
exclude scaled1-4
1-4scaling 1.0
cutoff 18
switching on
switchdist 16
pairlistdist 20
## Integrator Parameters
timestep 1
rigidBonds all
nonbondedFreq 1
fullElectFrequency 2
stepspercycle 10
## Constant Temperature Control
langevin on
langevinDamping 5
langevinTemp $temp
langevinHydrogen off
## Periodic Boundary Conditions
cellBasisVector1 42. 0. 0.
cellBasisVector2 0. 42. 0.
cellBasisVector3 0. 0. 42.
cellOrigin 0 0 0
wrapAll on
wrapWater on
## PME (for full-system periodic electrostatics)
## (uncomment next 4 lines to use PME )
# PME yes
# PMEGridSizeX 42
# PMEGridSizeY 42
# PMEGridSizeZ 42
## Constant Pressure Control (variable volume)
useGroupPressure yes
useFlexibleCell no
useConstantArea no
langevinPiston on
langevinPistonTarget 1.01325
langevinPistonPeriod 100.
langevinPistonDecay 50.
langevinPistonTemp $temp
## Output files:
restartfreq 20000
dcdfreq 50
xstFreq 20000
outputEnergies 50
outputPressure 50
outputTiming 1000
## EXECUTION SCRIPT
#############################################################
## Minimize , reinitialize velocities, run dynamics:
minimize 100
reinitvels $temp
run 5000
#!/bin/bash
# SLURM job script, Lev Lafayette (August 2016)
# Derived from PBS/TORQUE script by Mike Kuiper (March 2007)
# Job name:
#SBATCH --job-name namd_example_job_01
# How many cores ?
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
# How long to run the job? (hours:minutes:seconds)
#SBATCH --time=00:15:00
# Environment variables to make it work:
module load NAMD/2.10-gompi-2015a-mpi
# Launching the job!
srun namd2 Ubiquitin_example.conf
#!/bin/bash
#SBATCH -p cloud
#SBATCH --time=0-00:05:00
#SBATCH --nodes=1
#SBATCH --ntasks=4
module load ORCA/4_1_0-linux_x86-64-OpenMPI-3.1.3
$EBROOTORCA/orca orca.in 1> orcaNEW3.out