Commit 99118f0a authored by root

Add more samples

parent 6412db16
BLAST/dbs
BLAST/rat-ests
Cufflinks/sample.bam
digits/digits.img
FreeSurfer/buckner_data
FreeSurfer/buckner_data-tutorial_subjs.tar.gz
......
NAMD/stmv
Python/minitwitter.csv
SAMtools/sample.sam.gz
Singularity/vsoch-hello-world-master.simg
Trimmomatic/*.gz
Trimmomatic/*.fa
Trimmomatic/.backup
#!/bin/bash
# Job name and partition
#SBATCH --job-name=ARAGORN-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load ARAGORN/1.2.36-GCC-4.9.2
# Run the application
aragorn -o results sample.fa
>derice
ACTGACTAGCTAGCTAACTG
>sanka
GCATCGTAGCTAGCTACGAT
>junior
CATCGATCGTACGTACGTAG
>yul
ATCGATCGATCGTACGATCG
#!/bin/bash
# Job name and partition
#SBATCH --job-name=BAMM-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Speciation-extinction analyses
# You must have an ultrametric phylogenetic tree.
# Load the environment variables
module load BAMM/2.5.0-spartan_intel-2017.u2
# Example from: `http://bamm-project.org/quickstart.html`
# To run bamm you must specify a control file.
# The following is for diversification.
# You may wish to use traits instead
# bamm -c template_trait.txt
bamm -c template_diversification.txt
# BAMM configuration file for speciation/extinction analysis
# ==========================================================
#
# Format
# ------
#
# - Each option is specified as: option_name = option_value
# - Comments start with # and go to the end of the line
# - True is specified with "1" and False with "0"
################################################################################
# GENERAL SETUP AND DATA INPUT
################################################################################
modeltype = speciationextinction
# Specify "speciationextinction" or "trait" analysis
treefile = whaletree.txt
# File name of the phylogenetic tree to be analyzed
runInfoFilename = run_info.txt
# File name to output general information about this run
sampleFromPriorOnly = 0
# Whether to perform analysis sampling from prior only (no likelihoods computed)
runMCMC = 1
# Whether to perform the MCMC simulation. If runMCMC = 0, the program will only
# check whether the data file can be read and the initial likelihood computed
simulatePriorShifts = 1
# Whether to simulate the prior distribution of the number of shift events,
# given the hyperprior on the Poisson rate parameter. This is necessary to
# compute Bayes factors
loadEventData = 0
# Whether to load a previous event data file
eventDataInfile = event_data_in.txt
# File name of the event data file to load, used only if loadEventData = 1
initializeModel = 1
# Whether to initialize (but not run) the MCMC. If initializeModel = 0, the
# program will only ensure that the data files (e.g., treefile) can be read
useGlobalSamplingProbability = 1
# Whether to use a "global" sampling probability. If False (0), expects a file
# name for species-specific sampling probabilities (see sampleProbsFilename)
globalSamplingFraction = 1.0
# The sampling probability. If useGlobalSamplingProbability = 0, this is ignored
# and BAMM looks for a file name with species-specific sampling fractions
sampleProbsFilename = sample_probs.txt
# File name containing species-specific sampling fractions
# seed = 12345
# Seed for the random number generator.
# If not specified (or is -1), a seed is obtained from the system clock
overwrite = 0
# If True (1), the program will overwrite any output files in the current
# directory (if present)
################################################################################
# PRIORS
################################################################################
expectedNumberOfShifts = 1.0
# Prior on the number of shifts in diversification
# Suggested values:
# expectedNumberOfShifts = 1.0 for small trees (< 500 tips)
# expectedNumberOfShifts = 10 or even 50 for large trees (> 5000 tips)
lambdaInitPrior = 1.0
# Prior (rate parameter of exponential) on the initial lambda value for rate
# regimes
lambdaShiftPrior = 0.05
# Prior (std dev of normal) on lambda shift parameter for rate regimes
# You cannot adjust the mean of this distribution (fixed at zero, which is
# equal to a constant rate diversification process)
muInitPrior = 1.0
# Prior (rate parameter of exponential) on extinction rates
lambdaIsTimeVariablePrior = 1
# Prior (probability) of the time mode being time-variable (vs. time-constant)
################################################################################
# MCMC SIMULATION SETTINGS & OUTPUT OPTIONS
################################################################################
numberOfGenerations = 5000
# Number of generations to perform MCMC simulation
mcmcOutfile = mcmc_out.txt
# File name for the MCMC output, which only includes summary information about
# MCMC simulation (e.g., log-likelihoods, log-prior, number of processes)
mcmcWriteFreq = 1000
# Frequency at which to write the MCMC output to a file
eventDataOutfile = event_data.txt
# The raw event data (these are the main results). ALL of the results are
# contained in this file, and all branch-specific speciation rates, shift
# positions, marginal distributions etc can be reconstructed from this output.
# See R package BAMMtools for working with this output
eventDataWriteFreq = 1000
# Frequency at which to write the event data to a file
printFreq = 100
# Frequency at which to print MCMC status to the screen
acceptanceResetFreq = 1000
# Frequency at which to reset the acceptance rate calculation
# The acceptance rate is output to both the MCMC data file and the screen
# outName = BAMM
# Optional name that will be prefixed on all output files (separated with "_")
# If commented out, no prefix will be used
################################################################################
# OPERATORS: MCMC SCALING OPERATORS
################################################################################
updateLambdaInitScale = 2.0
# Scale parameter for updating the initial speciation rate for each process
updateLambdaShiftScale = 0.1
# Scale parameter for the exponential change parameter for speciation
updateMuInitScale = 2.0
# Scale parameter for updating initial extinction rate for each process
updateEventLocationScale = 0.05
# Scale parameter for updating LOCAL moves of events on the tree
# This defines the width of the sliding window proposal
updateEventRateScale = 4.0
# Scale parameter (proportional shrinking/expanding) for updating
# the rate parameter of the Poisson process
################################################################################
# OPERATORS: MCMC MOVE FREQUENCIES
################################################################################
updateRateEventNumber = 0.1
# Relative frequency of MCMC moves that change the number of events
updateRateEventPosition = 1
# Relative frequency of MCMC moves that change the location of an event on the
# tree
updateRateEventRate = 1
# Relative frequency of MCMC moves that change the rate at which events occur
updateRateLambda0 = 1
# Relative frequency of MCMC moves that change the initial speciation rate
# associated with an event
updateRateLambdaShift = 1
# Relative frequency of MCMC moves that change the exponential shift parameter
# of the speciation rate associated with an event
updateRateMu0 = 1
# Relative frequency of MCMC moves that change the extinction rate for a given
# event
updateRateLambdaTimeMode = 0
# Relative frequency of MCMC moves that flip the time mode
# (time-constant <=> time-variable)
localGlobalMoveRatio = 10.0
# Ratio of local to global moves of events
################################################################################
# INITIAL PARAMETER VALUES
################################################################################
lambdaInit0 = 0.032
# Initial speciation rate (at the root of the tree)
lambdaShift0 = 0
# Initial shift parameter for the root process
muInit0 = 0.005
# Initial value of extinction (at the root)
initialNumberEvents = 0
# Initial number of non-root processes
################################################################################
# METROPOLIS COUPLED MCMC
################################################################################
numberOfChains = 4
# Number of Markov chains to run
deltaT = 0.01
# Temperature increment parameter. This value should be > 0
# The temperature for the i-th chain is computed as 1 / [1 + deltaT * (i - 1)]
swapPeriod = 1000
# Number of generations in which to propose a chain swap
chainSwapFileName = chain_swap.txt
# File name in which to output data about each chain swap proposal.
# The format of each line is [generation],[rank_1],[rank_2],[swap_accepted]
# where [generation] is the generation in which the swap proposal was made,
# [rank_1] and [rank_2] are the chains that were chosen, and [swap_accepted] is
# whether the swap was made. The cold chain has a rank of 1.
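The chain-swap log format described above lends itself to a quick acceptance-rate check with awk. A minimal sketch with a hypothetical chain_swap.txt (the file name matches the config above; the contents here are made up):

```shell
# Hypothetical chain_swap.txt: generation,rank_1,rank_2,swap_accepted
cat > chain_swap.txt <<'EOF'
1000,1,2,1
2000,2,3,0
3000,1,3,1
4000,1,2,0
EOF
# Overall fraction of accepted swap proposals (2 of 4 here -> 0.50)
awk -F, '{ n++; acc += $4 } END { printf "%.2f\n", acc / n }' chain_swap.txt
```

For a real run, point awk at the chainSwapFileName produced by BAMM.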
################################################################################
# NUMERICAL AND OTHER PARAMETERS
################################################################################
minCladeSizeForShift = 1
# Allows you to constrain location of possible rate-change events to occur
# only on branches with at least this many descendant tips. A value of 1
# allows shifts to occur on all branches.
segLength = 0.02
# Controls the "grain" of the likelihood calculations. Approximates the
# continuous-time change in diversification rates by breaking each branch into
# constant-rate diversification segments, with each segment given a length
# determined by segLength. segLength is in units of the root-to-tip distance of
# the tree. So, if the segLength parameter is 0.01, and the crown age of your
# tree is 50, the "step size" of the constant rate approximation will be 0.5.
# If the value is greater than the branch length (e.g., you have a branch of
# length < 0.5 in the preceding example) BAMM will not break the branch into
# segments but use the mean rate across the entire branch.
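Given deltaT = 0.01 and numberOfChains = 4 as set above, the temperature ladder can be previewed without running BAMM; a quick awk sketch of the formula 1 / [1 + deltaT * (i - 1)]:

```shell
# Temperature of the i-th chain: 1 / (1 + deltaT * (i - 1)); chain 1 is the cold chain
awk 'BEGIN {
    deltaT = 0.01
    for (i = 1; i <= 4; i++)
        printf "chain %d: %.4f\n", i, 1 / (1 + deltaT * (i - 1))
}'
```

With these settings the cold chain sits at 1.0000 and the heated chains at roughly 0.9901, 0.9804 and 0.9709.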
# BAMM configuration file for phenotypic analysis
# ===============================================
#
# Format
# ------
#
# - Each option is specified as: option_name = option_value
# - Comments start with # and go to the end of the line
# - True is specified with "1" and False with "0"
################################################################################
# GENERAL SETUP AND DATA INPUT
################################################################################
modeltype = trait
# Specify "speciationextinction" or "trait" analysis
treefile = %%%%
# File name of the phylogenetic tree to be analyzed
traitfile = %%%%
# File name of the phenotypic traits file
runInfoFilename = run_info.txt
# File name to output general information about this run
sampleFromPriorOnly = 0
# Whether to perform analysis sampling from prior only (no likelihoods computed)
runMCMC = 1
# Whether to perform the MCMC simulation. If runMCMC = 0, the program will only
# check whether the data file can be read and the initial likelihood computed
simulatePriorShifts = 1
# Whether to simulate the prior distribution of the number of shift events,
# given the hyperprior on the Poisson rate parameter. This is necessary to
# compute Bayes factors
loadEventData = 0
# Whether to load a previous event data file
eventDataInfile = event_data_in.txt
# File name of the event data file to load, used only if loadEventData = 1
initializeModel = 1
# Whether to initialize (but not run) the MCMC. If initializeModel = 0, the
# program will only ensure that the data files (e.g., treefile) can be read
# seed = 12345
# Seed for the random number generator.
# If not specified (or is -1), a seed is obtained from the system clock
overwrite = 0
# If True (1), the program will overwrite any output files in the current
# directory (if present)
################################################################################
# PRIORS
################################################################################
expectedNumberOfShifts = 1.0
# Prior on the number of shifts in diversification
# Suggested values:
# expectedNumberOfShifts = 1.0 for small trees (< 500 tips)
# expectedNumberOfShifts = 10 or even 50 for large trees (> 5000 tips)
betaInitPrior = 1.0
# Prior (rate parameter of exponential) on the initial
# phenotypic evolutionary rate associated with regimes
betaShiftPrior = 0.05
# Prior (std dev of normal) on the rate-change parameter
# You cannot adjust the mean of this distribution (fixed at zero, which is
# equal to a constant rate diversification process)
useObservedMinMaxAsTraitPriors = 1
# If True (1), will put a uniform prior density on the distribution
# of ancestral character states, with upper and lower bounds determined
# by the min and max of the observed data
traitPriorMin = 0
# User-defined minimum value for the uniform density on the distribution of
# ancestral character states. Only used if useObservedMinMaxAsTraitPriors = 0.
traitPriorMax = 0
# User-defined maximum value for the uniform density on the distribution of
# ancestral character states. Only used if useObservedMinMaxAsTraitPriors = 0.
betaIsTimeVariablePrior = 1
# Prior (probability) of the time mode being time-variable (vs. time-constant)
################################################################################
# MCMC SIMULATION SETTINGS & OUTPUT OPTIONS
################################################################################
numberOfGenerations = %%%%
# Number of generations to perform MCMC simulation
mcmcOutfile = mcmc_out.txt
# File name for the MCMC output, which only includes summary information about
# MCMC simulation (e.g., log-likelihoods, log-prior, number of processes)
mcmcWriteFreq = %%%%
# Frequency at which to write the MCMC output to a file
eventDataOutfile = event_data.txt
# The raw event data (these are the main results). ALL of the results are
# contained in this file, and all branch-specific speciation rates, shift
# positions, marginal distributions etc can be reconstructed from this output.
# See R package BAMMtools for working with this output
eventDataWriteFreq = %%%%
# Frequency at which to write the event data to a file
printFreq = %%%%
# Frequency at which to print MCMC status to the screen
acceptanceResetFreq = 1000
# Frequency at which to reset the acceptance rate calculation
# The acceptance rate is output to both the MCMC data file and the screen
# outName = BAMM
# Optional name that will be prefixed on all output files (separated with "_")
# If commented out, no prefix will be used
################################################################################
# OPERATORS: MCMC SCALING OPERATORS
################################################################################
updateBetaInitScale = 1
# Scale operator for proportional shrinking-expanding move to update
# initial phenotypic rate for rate regimes
updateBetaShiftScale = 1
# Scale operator for sliding window move to update the rate-change parameter
updateNodeStateScale = 1
# Scale operator for sliding window move to update ancestral states
# at internal nodes
updateEventLocationScale = 0.05
# Scale parameter for updating LOCAL moves of events on the tree
# This defines the width of the sliding window proposal
updateEventRateScale = 4.0
# Scale parameter (proportional shrinking/expanding) for updating
# the rate parameter of the Poisson process
################################################################################
# OPERATORS: MCMC MOVE FREQUENCIES
################################################################################
updateRateEventNumber = 1
# Relative frequency of MCMC moves that change the number of events
updateRateEventPosition = 1
# Relative frequency of MCMC moves that change the location of an event
# on the tree
updateRateEventRate = 1
# Relative frequency of MCMC moves that change the rate at which events occur
updateRateBeta0 = 1
# Relative frequency of MCMC moves that change the initial phenotypic rate
# associated with an event
updateRateBetaShift = 1
# Relative frequency of MCMC moves that change the exponential shift parameter
# of the phenotypic rate associated with an event
updateRateNodeState = 25
# Relative frequency of MCMC moves that update the value of ancestral
# character states. You have as many ancestral states as you have
# internal nodes in your tree, so there are a lot of parameters:
# you should update this much more often than you update the event-associated
# parameters.
updateRateBetaTimeMode = 0
# Relative frequency of MCMC moves that flip the time mode
# (time-constant <=> time-variable)
localGlobalMoveRatio = 10.0
# Ratio of local to global moves of events
################################################################################
# INITIAL PARAMETER VALUES
################################################################################
betaInit = 0.5
# Initial value of the phenotypic evolutionary process at the root of the tree
betaShiftInit = 0
# Initial value of the exponential change parameter for the phenotypic
# evolutionary process at the root of the tree. A value of zero implies
# time-constant rates
initialNumberEvents = 0
# Initial number of non-root processes
################################################################################
# METROPOLIS COUPLED MCMC
################################################################################
numberOfChains = 4
# Number of Markov chains to run
deltaT = 0.01
# Temperature increment parameter. This value should be > 0
# The temperature for the i-th chain is calculated as 1 / [1 + deltaT * (i - 1)]
swapPeriod = 1000
# Number of generations in which to propose a chain swap
chainSwapFileName = chain_swap.txt
# File name in which to output data about each chain swap proposal.
# The format of each line is [generation],[rank_1],[rank_2],[swap_accepted]
# where [generation] is the generation in which the swap proposal was made,
# [rank_1] and [rank_2] are the chains that were chosen, and [swap_accepted] is
# whether the swap was made. The cold chain has a rank of 1.
(((Balaena_mysticetus:8.81601900,(Eubalaena_australis:1.62202100,(Eubalaena_glacialis:0.34702900,Eubalaena_japonica:0.34702900):1.27499200):7.19399800):19.18398100,(Caperea_marginata:26.06301600,(Eschrichtius_robustus:17.89065500,(((Balaenoptera_acutorostrata:5.27807100,Balaenoptera_bonaerensis:5.27807100):9.82125300,(Balaenoptera_physalus:10.47223400,Megaptera_novaeangliae:10.47223400):4.62709000):0.96736100,(Balaenoptera_musculus:12.84739500,(Balaenoptera_omurai:11.38295800,(Balaenoptera_borealis:5.26532500,(Balaenoptera_brydei:4.32202200,Balaenoptera_edeni:4.32202200):0.94330300):6.11763300):1.46443700):3.21929000):1.82397000):8.17236100):1.93698400):7.85784400,((Physeter_catodon:22.04439100,(Kogia_breviceps:8.80301800,Kogia_simus:8.80301800):13.24137300):11.75461200,((Platanista_gangetica:0.28307000,Platanista_minor:0.28307000):32.10759100,((Tasmacetus_shepherdi:19.19566400,((Berardius_arnuxii:6.28945000,Berardius_bairdii:6.28945000):11.73396200,(Ziphius_cavirostris:15.66970200,((Indopacetus_pacificus:11.02830400,(Hyperoodon_ampullatus:8.10026600,Hyperoodon_planifrons:8.10026600):2.92803800):3.51197900,(Mesoplodon_bidens:13.04286900,(Mesoplodon_traversii:11.07929300,(Mesoplodon_ginkgodens:8.92594300,(Mesoplodon_europaeus:8.25210300,(Mesoplodon_mirus:7.67742400,((Mesoplodon_bowdoini:4.73244800,(Mesoplodon_carlhubbsi:4.17096800,Mesoplodon_layardii:4.17096800):0.56148000):2.44423200,(Mesoplodon_hectori:6.37563100,((Mesoplodon_densirostris:4.92715900,Mesoplodon_stejnegeri:4.92715900):0.86942800,(Mesoplodon_grayi:5.09638400,(Mesoplodon_perrini:4.16622600,Mesoplodon_peruvianus:4.16622600):0.93015800):0.70020200):0.57904500):0.80104900):0.50074400):0.57467900):0.67384000):2.15335000):1.96357700):1.49741400):1.12941900):2.35371000):1.17225200):12.42586500,((Lipotes_vexillifer:24.69821400,(Inia_geoffrensis:18.22641900,Pontoporia_blainvillei:18.22641900):6.47179400):1.30178600,(((Delphinapterus_leucas:5.46643200,Monodon_monoceros:5.46643200):8.59512200,((Neophocaena_phocaenoides:4.98587600,(Phocoena_phocoena:3.70627600,Phocoenoides_dalli:3.70627600):1.27960000):0.63050500,(Phocoena_sinus:4.94717000,(Phocoena_dioptrica:4.04570300,Phocoena_spinipinnis:4.04570300):0.90146700):0.66921000):8.44517400):3.87787200,(Orcinus_orca:10.70277000,((Orcaella_brevirostris:8.20905000,(Grampus_griseus:6.04703000,(Pseudorca_crassidens:5.49324100,(Feresa_attenuata:4.45272000,(Peponocephala_electra:3.04867500,(Globicephala_macrorhynchus:1.47078200,Globicephala_melas:1.47078200):1.57789300):1.40404500):1.04052100):0.55379000):2.16202000):1.22896300,(Lagenorhynchus_albirostris:8.71603900,((Lagenorhynchus_acutus:6.97518500,((Lissodelphis_borealis:1.36135800,Lissodelphis_peronii:1.36135800):3.90213700,((Cephalorhynchus_hectori:3.29116300,(Cephalorhynchus_commersonii:1.82123900,Cephalorhynchus_eutropia:1.82123900):1.46992500):1.27972900,(Lagenorhynchus_obscurus:3.79185300,(Lagenorhynchus_obliquidens:2.91977900,(Cephalorhynchus_heavisidii:2.09621900,(Lagenorhynchus_australis:1.57043300,Lagenorhynchus_cruciger:1.57043300):0.52578600):0.82356000):0.87207400):0.77904000):0.69260200):1.71169000):1.16787300,((Steno_bredanensis:5.89750600,(Sotalia_fluviatilis:3.21256100,Sotalia_guianensis:3.21256100):2.68494500):1.61688800,((Lagenodelphis_hosei:3.44536000,Stenella_longirostris:3.44536000):0.91035700,((Stenella_attenuata:3.07917000,(Tursiops_aduncus:2.19441300,Tursiops_truncatus:2.19441300):0.88475700):0.54694500,(Sousa_chinensis:2.83298000,(Stenella_clymene:1.93481200,((Stenella_coeruleoalba:1.01116300,Stenella_frontalis:1.01116300):0.49557400,(Delphinus_tropicalis:1.26811400,(Delphinus_capensis:0.92486200,Delphinus_delphis:0.92486200):0.34325200):0.23862300):0.42807500):0.89816800):0.79313500):0.72960200):3.15867700):0.62866400):0.57298100):0.72197400):1.26475700):7.23665700):8.06057400):5.62152800):0.76913200):1.40834300):2.05884100);
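The whale tree above is stored as a single Newick string, which is hard to inspect by eye. One quick sanity check: in a Newick file, the number of tips equals the number of commas plus one. A sketch using a tiny stand-in tree (tiny.tre is hypothetical, not part of this repository):

```shell
# Tiny four-tip Newick tree standing in for whaletree.txt
echo '((A:1.0,B:1.0):1.0,(C:1.0,D:1.0):1.0);' > tiny.tre
# Number of tips = number of commas + 1
tips=$(( $(tr -cd ',' < tiny.tre | wc -c) + 1 ))
echo "tips: $tips"
```

For tiny.tre this prints `tips: 4`; running the same pipeline on whaletree.txt reports the number of cetacean species in the tree.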
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=BBMap-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load BBMap/36.62-intel-2016.u3-Java-1.8.0_71
# See examples at:
# http://seqanswers.com/forums/showthread.php?t=58221
reformat.sh in=sample1.fq out=processed.fq
......
#!/bin/bash
#SBATCH --partition=cloud
#SBATCH --time=2:00:00
#SBATCH --ntasks=1
mkdir -p data/ref_genome
curl -L -o data/ref_genome/ecoli_rel606.fasta.gz ftp://ftp.ncbi.nlm.nih.gov/genomes/all/GCA/000/017/985/GCA_000017985.1_ASM1798v1/GCA_000017985.1_ASM1798v1_genomic.fna.gz
sleep 30
gunzip data/ref_genome/ecoli_rel606.fasta.gz
curl -L -o sub.tar.gz https://downloader.figshare.com/files/14418248
sleep 60
tar xvf sub.tar.gz
mv sub/ data/trimmed_fastq_small
mkdir -p results/sam results/bam results/bcf results/vcf
module load BWA/0.7.17-intel-2017.u2
# samtools is used below as well; load the site's SAMtools module (version not shown here)
module load SAMtools
bwa index data/ref_genome/ecoli_rel606.fasta
bwa mem data/ref_genome/ecoli_rel606.fasta data/trimmed_fastq_small/SRR2584866_1.trim.sub.fastq data/trimmed_fastq_small/SRR2584866_2.trim.sub.fastq > results/sam/SRR2584866.aligned.sam
samtools view -S -b results/sam/SRR2584866.aligned.sam > results/bam/SRR2584866.aligned.bam
samtools sort -o results/bam/SRR2584866.aligned.sorted.bam results/bam/SRR2584866.aligned.bam
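The script creates results/bcf and results/vcf but stops after sorting the BAM. A hedged sketch of the usual next steps (untested here; the bare `module load` names are assumptions — check `module avail` for the exact versions on your system):

```shell
# Untested continuation sketch: index the sorted BAM, then call variants
module load SAMtools    # exact module version depends on the site
module load BCFtools    # exact module version depends on the site
samtools index results/bam/SRR2584866.aligned.sorted.bam
# Pile up read coverage against the reference, writing a binary BCF
bcftools mpileup -O b -o results/bcf/SRR2584866_raw.bcf \
    -f data/ref_genome/ecoli_rel606.fasta \
    results/bam/SRR2584866.aligned.sorted.bam
# Call variants (haploid E. coli), keeping variant sites only
bcftools call --ploidy 1 -m -v \
    -o results/vcf/SRR2584866_variants.vcf results/bcf/SRR2584866_raw.bcf
```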
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
# Name and Partition
#SBATCH --job-name=CPMD-test.slurm
#SBATCH -p cloud
......
#!/bin/bash
#SBATCH --job-name=Cufflinks-test.slurm
#SBATCH -p cloud
# Multicore
#SBATCH --ntasks=2
#SBATCH --ntasks-per-node=2
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 1:00:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module purge
module load Cufflinks/2.2.1-GCC-4.9.2
# Set the Cufflinks environment
CUFFLINKS_OUTPUT="${PWD}"
cufflinks --quiet --num-threads $SLURM_NTASKS --output-dir $CUFFLINKS_OUTPUT sample.bam
"Cufflinks assembles transcripts, estimates their abundances, and tests for differential expression and regulation in RNA-Seq samples. It accepts aligned RNA-Seq reads and assembles the alignments into a parsimonious set of transcripts. Cufflinks then estimates the relative abundances of these transcripts based on how many reads support each one, taking into account biases in library preparation protocols."
Example BAM files from
http://hgdownload.cse.ucsc.edu/goldenPath/hg19/encodeDCC/wgEncodeUwRepliSeq/
#!/bin/bash
# Name and partition
#SBATCH --job-name=FFTW-test.slurm
#SBATCH -p cloud
# Run on single CPU. This can be run with MPI if you have a big problem.
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load FFTW/3.3.6-gompi-2017b
# Compile and execute
g++ fftw_example.c -o fftw_example -lfftw3 -lm
./fftw_example > results.txt
# Example from : https://github.com/undees/fftw-example
/* Start reading here */
#include <fftw3.h>
#define NUM_POINTS 64
/* Never mind this bit */
#include <stdio.h>
#include <math.h>
#define REAL 0
#define IMAG 1
void acquire_from_somewhere(fftw_complex* signal) {
/* Generate two sine waves of different frequencies and
* amplitudes.
*/
int i;
for (i = 0; i < NUM_POINTS; ++i) {
double theta = (double)i / (double)NUM_POINTS * M_PI;
signal[i][REAL] = 1.0 * cos(10.0 * theta) +
0.5 * cos(25.0 * theta);
signal[i][IMAG] = 1.0 * sin(10.0 * theta) +
0.5 * sin(25.0 * theta);
}
}
void do_something_with(fftw_complex* result) {
int i;
for (i = 0; i < NUM_POINTS; ++i) {
double mag = sqrt(result[i][REAL] * result[i][REAL] +
result[i][IMAG] * result[i][IMAG]);
printf("%g\n", mag);
}
}
/* Resume reading here */
int main() {
fftw_complex signal[NUM_POINTS];
fftw_complex result[NUM_POINTS];
fftw_plan plan = fftw_plan_dft_1d(NUM_POINTS,
signal,
result,
FFTW_FORWARD,
FFTW_ESTIMATE);
acquire_from_somewhere(signal);
fftw_execute(plan);
do_something_with(result);
fftw_destroy_plan(plan);
return 0;
}
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=FreePascal-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load fpc/3.0.4
fpc hello.pas
./hello > results.txt
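The script compiles hello.pas, which is not part of this commit; a minimal stand-in source file (an assumption, not the original sample) can be written inline:

```shell
# Hypothetical hello.pas; the real sample file is not shown in this commit
cat > hello.pas <<'EOF'
program Hello;
begin
  writeln('Hello, world!')
end.
EOF
```

With this file in place, `fpc hello.pas && ./hello > results.txt` behaves as in the script above.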
......
#SBATCH --partition gpgpu
#SBATCH --account=test # Use a project ID that has access.
#SBATCH --qos=gpgpu # Note that this qos may differ if you are from a non-UoM institution
#SBATCH --gres=gpu:1
# For example, if you wish to access up to four GPUs in a single job use:
......
#!/bin/bash
# To give your job a name, replace "MyJob" with an appropriate name
#SBATCH --job-name=HMMER-test.slurm
#SBATCH -p cloud
# One task, multi-threaded by default
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module purge
module load HMMER/3.2.1-foss-2017b
# Build a profile from a basic Stockholm alignment file
hmmbuild globins4.hmm globins4.sto
# Search a profile against a sequence database.
hmmsearch globins4.hmm globins45.fa > searchresults.txt
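hmmsearch scans every record in globins45.fa, and each FASTA record starts with '>', so a one-line grep counts the database size (the HMMER tutorial's globins45.fa should contain 45 globin sequences). Demonstrated on a tiny stand-in file:

```shell
# Tiny stand-in FASTA; on the cluster, run the grep against globins45.fa instead
cat > mini.fa <<'EOF'
>seq1
ACDEFGHIKLMNPQRSTVWY
>seq2
MVLSPADKTNVKAAWGKVGA
EOF
grep -c '^>' mini.fa
```

For mini.fa this prints 2.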
Programs part of HMMER
======================
hmmbuild build profile from input multiple alignment
hmmalign make multiple sequence alignment using a profile
hmmsearch search profile against sequence database
hmmscan search sequence against profile database
hmmpress prepare profile database for hmmscan
phmmer search single sequence against sequence database
jackhmmer iteratively search single sequence against database
nhmmer search DNA query against DNA sequence database
nhmmscan search DNA sequence against a DNA profile database
hmmfetch retrieve profile(s) from a profile file
hmmstat show summary statistics for a profile file
hmmemit generate (sample) sequences from a profile
hmmlogo produce a conservation logo graphic from a profile
hmmconvert convert between different profile file formats
hmmpgmd search daemon for the hmmer.org website
hmmpgmd_shard sharded search daemon for the hmmer.org website
makehmmerdb prepare an nhmmer binary database
hmmsim collect score distributions on random sequences
alimask add column mask to a multiple sequence alignment
# STOCKHOLM 1.0
HBB_HUMAN ........VHLTPEEKSAVTALWGKV....NVDEVGGEALGRLLVVYPWTQRFFESFGDLSTPDAVMGNPKVKAHGKKVL
HBA_HUMAN .........VLSPADKTNVKAAWGKVGA..HAGEYGAEALERMFLSFPTTKTYFPHF.DLS.....HGSAQVKGHGKKVA
MYG_PHYCA .........VLSEGEWQLVLHVWAKVEA..DVAGHGQDILIRLFKSHPETLEKFDRFKHLKTEAEMKASEDLKKHGVTVL
GLB5_PETMA PIVDTGSVAPLSAAEKTKIRSAWAPVYS..TYETSGVDILVKFFTSTPAAQEFFPKFKGLTTADQLKKSADVRWHAERII
HBB_HUMAN GAFSDGLAHL...D..NLKGTFATLSELHCDKL..HVDPENFRLLGNVLVCVLAHHFGKEFTPPVQAAYQKVVAGVANAL
HBA_HUMAN DALTNAVAHV...D..DMPNALSALSDLHAHKL..RVDPVNFKLLSHCLLVTLAAHLPAEFTPAVHASLDKFLASVSTVL
MYG_PHYCA TALGAILKK....K.GHHEAELKPLAQSHATKH..KIPIKYLEFISEAIIHVLHSRHPGDFGADAQGAMNKALELFRKDI
GLB5_PETMA NAVNDAVASM..DDTEKMSMKLRDLSGKHAKSF..QVDPQYFKVLAAVIADTVAAG.........DAGFEKLMSMICILL
HBB_HUMAN AHKYH......
HBA_HUMAN TSKYR......
MYG_PHYCA AAKYKELGYQG
GLB5_PETMA RSAY.......
//
>MYG_ESCGI
VLSDAEWQLVLNIWAKVEADVAGHGQDILIRLFKGHPETLEKFDKFKHLK
TEAEMKASEDLKKHGNTVLTALGGILKKKGHHEAELKPLAQSHATKHKIP
IKYLEFISDAIIHVLHSRHPGDFGADAQAAMNKALELFRKDIAAKYKELG
FQG
>MYG_HORSE
GLSDGEWQQVLNVWGKVEADIAGHGQEVLIRLFTGHPETLEKFDKFKHLK
TEAEMKASEDLKKHGTVVLTALGGILKKKGHHEAELKPLAQSHATKHKIP
IKYLEFISDAIIHVLHSKHPGNFGADAQGAMTKALELFRNDIAAKYKELG
FQG
>MYG_PROGU
GLSDGEWQLVLNVWGKVEGDLSGHGQEVLIRLFKGHPETLEKFDKFKHLK
AEDEMRASEELKKHGTTVLTALGGILKKKGQHAAELAPLAQSHATKHKIP
VKYLEFISEAIIQVLQSKHPGDFGADAQGAMSKALELFRNDIAAKYKELG
FQG
>MYG_SAISC
GLSDGEWQLVLNIWGKVEADIPSHGQEVLISLFKGHPETLEKFDKFKHLK
SEDEMKASEELKKHGTTVLTALGGILKKKGQHEAELKPLAQSHATKHKIP
VKYLELISDAIVHVLQKKHPGDFGADAQGAMKKALELFRNDMAAKYKELG
FQG
>MYG_LYCPI
GLSDGEWQIVLNIWGKVETDLAGHGQEVLIRLFKNHPETLDKFDKFKHLK
TEDEMKGSEDLKKHGNTVLTALGGILKKKGHHEAELKPLAQSHATKHKIP
VKYLEFISDAIIQVLQNKHSGDFHADTEAAMKKALELFRNDIAAKYKELG
FQG
>MYG_MOUSE
GLSDGEWQLVLNVWGKVEADLAGHGQEVLIGLFKTHPETLDKFDKFKNLK
SEEDMKGSEDLKKHGCTVLTALGTILKKKGQHAAEIQPLAQSHATKHKIP
VKYLEFISEIIIEVLKKRHSGDFGADAQGAMSKALELFRNDIAAKYKELG
FQG
>MYG_MUSAN
VDWEKVNSVWSAVESDLTAIGQNILLRLFEQYPESQNHFPKFKNKSLGEL
KDTADIKAQADTVLSALGNIVKKKGSHSQPVKALAATHITTHKIPPHYFT
KITTIAVDVLSEMYPSEMNAQVQAAFSGAFKIICSDIEKEYKAANFQG
>HBA_AILME
VLSPADKTNVKATWDKIGGHAGEYGGEALERTFASFPTTKTYFPHFDLSP
GSAQVKAHGKKVADALTTAVGHLDDLPGALSALSDLHAHKLRVDPVNFKL
LSHCLLVTLASHHPAEFTPAVHASLDKFFSAVSTVLTSKYR
>HBA_PROLO
VLSPADKANIKATWDKIGGHAGEYGGEALERTFASFPTTKTYFPHFDLSP
GSAQVKAHGKKVADALTLAVGHLDDLPGALSALSDLHAYKLRVDPVNFKL
LSHCLLVTLACHHPAEFTPAVHASLDKFFTSVSTVLTSKYR
>HBA_PAGLA
VLSSADKNNIKATWDKIGSHAGEYGAEALERTFISFPTTKTYFPHFDLSH
GSAQVKAHGKKVADALTLAVGHLEDLPNALSALSDLHAYKLRVDPVNFKL
LSHCLLVTLACHHPAEFTPAVHSALDKFFSAVSTVLTSKYR
>HBA_MACFA
VLSPADKTNVKAAWGKVGGHAGEYGAEALERMFLSFPTTKTYFPHFDLSH
GSAQVKGHGKKVADALTLAVGHVDDMPQALSALSDLHAHKLRVDPVNFKL
LSHCLLVTLAAHLPAEFTPAVHASLDKFLASVSTVLTSKYR
>HBA_MACSI
VLSPADKTNVKDAWGKVGGHAGEYGAEALERMFLSFPTTKTYFPHFDLSH
GSAQVKGHGKKVADALTLAVGHVDDMPQALSALSDLHAHKLRVDPVNFKL
LSHCLLVTLAAHLPAEFTPAVHASLDKFLASVSTVLTSKYR
>HBA_PONPY
VLSPADKTNVKTAWGKVGAHAGDYGAEALERMFLSFPTTKTYFPHFDLSH
GSAQVKDHGKKVADALTNAVAHVDDMPNALSALSDLHAHKLRVDPVNFKL
LSHCLLVTLAAHLPAEFTPAVHASLDKFLASVSTVLTSKYR
>HBA2_GALCR
VLSPTDKSNVKAAWEKVGAHAGDYGAEALERMFLSFPTTKTYFPHFDLSH
GSTQVKGHGKKVADALTNAVLHVDDMPSALSALSDLHAHKLRVDPVNFKL
LRHCLLVTLACHHPAEFTPAVHASLDKFMASVSTVLTSKYR
>HBA_MESAU
VLSAKDKTNISEAWGKIGGHAGEYGAEALERMFFVYPTTKTYFPHFDVSH
GSAQVKGHGKKVADALTNAVGHLDDLPGALSALSDLHAHKLRVDPVNFKL
LSHCLLVTLANHHPADFTPAVHASLDKFFASVSTVLTSKYR
>HBA2_BOSMU
VLSAADKGNVKAAWGKVGGHAAEYGAEALERMFLSFPTTKTYFPHFDLSH
GSAQVKGHGAKVAAALTKAVGHLDDLPGALSELSDLHAHKLRVDPVNFKL
LSHSLLVTLASHLPSDFTPAVHASLDKFLANVSTVLTSKYR
>HBA_ERIEU
VLSATDKANVKTFWGKLGGHGGEYGGEALDRMFQAHPTTKTYFPHFDLNP
GSAQVKGHGKKVADALTTAVNNLDDVPGALSALSDLHAHKLRVDPVNFKL
LSHCLLVTLALHHPADFTPAVHASLDKFLATVATVLTSKYR
>HBA_FRAPO
VLSAADKNNVKGIFGKISSHAEDYGAEALERMFITYPSTKTYFPHFDLSH
GSAQVKGHGKKVVAALIEAANHIDDIAGTLSKLSDLHAHKLRVDPVNFKL
LGQCFLVVVAIHHPSALTPEVHASLDKFLCAVGNVLTAKYR
>HBA_PHACO
VLSAADKNNVKGIFTKIAGHAEEYGAEALERMFITYPSTKTYFPHFDLSH
GSAQIKGHGKKVVAALIEAVNHIDDITGTLSKLSDLHAHKLRVDPVNFKL
LGQCFLVVVAIHHPSALTPEVHASLDKFLCAVGTVLTAKYR
>HBA_TRIOC
VLSANDKTNVKTVFTKITGHAEDYGAETLERMFITYPPTKTYFPHFDLHH
GSAQIKAHGKKVVGALIEAVNHIDDIAGALSKLSDLHAQKLRVDPVNFKL
LGQCFLVVVAIHHPSVLTPEVHASLDKFLCAVGNVLSAKYR
>HBA_ANSSE
VLSAADKGNVKTVFGKIGGHAEEYGAETLQRMFQTFPQTKTYFPHFDLQP
GSAQIKAHGKKVAAALVEAANHIDDIAGALSKLSDLHAQKLRVDPVNFKF
LGHCFLVVLAIHHPSLLTPEVHASMDKFLCAVATVLTAKYR
>HBA_COLLI
VLSANDKSNVKAVFAKIGGQAGDLGGEALERLFITYPQTKTYFPHFDLSH
GSAQIKGHGKKVAEALVEAANHIDDIAGALSKLSDLHAQKLRVDPVNFKL
LGHCFLVVVAVHFPSLLTPEVHASLDKFVLAVGTVLTAKYR
>HBAD_CHLME
MLTADDKKLLTQLWEKVAGHQEEFGSEALQRMFLTYPQTKTYFPHFDLHP
GSEQVRGHGKKVAAALGNAVKSLDNLSQALSELSNLHAYNLRVDPANFKL
LAQCFQVVLATHLGKDYSPEMHAAFDKFLSAVAAVLAEKYR
>HBAD_PASMO
MLTAEDKKLIQQIWGKLGGAEEEIGADALWRMFHSYPSTKTYFPHFDLSQ
GSDQIRGHGKKVVAALSNAIKNLDNLSQALSELSNLHAYNLRVDPVNFKF
LSQCLQVSLATRLGKEYSPEVHSAVDKFMSAVASVLAEKYR
>HBAZ_HORSE
SLTKAERTMVVSIWGKISMQADAVGTEALQRLFSSYPQTKTYFPHFDLHE
GSPQLRAHGSKVAAAVGDAVKSIDNVAGALAKLSELHAYILRVDPVNFKF
LSHCLLVTLASRLPADFTADAHAAWDKFLSIVSSVLTEKYR
>HBA4_SALIR
SLSAKDKANVKAIWGKILPKSDEIGEQALSRMLVVYPQTKAYFSHWASVA
PGSAPVKKHGITIMNQIDDCVGHMDDLFGFLTKLSELHATKLRVDPTNFK
ILAHNLIVVIAAYFPAEFTPEIHLSVDKFLQQLALALAEKYR
>HBB_ORNAN
VHLSGGEKSAVTNLWGKVNINELGGEALGRLLVVYPWTQRFFEAFGDLSS
AGAVMGNPKVKAHGAKVLTSFGDALKNLDDLKGTFAKLSELHCDKLHVDP
ENFNRLGNVLIVVLARHFSKDFSPEVQAAWQKLVSGVAHALGHKYH
>HBB_TACAC
VHLSGSEKTAVTNLWGHVNVNELGGEALGRLLVVYPWTQRFFESFGDLSS
ADAVMGNAKVKAHGAKVLTSFGDALKNLDNLKGTFAKLSELHCDKLHVDP
ENFNRLGNVLVVVLARHFSKEFTPEAQAAWQKLVSGVSHALAHKYH
>HBE_PONPY
VHFTAEEKAAVTSLWSKMNVEEAGGEALGRLLVVYPWTQRFFDSFGNLSS
PSAILGNPKVKAHGKKVLTSFGDAIKNMDNLKTTFAKLSELHCDKLHVDP
ENFKLLGNVMVIILATHFGKEFTPEVQAAWQKLVSAVAIALAHKYH
>HBB_SPECI
VHLSDGEKNAISTAWGKVHAAEVGAEALGRLLVVYPWTQRFFDSFGDLSS
ASAVMGNAKVKAHGKKVIDSFSNGLKHLDNLKGTFASLSELHCDKLHVDP
ENFKLLGNMIVIVMAHHLGKDFTPEAQAAFQKVVAGVANALAHKYH
>HBB_SPETO
VHLTDGEKNAISTAWGKVNAAEIGAEALGRLLVVYPWTQRFFDSFGDLSS
ASAVMGNAKVKAHGKKVIDSFSNGLKHLDNLKGTFASLSELHCDKLHVDP
ENFKLLGNMIVIVMAHHLGKDFTPEAQAAFQKVVAGVANALSHKYH
>HBB_EQUHE
VQLSGEEKAAVLALWDKVNEEEVGGEALGRLLVVYPWTQRFFDSFGDLSN
PAAVMGNPKVKAHGKKVLHSFGEGVHHLDNLKGTFAQLSELHCDKLHVDP
ENFRLLGNVLVVVLARHFGKDFTPELQASYQKVVAGVANALAHKYH
>HBB_SUNMU
VHLSGEEKACVTGLWGKVNEDEVGAEALGRLLVVYPWTQRFFDSFGDLSS
ASAVMGNPKVKAHGKKVLHSLGEGVANLDNLKGTFAKLSELHCDKLHVDP
ENFRLLGNVLVVVLASKFGKEFTPPVQAAFQKVVAGVANALAHKYH
>HBB_CALAR
VHLTGEEKSAVTALWGKVNVDEVGGEALGRLLVVYPWTQRFFESFGDLST
PDAVMNNPKVKAHGKKVLGAFSDGLTHLDNLKGTFAHLSELHCDKLHVDP
ENFRLLGNVLVCVLAHHFGKEFTPVVQAAYQKVVAGVANALAHKYH
>HBB_MANSP
VHLTPEEKTAVTTLWGKVNVDEVGGEALGRLLVVYPWTQRFFDSFGDLSS
PDAVMGNPKVKAHGKKVLGAFSDGLNHLDNLKGTFAQLSELHCDKLHVDP
ENFKLLGNVLVCVLAHHFGKEFTPQVQAAYQKVVAGVANALAHKYH
>HBB_URSMA
VHLTGEEKSLVTGLWGKVNVDEVGGEALGRLLVVYPWTQRFFDSFGDLSS
ADAIMNNPKVKAHGKKVLNSFSDGLKNLDNLKGTFAKLSELHCDKLHVDP
ENFKLLGNVLVCVLAHHFGKEFTPQVQAAYQKVVAGVANALAHKYH
>HBB_RABIT
VHLSSEEKSAVTALWGKVNVEEVGGEALGRLLVVYPWTQRFFESFGDLSS
ANAVMNNPKVKAHGKKVLAAFSEGLSHLDNLKGTFAKLSELHCDKLHVDP
ENFRLLGNVLVIVLSHHFGKEFTPQVQAAYQKVVAGVANALAHKYH
>HBB_TUPGL
VHLSGEEKAAVTGLWGKVDLEKVGGQSLGSLLIVYPWTQRFFDSFGDLSS
PSAVMSNPKVKAHGKKVLTSFSDGLNHLDNLKGTFAKLSELHCDKLHVDP
ENFRLLGNVLVRVLACNFGPEFTPQVQAAFQKVVAGVANALAHKYH
>HBB_TRIIN
VHLTPEEKALVIGLWAKVNVKEYGGEALGRLLVVYPWTQRFFEHFGDLSS
ASAIMNNPKVKAHGEKVFTSFGDGLKHLEDLKGAFAELSELHCDKLHVDP
ENFRLLGNVLVCVLARHFGKEFSPEAQAAYQKVVAGVANALAHKYH
>HBB_COLLI
VHWSAEEKQLITSIWGKVNVADCGAEALARLLIVYPWTQRFFSSFGNLSS
ATAISGNPNVKAHGKKVLTSFGDAVKNLDNIKGTFAQLSELHCDKLHVDP
ENFRLLGDILVIILAAHFGKDFTPECQAAWQKLVRVVAHALARKYH
>HBB_LARRI
VHWSAEEKQLITGLWGKVNVADCGAEALARLLIVYPWTQRFFASFGNLSS
PTAINGNPMVRAHGKKVLTSFGEAVKNLDNIKNTFAQLSELHCDKLHVDP
ENFRLLGDILIIVLAAHFAKDFTPDSQAAWQKLVRVVAHALARKYH
>HBB1_VAREX
VHWTAEEKQLICSLWGKIDVGLIGGETLAGLLVIYPWTQRQFSHFGNLSS
PTAIAGNPRVKAHGKKVLTSFGDAIKNLDNIKDTFAKLSELHCDKLHVDP
TNFKLLGNVLVIVLADHHGKEFTPAHHAAYQKLVNVVSHSLARRYH
>HBB2_XENTR
VHWTAEEKATIASVWGKVDIEQDGHDALSRLLVVYPWTQRYFSSFGNLSN
VSAVSGNVKVKAHGNKVLSAVGSAIQHLDDVKSHLKGLSKSHAEDLHVDP
ENFKRLADVLVIVLAAKLGSAFTPQVQAVWEKLNATLVAALSHGYF
>HBBL_RANCA
VHWTAEEKAVINSVWQKVDVEQDGHEALTRLFIVYPWTQRYFSTFGDLSS
PAAIAGNPKVHAHGKKILGAIDNAIHNLDDVKGTLHDLSEEHANELHVDP
ENFRRLGEVLIVVLGAKLGKAFSPQVQHVWEKFIAVLVDALSHSYH
>HBB2_TRICR
VHLTAEDRKEIAAILGKVNVDSLGGQCLARLIVVNPWSRRYFHDFGDLSS
CDAICRNPKVLAHGAKVMRSIVEATKHLDNLREYYADLSVTHSLKFYVDP
ENFKLFSGIVIVCLALTLQTDFSCHKQLAFEKLMKGVSHALGHGY
#!/bin/bash
# Partition and name
#SBATCH --job-name=HTSlib-test.slurm
#SBATCH -p physicaltest
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load HTSlib/1.9-intel-2018.u4
# Use tabix from HTSlib to index the position-sorted, bgzip-compressed GFF,
# then query the features overlapping a region of chr1
tabix -p gff sample.sorted.gff.gz
tabix sample.sorted.gff.gz chr1:10,000,000-20,000,000
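tabix can only index files that are coordinate-sorted and then bgzip-compressed (plain gzip will not work). A minimal sketch of preparing such a file — the GFF records below are made-up placeholders, and the bgzip/tabix steps are left as comments since they need the HTSlib module loaded:

```shell
# Create a tiny, deliberately unsorted GFF (placeholder data)
printf 'chr2\tsrc\tgene\t500\t900\t.\t+\t.\tID=g2\n'  > sample.gff
printf 'chr1\tsrc\tgene\t300\t700\t.\t+\t.\tID=g1b\n' >> sample.gff
printf 'chr1\tsrc\tgene\t100\t200\t.\t+\t.\tID=g1a\n' >> sample.gff

# tabix requires input sorted by sequence name, then start position
sort -k1,1 -k4,4n sample.gff > sample.sorted.gff

# With HTSlib loaded, compress and index (not run here):
# bgzip sample.sorted.gff             # -> sample.sorted.gff.gz
# tabix -p gff sample.sorted.gff.gz   # -> sample.sorted.gff.gz.tbi
```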
ssh <username>@spartan.hpc.unimelb.edu.au -X
sinteractive -p cloud --x11=first
xclock
# If you are running interactive jobs on GPU partitions you have to include the appropriate QOS commands or account.
sinteractive --x11=first --partition=shortgpgpu --gres=gpu:p100:1
sinteractive --x11=first --partition=deeplearn --qos=gpgpudeeplearn --gres=gpu:v100:1
sinteractive --partition=gpgpu --account=hpcadmingpgpu --gres=gpu:2
# If you are not using a Linux local machine, you will need to install an X Window client, such as Xming for MS Windows or XQuartz for macOS.
export http_proxy=http://wwwproxy.unimelb.edu.au:8000
export https_proxy=$http_proxy
export ftp_proxy=$http_proxy
# Or load the web_proxy module
module load web_proxy
# desktop builds for dual hex-core Xeons and Fermi GPUs
# use mpicxx or nvcc with its default compiler
# use default FFT support = KISS library
# build with one accelerator package
cpu: -d ../.. -j 16 -p none asphere molecule kspace rigid orig -o cpu file clean mpi
omp: -d ../.. -j 16 -p none asphere molecule kspace rigid omp orig -o omp file clean mpi
opt: -d ../.. -j 16 -p none asphere molecule kspace rigid opt orig -o opt file clean mpi
cuda_double: -d ../.. -j 16 -p none asphere molecule kspace rigid cuda orig -cuda mode=double arch=21 -o cuda_double lib-cuda file clean mpi
cuda_mixed: -d ../.. -j 16 -p none asphere molecule kspace rigid cuda orig -cuda mode=mixed arch=21 -o cuda_mixed lib-cuda file clean mpi
cuda_single: -d ../.. -j 16 -p none asphere molecule kspace rigid cuda orig -cuda mode=single arch=21 -o cuda_single lib-cuda file clean mpi
gpu_double: -d ../.. -j 16 -p none asphere molecule kspace rigid gpu orig -gpu mode=double arch=21 -o gpu_double lib-gpu file clean mpi
gpu_mixed: -d ../.. -j 16 -p none asphere molecule kspace rigid gpu orig -gpu mode=mixed arch=21 -o gpu_mixed lib-gpu file clean mpi
gpu_single: -d ../.. -j 16 -p none asphere molecule kspace rigid gpu orig -gpu mode=single arch=21 -o gpu_single lib-gpu file clean mpi
intel_cpu: -d ../.. -j 16 -p none asphere molecule kspace rigid intel omp orig -cc mpi wrap=icc -intel cpu -o intel_cpu file clean mpi
#intel_phi: -d ../.. -j 16 -p none asphere molecule kspace rigid intel omp orig -intel phi -o intel_phi file clean mpi
kokkos_omp: -d ../.. -j 16 -p none asphere molecule kspace rigid kokkos orig -kokkos omp -o kokkos_omp file clean mpi
kokkos_cuda: -d ../.. -j 16 -p none asphere molecule kspace rigid kokkos orig -cc nvcc wrap=mpi -kokkos cuda arch=21 -o kokkos_cuda file clean mpi
#kokkos_phi: -d ../.. -j 16 -p none asphere molecule kspace rigid kokkos orig -kokkos phi -o kokkos_phi file clean mpi
# build with all accelerator packages for CPU
all_cpu: -d ../.. -j 16 -p asphere molecule kspace rigid none opt omp intel kokkos orig -cc mpi wrap=icc -intel cpu -kokkos omp -o all_cpu file clean mpi
# build with all accelerator packages for GPU
all_gpu: -d ../.. -j 16 -p none asphere molecule kspace rigid omp gpu cuda kokkos orig -cc nvcc wrap=mpi -cuda mode=double arch=21 -gpu mode=double arch=21 -kokkos cuda arch=21 -o all_gpu lib-all file clean mpi
These are example scripts that can be run with any of
the accelerator packages in LAMMPS:
GPU, USER-INTEL, KOKKOS, USER-OMP, OPT
The easiest way to build LAMMPS with these packages
is via the flags described in Section 4 of the manual.
The easiest way to run these scripts is by using the appropriate run commands listed below.
Details on the individual accelerator packages
can be found in doc/Section_accelerate.html.
---------------------
Build LAMMPS with one or more of the accelerator packages
Note that in addition to any accelerator packages, these packages also
need to be installed to run all of the example scripts: ASPHERE,
MOLECULE, KSPACE, RIGID.
These two targets will build a single LAMMPS executable with all the
CPU accelerator packages installed (USER-INTEL for CPU, KOKKOS for
OMP, USER-OMP, OPT) or all the GPU accelerator packages installed
(GPU, KOKKOS for CUDA):
For any build with GPU, or KOKKOS for CUDA, be sure to set
the arch=XX setting to the appropriate value for the GPUs and Cuda
environment on your system.
---------------------
Running with each of the accelerator packages
All of the input scripts have a default problem size and number of
timesteps:
in.lj = LJ melt with cutoff of 2.5 = 32K atoms for 100 steps
in.lj.5.0 = same with cutoff of 5.0 = 32K atoms for 100 steps
in.phosphate = 11K atoms for 100 steps
in.rhodo = 32K atoms for 100 steps
in.lc = 33K atoms for 100 steps (after 200 steps equilibration)
These can be reset using the x,y,z and t variables on the command
line. E.g. adding "-v x 2 -v y 2 -v z 4 -v t 1000" to any of the run
commands below would run a 16x larger problem (2x2x4) for 1000 steps.
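As a sanity check on those sizes: in.lj builds an fcc lattice spanning (20x) by (20y) by (20z) unit cells with 4 atoms per cell, so the default run is 32K atoms and the 2x2x4 case is 16x larger. A small arithmetic sketch (not part of the benchmark scripts themselves):

```python
# Atom count for the in.lj melt: an fcc lattice has 4 atoms per unit cell,
# and the box spans 20*x by 20*y by 20*z cells (x, y, z as in the scripts).
def lj_melt_atoms(x=1, y=1, z=1):
    return 4 * (20 * x) * (20 * y) * (20 * z)

print(lj_melt_atoms())         # 32000, matching "Created 32000 atoms" in the logs
print(lj_melt_atoms(2, 2, 4))  # 512000, i.e. 16x the default problem
```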
Here are example run commands using each of the accelerator packages:
** CPU only
lmp_cpu < in.lj
mpirun -np 4 lmp_cpu -in in.lj
** OPT package
lmp_opt -sf opt < in.lj
mpirun -np 4 lmp_opt -sf opt -in in.lj
** USER-OMP package
lmp_omp -sf omp -pk omp 1 < in.lj
mpirun -np 4 lmp_omp -sf omp -pk omp 1 -in in.lj # 4 MPI, 1 thread/MPI
mpirun -np 2 lmp_omp -sf omp -pk omp 4 -in in.lj # 2 MPI, 4 thread/MPI
** GPU package
lmp_gpu_double -sf gpu < in.lj
mpirun -np 8 lmp_gpu_double -sf gpu < in.lj # 8 MPI, 8 MPI/GPU
mpirun -np 12 lmp_gpu_double -sf gpu -pk gpu 2 < in.lj # 12 MPI, 6 MPI/GPU
mpirun -np 4 lmp_gpu_double -sf gpu -pk gpu 2 tpa 8 < in.lj.5.0 # 4 MPI, 2 MPI/GPU
Note that when running in.lj.5.0 (which has a long cutoff) with the
GPU package, the "-pk tpa" setting should be > 1 (e.g. 8) for best
performance.
** KOKKOS package for OMP
lmp_kokkos_omp -k on t 1 -sf kk -pk kokkos neigh half < in.lj
mpirun -np 2 lmp_kokkos_omp -k on t 4 -sf kk < in.lj # 2 MPI, 4 thread/MPI
Note that when running with just 1 thread/MPI, "-pk kokkos neigh half"
was specified to use half neighbor lists which are faster when running
on just 1 thread.
** KOKKOS package for CUDA
lmp_kokkos_cuda -k on t 1 -sf kk < in.lj # 1 thread, 1 GPU
mpirun -np 2 lmp_kokkos_cuda -k on t 6 g 2 -sf kk < in.lj # 2 MPI, 6 thread/MPI, 1 MPI/GPU
** KOKKOS package for PHI
mpirun -np 1 lmp_kokkos_phi -k on t 240 -sf kk -in in.lj # 1 MPI, 240 threads/MPI
mpirun -np 30 lmp_kokkos_phi -k on t 8 -sf kk -in in.lj # 30 MPI, 8 threads/MPI
** USER-INTEL package for CPU
lmp_intel_cpu -sf intel < in.lj
mpirun -np 4 lmp_intel_cpu -sf intel < in.lj # 4 MPI
mpirun -np 4 lmp_intel_cpu -sf intel -pk omp 2 < in.lj # 4 MPI, 2 thread/MPI
** USER-INTEL package for PHI
lmp_intel_phi -sf intel -pk intel 1 omp 16 < in.lc # 1 MPI, 16 CPU thread/MPI, 1 Phi, 240 Phi thread/MPI
mpirun -np 4 lmp_intel_phi -sf intel -pk intel 1 omp 2 < in.lc # 4 MPI, 2 CPU threads/MPI, 1 Phi, 60 Phi thread/MPI
Note that there is currently no Phi support for pair_style lj/cut in
the USER-INTEL package.
# Gay-Berne benchmark
# biaxial ellipsoid mesogens in isotropic phase
# shape: 2 1.5 1
# cutoff 4.0 with skin 0.8
# NPT, T=2.4, P=8.0
variable x index 1
variable y index 1
variable z index 1
variable t index 100
variable i equal $x*32
variable j equal $y*32
variable k equal $z*32
units lj
atom_style ellipsoid
# create lattice of ellipsoids
lattice sc 0.22
region box block 0 $i 0 $j 0 $k
create_box 1 box
create_atoms 1 box
set type 1 mass 1.5
set type 1 shape 1 1.5 2
set group all quat/random 982381
compute rot all temp/asphere
group spheroid type 1
variable dof equal count(spheroid)+3
compute_modify rot extra/dof ${dof}
velocity all create 2.4 41787 loop geom
pair_style gayberne 1.0 3.0 1.0 4.0
pair_coeff 1 1 1.0 1.0 1.0 0.5 0.2 1.0 0.5 0.2
neighbor 0.8 bin
timestep 0.002
thermo 100
# equilibration run
fix 1 all npt/asphere temp 2.4 2.4 0.1 iso 5.0 8.0 0.1
compute_modify 1_temp extra/dof ${dof}
run 200
# dynamics run
reset_timestep 0
unfix 1
fix 1 all nve/asphere
run $t
# 3d Lennard-Jones melt
variable x index 1
variable y index 1
variable z index 1
variable t index 100
variable xx equal 20*$x
variable yy equal 20*$y
variable zz equal 20*$z
units lj
atom_style atomic
lattice fcc 0.8442
region box block 0 ${xx} 0 ${yy} 0 ${zz}
create_box 1 box
create_atoms 1 box
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
thermo 100
run $t
# 3d Lennard-Jones melt
variable x index 1
variable y index 1
variable z index 1
variable t index 100
variable xx equal 20*$x
variable yy equal 20*$y
variable zz equal 20*$z
units lj
atom_style atomic
lattice fcc 0.8442
region box block 0 ${xx} 0 ${yy} 0 ${zz}
create_box 1 box
create_atoms 1 box
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 5.0
pair_coeff 1 1 1.0 1.0
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
thermo 100
run $t
# GI-System
variable x index 1
variable y index 1
variable z index 1
variable t index 100
units metal
atom_style charge
read_data data.phosphate
replicate $x $y $z
pair_style lj/cut/coul/long 15.0
pair_coeff 1 1 0.0 0.29
pair_coeff 1 2 0.0 0.29
pair_coeff 1 3 0.000668 2.5738064
pair_coeff 2 2 0.0 0.29
pair_coeff 2 3 0.004251 1.91988674
pair_coeff 3 3 0.012185 2.91706967
kspace_style pppm 1e-5
neighbor 2.0 bin
thermo 100
timestep 0.001
fix 1 all npt temp 400 400 0.01 iso 1000.0 1000.0 1.0
run $t
# Rhodopsin model
variable x index 1
variable y index 1
variable z index 1
variable t index 100
units real
neigh_modify delay 5 every 1
atom_style full
bond_style harmonic
angle_style charmm
dihedral_style charmm
improper_style harmonic
pair_style lj/charmm/coul/long 8.0 10.0
pair_modify mix arithmetic
kspace_style pppm 1e-4
read_data ../../bench/data.rhodo
replicate $x $y $z
fix 1 all shake 0.0001 5 0 m 1.0 a 232
fix 2 all npt temp 300.0 300.0 100.0 &
z 0.0 0.0 1000.0 mtk no pchain 0 tchain 1
special_bonds charmm
thermo 100
thermo_style multi
timestep 2.0
run $t
This LAMMPS simulation made specific use of work described in the
following references. See http://lammps.sandia.gov/cite.html
for details.
GPU package (short-range, long-range and three-body potentials):
@Article{Brown11,
author = {W. M. Brown and P. Wang and S. J. Plimpton and A. N. Tharrington},
title = {Implementing Molecular Dynamics on Hybrid High Performance Computers - Short Range Forces},
journal = {Comp.~Phys.~Comm.},
year = 2011,
volume = 182,
pages = {898--911}
}
@Article{Brown12,
author = {W. M. Brown and A. Kohlmeyer and S. J. Plimpton and A. N. Tharrington},
title = {Implementing Molecular Dynamics on Hybrid High Performance Computers - Particle-Particle Particle-Mesh},
journal = {Comp.~Phys.~Comm.},
year = 2012,
volume = 183,
pages = {449--459}
}
@Article{Brown13,
author = {W. M. Brown and Y. Masako},
title = {Implementing Molecular Dynamics on Hybrid High Performance Computers - Three-Body Potentials},
journal = {Comp.~Phys.~Comm.},
year = 2013,
volume = 184,
pages = {2785--2793}
}
LAMMPS (11 Aug 2017)
package gpu 1
package gpu 2
# 3d Lennard-Jones melt
variable x index 1
variable y index 1
variable z index 1
variable t index 100
variable xx equal 20*$x
variable xx equal 20*1
variable yy equal 20*$y
variable yy equal 20*1
variable zz equal 20*$z
variable zz equal 20*1
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 ${xx} 0 ${yy} 0 ${zz}
region box block 0 20 0 ${yy} 0 ${zz}
region box block 0 20 0 20 0 ${zz}
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (33.5919 33.5919 33.5919)
1 by 1 by 2 MPI processor grid
create_atoms 1 box
Created 32000 atoms
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
thermo 100
run $t
run 100
Per MPI rank memory allocation (min/avg/max) = 4.811 | 4.811 | 4.811 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733682 0 -4.6134357 -5.0197069
100 0.75745332 -5.7585059 0 -4.6223615 0.20726081
Loop time of 0.0620875 on 2 procs for 100 steps with 32000 atoms
Performance: 695791.827 tau/day, 1610.629 timesteps/s
91.4% CPU use with 2 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.022442 | 0.022491 | 0.022539 | 0.0 | 36.22
Neigh | 9.5367e-07 | 1.9073e-06 | 2.861e-06 | 0.0 | 0.00
Comm | 0.019505 | 0.0196 | 0.019695 | 0.1 | 31.57
Output | 4.7922e-05 | 5.4002e-05 | 6.0081e-05 | 0.0 | 0.09
Modify | 0.015807 | 0.015847 | 0.015887 | 0.0 | 25.52
Other | | 0.004095 | | | 6.59
Nlocal: 16000 ave 16001 max 15999 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Nghost: 13632.5 ave 13635 max 13630 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Neighs: 0 ave 0 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 0
Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 5
Dangerous builds not checked
Please see the log.cite file for references relevant to this simulation
Total wall time: 0:00:00
LAMMPS (1 Feb 2014)
# 3d Lennard-Jones melt
newton off
package gpu force/neigh 0 1 1
variable x index 2
variable y index 2
variable z index 2
variable xx equal 20*$x
variable xx equal 20*2
variable yy equal 20*$y
variable yy equal 20*2
variable zz equal 20*$z
variable zz equal 20*2
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 ${xx} 0 ${yy} 0 ${zz}
region box block 0 40 0 ${yy} 0 ${zz}
region box block 0 40 0 40 0 ${zz}
region box block 0 40 0 40 0 40
create_box 1 box
Created orthogonal box = (0 0 0) to (67.1838 67.1838 67.1838)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 256000 atoms
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut/gpu 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
thermo 100
run 1000
Memory usage per processor = 46.8462 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733683 0 -4.6133768 -5.0196737
100 0.75865617 -5.760326 0 -4.6223462 0.19586079
200 0.75643086 -5.7572859 0 -4.6226441 0.22641241
300 0.74927423 -5.7463997 0 -4.6224927 0.29737707
400 0.74049393 -5.7329259 0 -4.6221893 0.3776681
500 0.73092107 -5.7182622 0 -4.6218849 0.46900655
600 0.72320925 -5.7064076 0 -4.6215979 0.53444495
700 0.71560947 -5.6946702 0 -4.6212602 0.59905402
800 0.71306623 -5.6906095 0 -4.6210143 0.62859381
900 0.70675364 -5.6807352 0 -4.6206089 0.68471945
1000 0.7044073 -5.6771664 0 -4.6205596 0.70033364
Loop time of 21.016 on 1 procs for 1000 steps with 256000 atoms
Pair time (%) = 13.4638 (64.0646)
Neigh time (%) = 6.74725e-05 (0.000321052)
Comm time (%) = 1.09447 (5.20779)
Outpt time (%) = 0.0103211 (0.0491108)
Other time (%) = 6.44732 (30.6781)
Nlocal: 256000 ave 256000 max 256000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 69917 ave 69917 max 69917 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 50
Dangerous builds = 0
Please see the log.cite file for references relevant to this simulation
LAMMPS (1 Feb 2014)
# 3d Lennard-Jones melt
newton off
package gpu force/neigh 0 1 1
variable x index 2
variable y index 2
variable z index 2
variable xx equal 20*$x
variable xx equal 20*2
variable yy equal 20*$y
variable yy equal 20*2
variable zz equal 20*$z
variable zz equal 20*2
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 ${xx} 0 ${yy} 0 ${zz}
region box block 0 40 0 ${yy} 0 ${zz}
region box block 0 40 0 40 0 ${zz}
region box block 0 40 0 40 0 40
create_box 1 box
Created orthogonal box = (0 0 0) to (67.1838 67.1838 67.1838)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 256000 atoms
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut/gpu 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
thermo 100
run 1000
Memory usage per processor = 14.5208 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733683 0 -4.6133768 -5.0196737
100 0.75865617 -5.760326 0 -4.6223462 0.19586079
200 0.75643087 -5.7572859 0 -4.6226441 0.2264124
300 0.74927423 -5.7463997 0 -4.6224927 0.29737713
400 0.7404939 -5.7329258 0 -4.6221893 0.37766836
500 0.73092104 -5.7182626 0 -4.6218853 0.46900587
600 0.72320865 -5.7064076 0 -4.6215989 0.53444677
700 0.71560468 -5.6946635 0 -4.6212607 0.59907258
800 0.7130474 -5.6905859 0 -4.621019 0.62875333
900 0.70683795 -5.680864 0 -4.6206112 0.6839564
1000 0.70454326 -5.6773491 0 -4.6205384 0.69975744
Loop time of 8.72938 on 4 procs for 1000 steps with 256000 atoms
Pair time (%) = 5.30046 (60.7198)
Neigh time (%) = 5.78761e-05 (0.000663004)
Comm time (%) = 1.62433 (18.6076)
Outpt time (%) = 0.0129588 (0.14845)
Other time (%) = 1.79157 (20.5235)
Nlocal: 64000 ave 64066 max 63924 min
Histogram: 1 0 1 0 0 0 0 0 0 2
Nghost: 30535 ave 30559 max 30518 min
Histogram: 1 0 1 0 1 0 0 0 0 1
Neighs: 0 ave 0 max 0 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 50
Dangerous builds = 0
Please see the log.cite file for references relevant to this simulation
LAMMPS (27 May 2014)
KOKKOS mode is enabled (../lammps.cpp:468)
using 6 OpenMP thread(s) per MPI task
# 3d Lennard-Jones melt
variable x index 1
variable y index 1
variable z index 1
variable xx equal 20*$x
variable xx equal 20*1
variable yy equal 20*$y
variable yy equal 20*1
variable zz equal 20*$z
variable zz equal 20*1
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 ${xx} 0 ${yy} 0 ${zz}
region box block 0 20 0 ${yy} 0 ${zz}
region box block 0 20 0 20 0 ${zz}
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (33.5919 33.5919 33.5919)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 32000 atoms
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
run 100
Memory usage per processor = 16.9509 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6134356 -5.0197073
100 0.7574531 -5.7585055 0 -4.6223613 0.20726105
Loop time of 0.57192 on 6 procs (1 MPI x 6 OpenMP) for 100 steps with 32000 atoms
Pair time (%) = 0.205416 (35.917)
Neigh time (%) = 0.112468 (19.665)
Comm time (%) = 0.174223 (30.4629)
Outpt time (%) = 0.000159025 (0.0278055)
Other time (%) = 0.0796535 (13.9274)
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 19657 ave 19657 max 19657 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 2.40567e+06 ave 2.40567e+06 max 2.40567e+06 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 2405666
Ave neighs/atom = 75.1771
Neighbor list builds = 5
Dangerous builds = 0
LAMMPS (27 May 2014)
KOKKOS mode is enabled (../lammps.cpp:468)
using 6 OpenMP thread(s) per MPI task
# 3d Lennard-Jones melt
variable x index 1
variable y index 1
variable z index 1
variable xx equal 20*$x
variable xx equal 20*1
variable yy equal 20*$y
variable yy equal 20*1
variable zz equal 20*$z
variable zz equal 20*1
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 ${xx} 0 ${yy} 0 ${zz}
region box block 0 20 0 ${yy} 0 ${zz}
region box block 0 20 0 20 0 ${zz}
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (33.5919 33.5919 33.5919)
1 by 1 by 2 MPI processor grid
create_atoms 1 box
Created 32000 atoms
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
run 100
Memory usage per processor = 8.95027 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6134356 -5.0197073
100 0.7574531 -5.7585055 0 -4.6223613 0.20726105
Loop time of 0.689608 on 12 procs (2 MPI x 6 OpenMP) for 100 steps with 32000 atoms
Pair time (%) = 0.210953 (30.5903)
Neigh time (%) = 0.122991 (17.8349)
Comm time (%) = 0.25264 (36.6353)
Outpt time (%) = 0.000259042 (0.0375636)
Other time (%) = 0.102765 (14.9019)
Nlocal: 16000 ave 16001 max 15999 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Nghost: 13632.5 ave 13635 max 13630 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Neighs: 0 ave 0 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 0
FullNghs: 1.20283e+06 ave 1.20347e+06 max 1.2022e+06 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Total # of neighbors = 2405666
Ave neighs/atom = 75.1771
Neighbor list builds = 5
Dangerous builds = 0
LAMMPS (27 May 2014)
KOKKOS mode is enabled (../lammps.cpp:468)
using 1 OpenMP thread(s) per MPI task
# 3d Lennard-Jones melt
variable x index 1
variable y index 1
variable z index 1
variable xx equal 20*$x
variable xx equal 20*1
variable yy equal 20*$y
variable yy equal 20*1
variable zz equal 20*$z
variable zz equal 20*1
package kokkos neigh half
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 ${xx} 0 ${yy} 0 ${zz}
region box block 0 20 0 ${yy} 0 ${zz}
region box block 0 20 0 20 0 ${zz}
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (33.5919 33.5919 33.5919)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 32000 atoms
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
run 100
Memory usage per processor = 7.79551 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6134356 -5.0197073
100 0.7574531 -5.7585055 0 -4.6223613 0.20726105
Loop time of 2.29105 on 1 procs (1 MPI x 1 OpenMP) for 100 steps with 32000 atoms
Pair time (%) = 1.82425 (79.6249)
Neigh time (%) = 0.338632 (14.7806)
Comm time (%) = 0.0366232 (1.59853)
Outpt time (%) = 0.000144005 (0.00628553)
Other time (%) = 0.0914049 (3.98965)
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 19657 ave 19657 max 19657 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 1.20283e+06 ave 1.20283e+06 max 1.20283e+06 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 1202833
Ave neighs/atom = 37.5885
Neighbor list builds = 5
Dangerous builds = 0
LAMMPS (27 May 2014)
KOKKOS mode is enabled (../lammps.cpp:468)
using 4 OpenMP thread(s) per MPI task
# 3d Lennard-Jones melt
variable x index 1
variable y index 1
variable z index 1
variable xx equal 20*$x
variable xx equal 20*1
variable yy equal 20*$y
variable yy equal 20*1
variable zz equal 20*$z
variable zz equal 20*1
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 ${xx} 0 ${yy} 0 ${zz}
region box block 0 20 0 ${yy} 0 ${zz}
region box block 0 20 0 20 0 ${zz}
region box block 0 20 0 20 0 20
create_box 1 box
Created orthogonal box = (0 0 0) to (33.5919 33.5919 33.5919)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 32000 atoms
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
run 100
Memory usage per processor = 13.2888 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733681 0 -4.6134356 -5.0197073
100 0.7574531 -5.7585055 0 -4.6223613 0.20726105
Loop time of 0.983697 on 4 procs (1 MPI x 4 OpenMP) for 100 steps with 32000 atoms
Pair time (%) = 0.767155 (77.9869)
Neigh time (%) = 0.14734 (14.9782)
Comm time (%) = 0.041466 (4.21532)
Outpt time (%) = 0.000172138 (0.0174991)
Other time (%) = 0.0275636 (2.80204)
Nlocal: 32000 ave 32000 max 32000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 19657 ave 19657 max 19657 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
FullNghs: 2.40567e+06 ave 2.40567e+06 max 2.40567e+06 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 2405666
Ave neighs/atom = 75.1771
Neighbor list builds = 5
Dangerous builds = 0
LAMMPS (1 Feb 2014)
# 3d Lennard-Jones melt
newton off
package gpu force/neigh 0 1 1 threads_per_atom 8
variable x index 2
variable y index 2
variable z index 2
variable xx equal 20*$x
variable xx equal 20*2
variable yy equal 20*$y
variable yy equal 20*2
variable zz equal 20*$z
variable zz equal 20*2
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 ${xx} 0 ${yy} 0 ${zz}
region box block 0 40 0 ${yy} 0 ${zz}
region box block 0 40 0 40 0 ${zz}
region box block 0 40 0 40 0 40
create_box 1 box
Created orthogonal box = (0 0 0) to (67.1838 67.1838 67.1838)
1 by 1 by 1 MPI processor grid
create_atoms 1 box
Created 256000 atoms
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut/gpu 5.0
pair_coeff 1 1 1.0 1.0 5.0
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
thermo 100
run 1000
Memory usage per processor = 58.5717 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -7.1616931 0 -5.0017016 -5.6743465
100 0.75998441 -6.1430228 0 -5.0030506 -0.43702263
200 0.75772859 -6.1397321 0 -5.0031437 -0.40563811
300 0.75030002 -6.1286578 0 -5.0032122 -0.33104717
400 0.73999054 -6.1132463 0 -5.0032649 -0.24001424
500 0.73224838 -6.1016938 0 -5.0033255 -0.16524979
600 0.72455889 -6.0902001 0 -5.003366 -0.099949772
700 0.71911385 -6.0820798 0 -5.0034133 -0.046759186
800 0.71253787 -6.0722342 0 -5.0034316 0.0019671065
900 0.70835425 -6.0659819 0 -5.0034546 0.037482543
1000 0.70648171 -6.0631852 0 -5.0034668 0.057159495
Loop time of 53.1575 on 1 procs for 1000 steps with 256000 atoms
Pair time (%) = 45.4859 (85.5682)
Neigh time (%) = 7.9155e-05 (0.000148907)
Comm time (%) = 1.40304 (2.63941)
Outpt time (%) = 0.00999498 (0.0188026)
Other time (%) = 6.25847 (11.7734)
Nlocal: 256000 ave 256000 max 256000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 141542 ave 141542 max 141542 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 50
Dangerous builds = 0
Please see the log.cite file for references relevant to this simulation
LAMMPS (1 Feb 2014)
# 3d Lennard-Jones melt
newton off
package gpu force/neigh 0 1 1 threads_per_atom 8
variable x index 2
variable y index 2
variable z index 2
variable xx equal 20*$x
variable xx equal 20*2
variable yy equal 20*$y
variable yy equal 20*2
variable zz equal 20*$z
variable zz equal 20*2
units lj
atom_style atomic
lattice fcc 0.8442
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
region box block 0 ${xx} 0 ${yy} 0 ${zz}
region box block 0 40 0 ${yy} 0 ${zz}
region box block 0 40 0 40 0 ${zz}
region box block 0 40 0 40 0 40
create_box 1 box
Created orthogonal box = (0 0 0) to (67.1838 67.1838 67.1838)
1 by 2 by 2 MPI processor grid
create_atoms 1 box
Created 256000 atoms
mass 1 1.0
velocity all create 1.44 87287 loop geom
pair_style lj/cut/gpu 5.0
pair_coeff 1 1 1.0 1.0 5.0
neighbor 0.3 bin
neigh_modify delay 0 every 20 check no
fix 1 all nve
thermo 100
run 1000
Memory usage per processor = 20.382 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -7.1616931 0 -5.0017016 -5.6743465
100 0.75998441 -6.1430228 0 -5.0030506 -0.43702263
200 0.75772859 -6.1397321 0 -5.0031437 -0.40563811
300 0.75030002 -6.1286578 0 -5.0032122 -0.33104718
400 0.73999055 -6.1132463 0 -5.0032649 -0.24001425
500 0.73224835 -6.1016938 0 -5.0033256 -0.16524973
600 0.72455878 -6.0902 0 -5.0033661 -0.099949172
700 0.71911606 -6.0820833 0 -5.0034134 -0.046771469
800 0.71253754 -6.0722337 0 -5.0034316 0.0019725827
900 0.70832904 -6.0659437 0 -5.0034543 0.03758241
1000 0.70634002 -6.062973 0 -5.0034671 0.057951142
Loop time of 26.0448 on 4 procs for 1000 steps with 256000 atoms
Pair time (%) = 18.6673 (71.674)
Neigh time (%) = 6.55651e-05 (0.00025174)
Comm time (%) = 5.797 (22.2578)
Outpt time (%) = 0.0719919 (0.276416)
Other time (%) = 1.50839 (5.79152)
Nlocal: 64000 ave 64092 max 63823 min
Histogram: 1 0 0 0 0 0 1 0 0 2
Nghost: 64384.2 ave 64490 max 64211 min
Histogram: 1 0 0 0 0 0 1 0 1 1
Neighs: 0 ave 0 max 0 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 50
Dangerous builds = 0
Please see the log.cite file for references relevant to this simulation
LAMMPS (1 Feb 2014)
# GI-System
units metal
newton off
package gpu force/neigh 0 1 1
atom_style charge
read_data data.phosphate
orthogonal box = (33.0201 33.0201 33.0201) to (86.9799 86.9799 86.9799)
1 by 1 by 1 MPI processor grid
reading atoms ...
10950 atoms
reading velocities ...
10950 velocities
replicate 3 3 3
orthogonal box = (33.0201 33.0201 33.0201) to (194.899 194.899 194.899)
1 by 1 by 1 MPI processor grid
295650 atoms
pair_style lj/cut/coul/long/gpu 15.0
pair_coeff 1 1 0.0 0.29
pair_coeff 1 2 0.0 0.29
pair_coeff 1 3 0.000668 2.5738064
pair_coeff 2 2 0.0 0.29
pair_coeff 2 3 0.004251 1.91988674
pair_coeff 3 3 0.012185 2.91706967
kspace_style pppm/gpu 1e-5
neighbor 2.0 bin
thermo 100
timestep 0.001
fix 1 all npt temp 400 400 0.01 iso 1000.0 1000.0 1.0
run 200
PPPM initialization ...
G vector (1/distance) = 0.210051
grid = 108 108 108
stencil order = 5
estimated absolute RMS force accuracy = 0.000178801
estimated relative force accuracy = 1.24171e-05
using double precision FFTs
3d grid and FFT values/proc = 1520875 1259712
Memory usage per processor = 266.927 Mbytes
Step Temp E_pair E_mol TotEng Press Volume
0 400.30257 -2381941.6 0 -2366643.8 -449.96842 4242016.4
100 411.69681 -2392428.5 0 -2376695.3 7046.698 4308883.5
200 401.28392 -2394152.5 0 -2378817.2 3243.2685 4334284.4
Loop time of 154.943 on 1 procs for 200 steps with 295650 atoms
Pair time (%) = 12.0178 (7.75625)
Kspce time (%) = 80.3771 (51.8753)
Neigh time (%) = 0.0138304 (0.00892614)
Comm time (%) = 0.348981 (0.225232)
Outpt time (%) = 0.00180006 (0.00116176)
Other time (%) = 62.1834 (40.1331)
FFT time (% of Kspce) = 56.9885 (70.9013)
FFT Gflps 3d (1d only) = 1.24196 3.00739
Nlocal: 295650 ave 295650 max 295650 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 226982 ave 226982 max 226982 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 6
Dangerous builds = 0
unfix 1
Please see the log.cite file for references relevant to this simulation
LAMMPS (1 Feb 2014)
# GI-System
units metal
newton off
package gpu force/neigh 0 1 1
atom_style charge
read_data data.phosphate
orthogonal box = (33.0201 33.0201 33.0201) to (86.9799 86.9799 86.9799)
1 by 2 by 2 MPI processor grid
reading atoms ...
10950 atoms
reading velocities ...
10950 velocities
replicate 3 3 3
orthogonal box = (33.0201 33.0201 33.0201) to (194.899 194.899 194.899)
2 by 1 by 2 MPI processor grid
295650 atoms
pair_style lj/cut/coul/long/gpu 15.0
pair_coeff 1 1 0.0 0.29
pair_coeff 1 2 0.0 0.29
pair_coeff 1 3 0.000668 2.5738064
pair_coeff 2 2 0.0 0.29
pair_coeff 2 3 0.004251 1.91988674
pair_coeff 3 3 0.012185 2.91706967
kspace_style pppm/gpu 1e-5
neighbor 2.0 bin
thermo 100
timestep 0.001
fix 1 all npt temp 400 400 0.01 iso 1000.0 1000.0 1.0
run 200
PPPM initialization ...
G vector (1/distance) = 0.210051
grid = 108 108 108
stencil order = 5
estimated absolute RMS force accuracy = 0.000178801
estimated relative force accuracy = 1.24171e-05
using double precision FFTs
3d grid and FFT values/proc = 427915 314928
Memory usage per processor = 80.0769 Mbytes
Step Temp E_pair E_mol TotEng Press Volume
0 400.30257 -2381941.6 0 -2366643.8 -449.96842 4242016.4
100 411.69681 -2392428.5 0 -2376695.3 7046.698 4308883.5
200 401.28392 -2394152.5 0 -2378817.2 3243.2685 4334284.4
Loop time of 56.1151 on 4 procs for 200 steps with 295650 atoms
Pair time (%) = 4.55937 (8.12503)
Kspce time (%) = 34.5442 (61.5596)
Neigh time (%) = 0.00624901 (0.0111361)
Comm time (%) = 0.470437 (0.838343)
Outpt time (%) = 0.000446558 (0.000795789)
Other time (%) = 16.5344 (29.4651)
FFT time (% of Kspce) = 22.6526 (65.5758)
FFT Gflps 3d (1d only) = 3.12448 11.5533
Nlocal: 73912.5 ave 74223 max 73638 min
Histogram: 1 1 0 0 0 0 0 1 0 1
Nghost: 105257 ave 105797 max 104698 min
Histogram: 1 0 0 1 0 0 1 0 0 1
Neighs: 0 ave 0 max 0 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 6
Dangerous builds = 0
unfix 1
Please see the log.cite file for references relevant to this simulation
LAMMPS (1 Feb 2014)
# Rhodopsin model
newton off
package gpu force/neigh 0 1 1
variable x index 2
variable y index 2
variable z index 2
units real
neigh_modify delay 5 every 1
atom_style full
bond_style harmonic
angle_style charmm
dihedral_style charmm
improper_style harmonic
pair_style lj/charmm/coul/long/gpu 8.0 10.0
pair_modify mix arithmetic
kspace_style pppm/gpu 1e-4
read_data data.rhodo
orthogonal box = (-27.5 -38.5 -36.2676) to (27.5 38.5 36.2645)
1 by 1 by 1 MPI processor grid
reading atoms ...
32000 atoms
reading velocities ...
32000 velocities
scanning bonds ...
4 = max bonds/atom
scanning angles ...
18 = max angles/atom
scanning dihedrals ...
40 = max dihedrals/atom
scanning impropers ...
4 = max impropers/atom
reading bonds ...
27723 bonds
reading angles ...
40467 angles
reading dihedrals ...
56829 dihedrals
reading impropers ...
1034 impropers
4 = max # of 1-2 neighbors
12 = max # of 1-3 neighbors
24 = max # of 1-4 neighbors
26 = max # of special neighbors
replicate $x $y $z
replicate 2 $y $z
replicate 2 2 $z
replicate 2 2 2
orthogonal box = (-27.5 -38.5 -36.2676) to (82.5 115.5 108.797)
1 by 1 by 1 MPI processor grid
256000 atoms
221784 bonds
323736 angles
454632 dihedrals
8272 impropers
4 = max # of 1-2 neighbors
12 = max # of 1-3 neighbors
24 = max # of 1-4 neighbors
26 = max # of special neighbors
fix 1 all shake 0.0001 5 0 m 1.0 a 232
12936 = # of size 2 clusters
29064 = # of size 3 clusters
5976 = # of size 4 clusters
33864 = # of frozen angles
fix 2 all npt temp 300.0 300.0 100.0 z 0.0 0.0 1000.0 mtk no pchain 0 tchain 1
special_bonds charmm
thermo 100
thermo_style multi
timestep 2.0
run 200
PPPM initialization ...
G vector (1/distance) = 0.245959
grid = 48 64 60
stencil order = 5
estimated absolute RMS force accuracy = 0.0410392
estimated relative force accuracy = 0.000123588
using double precision FFTs
3d grid and FFT values/proc = 237705 184320
Memory usage per processor = 760.048 Mbytes
---------------- Step 0 ----- CPU = 0.0000 (sec) ----------------
TotEng = 157024.0504 KinEng = 172792.6155 Temp = 301.1796
PotEng = -15768.5651 E_bond = 28164.9917 E_angle = 117224.0742
E_dihed = 61174.8491 E_impro = 3752.0273 E_vdwl = 10108.6323
E_coul = 1894295.6635 E_long = -2130488.8032 Press = 9562.1557
Volume = 2457390.7959
---------------- Step 100 ----- CPU = 36.3779 (sec) ----------------
TotEng = -233301.6813 KinEng = 123222.9259 Temp = 214.7790
PotEng = -356524.6072 E_bond = 13098.4672 E_angle = 56766.9111
E_dihed = 45556.8240 E_impro = 1313.9378 E_vdwl = -40863.9278
E_coul = 1705084.7672 E_long = -2137481.5867 Press = -1634.3912
Volume = 2522232.6302
---------------- Step 200 ----- CPU = 70.7784 (sec) ----------------
TotEng = -308342.0030 KinEng = 108937.4160 Temp = 189.8792
PotEng = -417279.4189 E_bond = 9579.0134 E_angle = 47373.6274
E_dihed = 39847.4817 E_impro = 967.6755 E_vdwl = -23635.2960
E_coul = 1646633.4711 E_long = -2138045.3918 Press = -1185.9327
Volume = 2554683.1533
Loop time of 70.7784 on 1 procs for 200 steps with 256000 atoms
Pair time (%) = 10.0374 (14.1815)
Bond time (%) = 27.2471 (38.4963)
Kspce time (%) = 7.19169 (10.1608)
Neigh time (%) = 5.43951 (7.68527)
Comm time (%) = 0.681534 (0.962912)
Outpt time (%) = 0.00139809 (0.0019753)
Other time (%) = 20.1798 (28.5112)
FFT time (% of Kspce) = 5.17983 (72.0253)
FFT Gflps 3d (1d only) = 1.72575 2.95071
Nlocal: 256000 ave 256000 max 256000 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Nghost: 161662 ave 161662 max 161662 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Neighs: 0 ave 0 max 0 min
Histogram: 1 0 0 0 0 0 0 0 0 0
Total # of neighbors = 0
Ave neighs/atom = 0
Ave special neighs/atom = 7.43187
Neighbor list builds = 31
Dangerous builds = 12
Please see the log.cite file for references relevant to this simulation
LAMMPS (1 Feb 2014)
# Rhodopsin model
newton off
package gpu force/neigh 0 1 1
variable x index 2
variable y index 2
variable z index 2
units real
neigh_modify delay 5 every 1
atom_style full
bond_style harmonic
angle_style charmm
dihedral_style charmm
improper_style harmonic
pair_style lj/charmm/coul/long/gpu 8.0 10.0
pair_modify mix arithmetic
kspace_style pppm/gpu 1e-4
read_data data.rhodo
orthogonal box = (-27.5 -38.5 -36.2676) to (27.5 38.5 36.2645)
1 by 2 by 2 MPI processor grid
reading atoms ...
32000 atoms
reading velocities ...
32000 velocities
scanning bonds ...
4 = max bonds/atom
scanning angles ...
18 = max angles/atom
scanning dihedrals ...
40 = max dihedrals/atom
scanning impropers ...
4 = max impropers/atom
reading bonds ...
27723 bonds
reading angles ...
40467 angles
reading dihedrals ...
56829 dihedrals
reading impropers ...
1034 impropers
4 = max # of 1-2 neighbors
12 = max # of 1-3 neighbors
24 = max # of 1-4 neighbors
26 = max # of special neighbors
replicate $x $y $z
replicate 2 $y $z
replicate 2 2 $z
replicate 2 2 2
orthogonal box = (-27.5 -38.5 -36.2676) to (82.5 115.5 108.797)
1 by 2 by 2 MPI processor grid
256000 atoms
221784 bonds
323736 angles
454632 dihedrals
8272 impropers
4 = max # of 1-2 neighbors
12 = max # of 1-3 neighbors
24 = max # of 1-4 neighbors
26 = max # of special neighbors
fix 1 all shake 0.0001 5 0 m 1.0 a 232
12936 = # of size 2 clusters
29064 = # of size 3 clusters
5976 = # of size 4 clusters
33864 = # of frozen angles
fix 2 all npt temp 300.0 300.0 100.0 z 0.0 0.0 1000.0 mtk no pchain 0 tchain 1
special_bonds charmm
thermo 100
thermo_style multi
timestep 2.0
run 200
PPPM initialization ...
G vector (1/distance) = 0.245959
grid = 48 64 60
stencil order = 5
estimated absolute RMS force accuracy = 0.0410392
estimated relative force accuracy = 0.000123588
using double precision FFTs
3d grid and FFT values/proc = 68635 46080
Memory usage per processor = 250.358 Mbytes
---------------- Step 0 ----- CPU = 0.0000 (sec) ----------------
TotEng = 157024.0504 KinEng = 172792.6155 Temp = 301.1796
PotEng = -15768.5651 E_bond = 28164.9917 E_angle = 117224.0742
E_dihed = 61174.8491 E_impro = 3752.0273 E_vdwl = 10108.6323
E_coul = 1894295.6635 E_long = -2130488.8032 Press = 9562.1557
Volume = 2457390.7959
---------------- Step 100 ----- CPU = 12.3409 (sec) ----------------
TotEng = -233301.6797 KinEng = 123222.9259 Temp = 214.7790
PotEng = -356524.6057 E_bond = 13098.4672 E_angle = 56766.9111
E_dihed = 45556.8240 E_impro = 1313.9378 E_vdwl = -40863.9278
E_coul = 1705084.7688 E_long = -2137481.5867 Press = -1634.3910
Volume = 2522232.6302
---------------- Step 200 ----- CPU = 23.6590 (sec) ----------------
TotEng = -308341.9699 KinEng = 108937.4196 Temp = 189.8792
PotEng = -417279.3895 E_bond = 9579.0134 E_angle = 47373.6274
E_dihed = 39847.4807 E_impro = 967.6755 E_vdwl = -23635.2996
E_coul = 1646633.5046 E_long = -2138045.3916 Press = -1185.9299
Volume = 2554683.1519
Loop time of 23.6591 on 4 procs for 200 steps with 256000 atoms
Pair time (%) = 4.81669 (20.3587)
Bond time (%) = 6.52579 (27.5826)
Kspce time (%) = 4.48765 (18.968)
Neigh time (%) = 1.3238 (5.5953)
Comm time (%) = 0.490551 (2.07342)
Outpt time (%) = 0.000454485 (0.00192098)
Other time (%) = 6.01414 (25.42)
FFT time (% of Kspce) = 1.77734 (39.6051)
FFT Gflps 3d (1d only) = 5.02949 11.6654
Nlocal: 64000 ave 64001 max 63999 min
Histogram: 1 0 0 0 0 2 0 0 0 1
Nghost: 70656.5 ave 70660 max 70654 min
Histogram: 1 0 0 2 0 0 0 0 0 1
Neighs: 0 ave 0 max 0 min
Histogram: 4 0 0 0 0 0 0 0 0 0
Total # of neighbors = 0
Ave neighs/atom = 0
Ave special neighs/atom = 7.43187
Neighbor list builds = 31
Dangerous builds = 12
Please see the log.cite file for references relevant to this simulation
LAMMPS (11 Aug 2017)
Lattice spacing in x,y,z = 1.6796 1.6796 1.6796
Created orthogonal box = (0 0 0) to (33.5919 33.5919 33.5919)
1 by 1 by 2 MPI processor grid
Created 32000 atoms
--------------------------------------------------------------------------
- Using acceleration for lj/cut:
- with 1 proc(s) per device.
--------------------------------------------------------------------------
Device 0: Tesla P100-PCIE-12GB, 56 CUs, 12/12 GB, 1.3 GHZ (Mixed Precision)
Device 1: Tesla P100-PCIE-12GB, 56 CUs, 1.3 GHZ (Mixed Precision)
--------------------------------------------------------------------------
Initializing Device and compiling on process 0...Done.
Initializing Devices 0-1 on core 0...Done.
Setting up Verlet run ...
Unit style : lj
Current step : 0
Time step : 0.005
Per MPI rank memory allocation (min/avg/max) = 4.811 | 4.811 | 4.811 Mbytes
Step Temp E_pair E_mol TotEng Press
0 1.44 -6.7733682 0 -4.6134357 -5.0197069
100 0.75745332 -5.7585059 0 -4.6223615 0.20726081
Loop time of 0.0620875 on 2 procs for 100 steps with 32000 atoms
Performance: 695791.827 tau/day, 1610.629 timesteps/s
91.4% CPU use with 2 MPI tasks x no OpenMP threads
MPI task timing breakdown:
Section | min time | avg time | max time |%varavg| %total
---------------------------------------------------------------
Pair | 0.022442 | 0.022491 | 0.022539 | 0.0 | 36.22
Neigh | 9.5367e-07 | 1.9073e-06 | 2.861e-06 | 0.0 | 0.00
Comm | 0.019505 | 0.0196 | 0.019695 | 0.1 | 31.57
Output | 4.7922e-05 | 5.4002e-05 | 6.0081e-05 | 0.0 | 0.09
Modify | 0.015807 | 0.015847 | 0.015887 | 0.0 | 25.52
Other | | 0.004095 | | | 6.59
Nlocal: 16000 ave 16001 max 15999 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Nghost: 13632.5 ave 13635 max 13630 min
Histogram: 1 0 0 0 0 0 0 0 0 1
Neighs: 0 ave 0 max 0 min
Histogram: 2 0 0 0 0 0 0 0 0 0
Total # of neighbors = 0
Ave neighs/atom = 0
Neighbor list builds = 5
Dangerous builds not checked
---------------------------------------------------------------------
Device Time Info (average):
---------------------------------------------------------------------
Data Transfer: 0.0096 s.
Data Cast/Pack: 0.0120 s.
Neighbor copy: 0.0000 s.
Neighbor build: 0.0029 s.
Force calc: 0.0056 s.
Device Overhead: 0.0029 s.
Average split: 1.0000.
Threads / atom: 4.
Max Mem / Proc: 22.78 MB.
CPU Driver_Time: 0.0031 s.
CPU Idle_Time: 0.0047 s.
---------------------------------------------------------------------
Please see the log.cite file for references relevant to this simulation
Total wall time: 0:00:00
#!/bin/bash
#SBATCH --account=hpcadmingpgpu # Use a project ID that has access.
#SBATCH --partition=gpgputest
#SBATCH --gres=gpu:2
#SBATCH --time=0:10:00
#SBATCH --ntasks=2
#SBATCH --cpus-per-task=2
#SBATCH --mem-per-cpu=4G
module load LAMMPS/20170811-intel-2017.u2-GCC-6.2.0-CUDA9.1
mpiexec lmp_intel_gpu -sf gpu -pk gpu 2 -in in.lj
#!/bin/bash
# Name and partition
#SBATCH --job-name=Mathematica-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:15:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load Mathematica/12.0.0
# Read and evaluate the .m file
# Example derived from: https://pages.uoregon.edu/noeckel/Mathematica.html
math -noprompt -run "<<test.m" > output.txt
Print["Hello, starting plots!"]
SetOptions[DensityPlot,DisplayFunction->Identity]
a=Table[DensityPlot[Sin[Sqrt[x^2 + y^2]]^2/(.001 + x^2 + y^2), {x, -13, 13}, {y, -13, 13}\
, Mesh -> False, PlotPoints -> 300], {i, 1, 20}]
Do[Export["a"<>ToString[i]<>".eps",a[[i]]],{i,1,20}]
Exit[]
......@@ -31,7 +31,7 @@ We use LMod to manage existing software installations. Because Spartan is a gene
This is particularly true of applications that are a language in their own right (e.g. Python, R, Matlab) or that need to be matched to those types of applications (e.g. Tensorflow needs to be compiled against a specific Python version, which then needs to be loaded alongside it when it is used).
LMod allows us to assert that certain modules are necessary to make other modules run, and then load them automatically. As such, you can (in almost all standard cases) simply load the application you wish to use and all required components will be loaded as part of that.
Typing `module avail` will show a complete list of currently installed software. You can do a simple search with this command by adding the search term to the end (e.g. `module avail Tensorflow`), or can do basic regex searches with the '-r' flag (e.g. `module -r avail '^Python'`). Note that this is limited to your compiler options. To view all available software and adopt a mix of compilers you will need to source the old configuration system (`source /usr/local/module/spartan_old.sh`).
Typing `module list` will show a complete list of software currently loaded into your user environment. A few small modules are loaded by default when you log in; none of these are strictly necessary to use the cluster.
......
......@@ -58,8 +58,8 @@ Adding the `w` option will print out the selection line to a new file.
sed 's/ELM/LUV/gw selection.txt' gattaca.txt
Quoting & Variables
===================
Generally, strong (single) quotes are recommended for regular expressions. However, with sed you will often want weak (double) quotes to allow, for example, variable substitution.
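The difference can be tried directly on a stream, reusing the ELM/LUV substitution from the earlier examples:

```shell
# Strong (single) quotes: the pattern reaches sed exactly as written
echo "ELM STREET" | sed 's/ELM/LUV/'
# Weak (double) quotes: the shell expands the variable before sed sees it
replacement="LUV"
echo "ELM STREET" | sed "s/ELM/${replacement}/"
```

Both commands print `LUV STREET`; with single quotes, `${replacement}` would have been passed to sed literally and matched nothing.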
......@@ -114,6 +114,10 @@ One can add new material to a file in such a manner with the insert option:
`sed '1,2 i\foo' file` or `sed '1,2i foo' file`
Select duplicate words in a line and remove them:
`sed -e 's/\b\([a-z]\+\)[ ,\n]\1/\1/g' file`
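The duplicate-word removal can be tried on a stream instead of a file; for example, with GNU sed:

```shell
# 'the the' and 'fox fox' each collapse to a single word
printf 'the the quick brown fox fox\n' | sed -e 's/\b\([a-z]\+\)[ ,\n]\1/\1/g'
```

This prints `the quick brown fox`.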
Multiple Commands
=================
......
The re Package and Metacharacters
=================================
Python's re package can be used for regular expressions, and can be tested with similar metacharacters to those used in POSIX.
```
import re
string1 = "The quick brown fox jumps over the lazy dog"
startend = re.search("^The.*dog$", string1)
if startend:
    print("String starts with 'The' and ends with 'dog'")
else:
    print("String does not start with 'The' and end with 'dog'")
```
This example is also provided as `startend.py`.
The following is a list of the most common metacharacters used in `re`.
Metacharacter | Meaning
:---------------|----------------------------------------------------------:
`.` | Any character except a new line.
`^` | Start of a string.
`$` | End of a string.
`*` | Zero or more repetitions of the preceding RegEx.
`?` | Zero or one repetition of the preceding RegEx; `ab?` will match either 'a' or 'ab'.
`+` | One or more repetitions of the preceding RegEx; `ab+` will match 'a' followed by any non-zero number of 'b's; it will not match just 'a'.
`*?`, `+?`, `??` | The `*`, `+`, and `?` qualifiers are all greedy; they match as much text as possible. Adding `?` after the qualifier makes it perform a minimal match, so that as few characters as possible are matched.
`{m}` | Matches exactly m copies of the previous RegEx.
`{m,n}` | Matches from m to n repetitions of the preceding RegEx, attempting to match as many repetitions as possible; `a{3,5}` will match from 3 to 5 'a' characters.
`{m,n}?` | Matches from m to n repetitions of the preceding RegEx, with as few repetitions as possible.
`\` | Either escapes special characters (permitting you to match characters like `*` and `?`) or signals a special sequence.
`[]` | Indicates a set of characters, either individually (e.g., `[ACGT]`) or as a range (e.g., `[A-Z]`). Special characters lose their meaning in sets; `[(+*)]` will literally match `(`, `+`, `*`, or `)`. Characters not in a set can be matched by complementing the set with an initial `^` (e.g., `[^5]`, not 5). To match a `]` inside a set, precede it with a backslash or place it at the beginning of the set.
`\|` | In `A\|B`, match either A or B.
`(...)` | Matches the RegEx inside parentheses as a group. Use `\` to escape and match literal parentheses, or enclose them in a class.
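The difference between greedy and minimal matching can be seen with a short `re` example:

```python
import re

text = "<a><b>"
# Greedy: .* matches as much text as possible
print(re.search(r"<.*>", text).group())   # <a><b>
# Minimal: .*? matches as little as possible
print(re.search(r"<.*?>", text).group())  # <a>
```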
Special Sequences
=================
The following is a list of common special sequences.
\d Matches any decimal digit, equivalent to the class [0-9].
\D Matches any non-digit character, equivalent to the class [^0-9].
\s Matches any whitespace character, equivalent to the class [ \t\n\r\f\v].
\S Matches any non-whitespace character.
\w Matches any alphanumeric character, equivalent to the class [a-zA-Z0-9_].
\W Matches any non-alphanumeric character.
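A few of these sequences in action with the `re` module:

```python
import re

# \d+ pulls out runs of digits
print(re.findall(r"\d+", "room 101, floor 7"))    # ['101', '7']
# \w+ pulls out runs of alphanumerics (the hyphen splits the words)
print(re.findall(r"\w+", "hi-there!"))            # ['hi', 'there']
# \s+ normalises any run of whitespace to a single space
print(re.sub(r"\s+", " ", "too   many\tspaces"))  # too many spaces
```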
Pattern Objects
......@@ -58,7 +48,7 @@ print regexsearch
Backslash Issues in Python
==========================
As is the norm with RegExes, the backslash is used to escape metacharacters, and a double backslash is used for a literal backslash. However, Python uses the same character for the same purpose in string literals.
Thus, in Python, to have a regular expression that matches, say, \documentclass (used in LaTeX), the backslash has to be escaped for re.compile() and then both backslashes have to be escaped for a string literal - resulting in *four* backslashes.
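A short sketch of the four-backslash issue, and the raw-string notation that halves it:

```python
import re

# Matching a literal backslash followed by 'documentclass' (as in LaTeX).
# In an ordinary string literal, four backslashes are needed:
pattern = re.compile("\\\\documentclass")
# A raw string (r"...") leaves backslashes alone, so only two are needed:
raw_pattern = re.compile(r"\\documentclass")

text = r"\documentclass{article}"
print(bool(pattern.search(text)), bool(raw_pattern.search(text)))  # True True
```

For this reason, raw strings are the usual convention for regular expressions in Python.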
......@@ -145,6 +135,9 @@ splitseq = re.split("", contents, 1)
print(splitseq)
```
References
==========
......
=================
TABLE OF CONTENTS
=================
1. Accessing Spartan
2. Passwordless SSH
3. SSH Config Files
1. Accessing Spartan
====================
Access to Spartan is via SSH (secure shell), which allows remote access to computer systems and is more secure than earlier protocols such as rlogin and telnet, which sent passwords over the network in plain text.
SSH can be used with other networking services for file transfers, remote mounts, etc.
To access Spartan, you will need an SSH client. This is available on nearly all distributions of Linux and MacOS X. For MS-Windows you may need to download a client (e.g., PuTTY from http://putty.org).
From a terminal client use the command ...
$ ssh username@spartan.hpc.unimelb.edu.au
... and enter your password when prompted.
2. Passwordless SSH
====================
A passwordless SSH for Spartan will make your life easier. You won't even need to remember your password!
If you have a *nix system (e.g., UNIX, Linux, MacOS X) open up a terminal on your local system (not Spartan) and generate a keypair.
$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Created directory '/home/user/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is:
43:51:43:a1:b5:fc:8b:b7:0a:3a:a9:b1:0f:66:73:a8 user@localhost
Now append the new public key to ~/.ssh/authorized_keys on Spartan (you'll be asked for your password one last time).
$ cat .ssh/id_rsa.pub | ssh username@spartan.hpc.unimelb.edu.au 'cat >> .ssh/authorized_keys'
Depending on your version of SSH you might also have to make the following changes:
Put the public key in .ssh/authorized_keys2
Change the permissions of .ssh to 700
Change the permissions of .ssh/authorized_keys2 to 640
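These permission changes can be applied with a few commands (the `authorized_keys2` filename is only needed for the older SSH versions noted above):

```shell
# Create the directory and key file if they do not already exist,
# then tighten the permissions so the SSH daemon will trust them
mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys2
chmod 700 ~/.ssh
chmod 640 ~/.ssh/authorized_keys2
```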
You can now SSH to Spartan without having to enter your password!
3. SSH Config Files
===================
An SSH config file will also make your life easier. It allows you to create aliases (i.e., shortcuts) for a given hostname.
Create the text file in your ~/.ssh directory with your preferred text editor, for example, nano.
nano .ssh/config
Enter the following (replacing username with your actual username of course!):
Host *
ServerAliveInterval 120
Host spartan
Hostname spartan.hpc.unimelb.edu.au
User username
Now to connect to Spartan, you need only type ssh spartan.
#!/bin/bash
# Partition and name
#SBATCH --job-name=Tcl-test.slurm
#SBATCH -p cloud
# Run on single CPU
#SBATCH --ntasks=1
# set your minimum acceptable walltime=days-hours:minutes:seconds
#SBATCH -t 0:05:00
# Specify your email address to be notified of progress.
# SBATCH --mail-user=youremailaddress@unimelb.edu
# SBATCH --mail-type=ALL
# Load the environment variables
module load Tcl/8.6.9-GCC-8.2.0
# Run the Tcl hello world script and append the output to results.txt
tclsh hello.tcl >> results.txt
puts "Hello, world!"
#!/bin/bash
#SBATCH --job-name="trimm_sample"
#SBATCH --partition=cloud
# A multithreaded application
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=0:15:00
module load Trimmomatic/0.36-Java-1.8.0_152
java -jar $EBROOTTRIMMOMATIC/trimmomatic-0.36.jar PE -threads 4 SRR2589044_1.fastq.gz SRR2589044_2.fastq.gz \
SRR2589044_1.trim.fastq.gz SRR2589044_1un.trim.fastq.gz \
SRR2589044_2.trim.fastq.gz SRR2589044_2un.trim.fastq.gz \
SLIDINGWINDOW:4:20 MINLEN:25 ILLUMINACLIP:NexteraPE-PE.fa:2:40:15
#!/bin/bash
#SBATCH --job-name="trimmloop_sample"
#SBATCH --partition=cloud
# A multithreaded application
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --time=0:15:00
module load Trimmomatic/0.36-Java-1.8.0_152
for infile in ./*_1.fastq.gz
do
base=$(basename ${infile} _1.fastq.gz)
java -jar $EBROOTTRIMMOMATIC/trimmomatic-0.36.jar PE -threads 4 ${infile} ${base}_2.fastq.gz \
${base}_1.trim.fastq.gz ${base}_1un.trim.fastq.gz \
${base}_2.trim.fastq.gz ${base}_2un.trim.fastq.gz \
SLIDINGWINDOW:4:20 MINLEN:25 ILLUMINACLIP:NexteraPE-PE.fa:2:40:15
done
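The loop above derives each sample name by stripping the `_1.fastq.gz` suffix with `basename`; for example:

```shell
# For SRR2589044_1.fastq.gz this yields the sample name SRR2589044,
# which the loop then uses to build all of the paired output filenames
infile=./SRR2589044_1.fastq.gz
base=$(basename ${infile} _1.fastq.gz)
echo ${base}
```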
Architecture Considerations
===========================
Spartan login nodes have connection to the public Internet.
Spartan compute nodes do not have connection to the public Internet.
When trying to connect to the public Internet via a compute node, whether as part of a Slurm script or an interactive job, the task will fail.
e.g.,
[lev@spartan-rc035 ~]$ wget https://upload.wikimedia.org/wikipedia/commons/thumb/5/5b/Linux_kernel_map.png/800px-Linux_kernel_map.png
--2019-11-25 10:56:06-- https://upload.wikimedia.org/wikipedia/commons/thumb/5/5b/Linux_kernel_map.png/800px-Linux_kernel_map.png
Resolving upload.wikimedia.org (upload.wikimedia.org)... 103.102.166.240, 2001:df2:e500:ed1a::2:b
Connecting to upload.wikimedia.org (upload.wikimedia.org)|103.102.166.240|:443... failed: No route to host.
Connecting to upload.wikimedia.org (upload.wikimedia.org)|2001:df2:e500:ed1a::2:b|:443... failed: Network is unreachable.
In contrast, if the web_proxy module is loaded, proxy environment variables are set which allow connections to the public Internet.
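Loading web_proxy amounts to exporting the standard proxy environment variables; setting them manually in a job script has the same effect:

```shell
# These are the values the web_proxy module sets on Spartan
export http_proxy=http://wwwproxy.unimelb.edu.au:8000
export https_proxy=http://wwwproxy.unimelb.edu.au:8000
export ftp_proxy=http://wwwproxy.unimelb.edu.au:8000
env | grep 'proxy=http'
```

Tools such as wget and curl read these variables and route their traffic through the proxy.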
Try:
sbatch fail.slurm
sbatch succeed.slurm
Run an interactive job and compare the environment.
[lev@spartan-login1 ~]$ sinteractive
srun: job 12960294 queued and waiting for resources
srun: job 12960294 has been allocated resources
[lev@spartan-rc035 ~]$ module load web_proxy
[lev@spartan-rc035 ~]$ env | grep 'proxy=http'
[lev@spartan-rc035 ~]$ env | grep 'proxy=http'
http_proxy=http://wwwproxy.unimelb.edu.au:8000
ftp_proxy=http://wwwproxy.unimelb.edu.au:8000
https_proxy=http://wwwproxy.unimelb.edu.au:8000
#!/bin/bash
#SBATCH --partition=cloud
#SBATCH --time=0:2:00
wget https://upload.wikimedia.org/wikipedia/commons/thumb/5/5b/Linux_kernel_map.png/800px-Linux_kernel_map.png
#!/bin/bash
#SBATCH --partition=cloud
#SBATCH --time=0:2:00
module load web_proxy
wget https://upload.wikimedia.org/wikipedia/commons/thumb/5/5b/Linux_kernel_map.png/800px-Linux_kernel_map.png