diff --git a/dunnart/README.md b/dunnart/README.md
index 76634b07e78183ef3fccee7068193c46c4472a05..509cdbd67a30dffc3d7bdf8a3b9e9dc982391870 100644
--- a/dunnart/README.md
+++ b/dunnart/README.md
@@ -551,38 +551,21 @@ Overlap replicate peaks for H3K27ac.
 
 # 10. Consensus peaks and grouping into putative enhancers and promoters
 
-Separate overlapped replicate peak files:
-```
-less H3K27ac_overlap.narrowPeak| grep "A-3" > H3K27ac_overlap_repA.narrowPeak
-
-less H3K27ac_overlap.narrowPeak| grep "B-3" > H3K27ac_overlap_repB.narrowPeak
-
-less H3K4me3_overlap.narrowPeak| grep "A-2" > H3K4me3_overlap_repA.narrowPeak
-
-less H3K4me3_overlap.narrowPeak| grep "B-2" > H3K4me3_overlap_repB.narrowPeak
-```
-
-Keep only the intersected region:
-```
-bedtools intersect -a H3K27ac_overlap_repA.narrowPeak -b H3K27ac_overlap_repB.narrowPeak > H3K27ac_consensus.narrowPeak
-
-bedtools intersect -a H3K4me3_overlap_repA.narrowPeak -b H3K4me3_overlap_repB.narrowPeak > H3K4me3_consensus.narrowPeak
-```
 
 Find unique H3K4me3 sites (i.e. peaks that don't overlap with H3K27ac):
 
 ```
-bedtools intersect -v -a macs2/H3K4me3_consensus.narrowPeak -b macs2/H3K27ac_consensus.narrowPeak > H3K4me3_only.narrowPeak
+bedtools intersect -v -a H3K4me3_overlap.narrowPeak -b H3K27ac_overlap.narrowPeak > H3K4me3_only.narrowPeak
 ```
 
 Find unique H3K27ac sites (i.e. peaks that don't overlap with H3K4me3):
 ```
-bedtools intersect -v macs2/H3K27ac_consensus.narrowPeak -b macs2/H3K4me3_consensus.narrowPeak > H3K27ac_only.narrowPeak
+bedtools intersect -v -a H3K27ac_overlap.narrowPeak -b H3K4me3_overlap.narrowPeak > H3K27ac_only.narrowPeak
 ```
 
 Find peaks common between H3K27ac & H3K4me3 with a reciprocal overlap of at least 50%.
 ```
-bedtools intersect -f 0.5 -r -a macs2/H3K4me3_consensus.narrowPeak -b macs2/H3K27ac_consensus.narrowPeak > H3K4me3_and_H3K27ac.narrowPeak
+bedtools intersect -f 0.5 -r -a H3K4me3_overlap.narrowPeak -b H3K27ac_overlap.narrowPeak > H3K4me3_and_H3K27ac.narrowPeak
 ```
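For reference, `-f 0.5 -r` keeps a pair of peaks only when the overlap covers at least 50% of *each* peak. A minimal sketch of that check in awk, using hypothetical coordinates (not real peaks from this dataset):

```shell
# Reciprocal-overlap test equivalent to `bedtools intersect -f 0.5 -r`.
# Hypothetical peaks on the same chromosome: A = 100-200, B = 150-300.
awk 'BEGIN {
    a_start = 100; a_end = 200;   # peak A (length 100)
    b_start = 150; b_end = 300;   # peak B (length 150)
    # overlap = min(ends) - max(starts), clamped at zero
    ov = (a_end < b_end ? a_end : b_end) - (a_start > b_start ? a_start : b_start);
    if (ov < 0) ov = 0;
    # require the overlap to cover >= 50% of BOTH peaks
    if (ov >= 0.5 * (a_end - a_start) && ov >= 0.5 * (b_end - b_start))
        print "reciprocal overlap";
    else
        print "no reciprocal overlap";
}'
```

Here the 50 bp overlap is 50% of A but only a third of B, so the pair would pass `-f 0.5` alone but fails once `-r` also requires 50% of B.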
 
 # 11. Find overlap with TWARs
@@ -598,38 +581,38 @@ blastn -task blastn -num_threads 4 -db dunnart_pseudochr_vs_mSarHar1.11_v1.fasta
 ## Intersect with H3K4me3 and H3K27ac
 
 ```
-intersectBed -a H3K4me3_consensus.narrowPeak -b dunnart_TWARs.bed > twar_H3K4me3_overlap.bed
+intersectBed -wb -a H3K4me3_overlap.narrowPeak -b dunnart_TWARs.bed > twar_H3K4me3_overlap.bed
 ```
 
 ```
-intersectBed -a H3K27ac_consensus.narrowPeak -b dunnart_TWARs.bed > twar_H3K27ac_overlap.bed
+intersectBed -a H3K27ac_overlap.narrowPeak -b dunnart_TWARs.bed > twar_H3K27ac_overlap.bed
 ```
 
+```
+intersectBed -a H3K4me3_only.narrowPeak -b dunnart_TWARs.bed > twar_H3K4me3_only_overlap.bed
+```
 
-# 12. HOMER annotate peaks
+```
+intersectBed -a H3K27ac_only.narrowPeak -b dunnart_TWARs.bed > twar_H3K27ac_only_overlap.bed
+```
+
+```
+intersectBed -a H3K4me3_and_H3K27ac.narrowPeak -b dunnart_TWARs.bed > twar_H3K4me3_and_H3K27ac_overlap.bed
+```
 
-### How Basic Annotation Works
-The process of annotating peaks/regions is divided into two primary parts.  The first determines the distance to the nearest TSS and assigns the peak to that gene.  The second determines the genomic annotation of the region occupied by the center of the peak/region.
 
-### Distance to the nearest TSS
+# 12. ChIPseeker annotate peaks
 
-By default, `annotatePeaks.pl` loads a file in the "/path-to-homer/data/genomes/<genome>/<genome>.tss" that contains the positions of RefSeq transcription start sites.  It uses these positions to determine the closest TSS, reporting the distance (negative values mean upstream of the TSS, positive values mean downstream), and various annotation information linked to locus including alternative identifiers (unigene, entrez gene, ensembl, gene symbol etc.).  This information is also used to link gene-specific information (see below) to a peak/region, such as gene expression.
 
-### Genomic Annotation
 
-To annotate the location of a given peak in terms of important genomic features, `annotatePeaks.pl` calls a separate program (assignGenomeAnnotation) to efficiently assign peaks to one of millions of possible annotations genome wide.  Two types of output are provided.  The first is "Basic Annotation" that includes whether a peak is in the TSS (transcription start site), TTS (transcription termination site), Exon (Coding), 5' UTR Exon, 3' UTR Exon, Intronic, or Intergenic, which are common annotations that many researchers are interested in.  A second round of "Detailed Annotation" also includes more detailed annotation, also considering repeat elements and CpG islands.  Since some annotation overlap, a priority is assign based on the following (in case of ties it's random [i.e. if there are two overlapping repeat element annotations]):
 
-* TSS (by default defined from -1kb to +100bp)
-* TTS (by default defined from -100 bp to +1kb)
-* CDS Exons
-* 5' UTR Exons
-* 3' UTR Exons
-* CpG Islands
-* Repeats
-* Introns
-* Intergenic
+# 13. Calling peaks after normalising to 10M reads
 
-Although HOMER doesn't allow you to explicitly change the definition of the region that is the TSS (-1kb to +100bp), you can "do it yourself" by sorting the annotation output in EXCEL by the "Distance to nearest TSS" column, and selecting those within the range you are interested in.
+This allows the dunnart peaks to be compared with mouse peaks at a matched read depth.
+The subsampling script is adapted from https://davemcg.github.io/post/easy-bam-downsampling/.
 
+```
+bash subsample.sh
+```
 
-# 13. ChIPseeker annotate peaks
+Run the script in the directory containing the BAM files you want to subsample.
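The linked post works by computing a per-library keep-fraction and passing it to `samtools view -s`. A minimal sketch of that arithmetic, assuming `subsample.sh` follows the same approach (the read count shown is the A-2 H3K4me3 dupmark count from this pipeline):

```shell
# Sketch of the downsampling arithmetic (assumed from the linked post;
# the actual subsample.sh may differ): keep-fraction = target / total reads.
TARGET=10000000   # normalise every library to ~10M reads
TOTAL=62678360    # e.g. from `samtools view -c A-2_H3K4me3_PPq30.sorted.dupmark.bam`
FRAC=$(awk -v t="$TARGET" -v n="$TOTAL" 'BEGIN { printf "%.4f", t / n }')
echo "$FRAC"
# samtools view -bs "42${FRAC#0}" in.bam > out_10M.bam   # 42 = random seed
```

`samtools view -s` takes `SEED.FRACTION` as a single number, hence the `42${FRAC#0}` (= `42.1595` here) in the commented command.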
diff --git a/dunnart/Snakefile b/dunnart/Snakefile
index 1222513b6835f41ae28832188a2e1f280163e12d..ba382bdd9377b7dd192257eb8497bc82f8cc5516 100644
--- a/dunnart/Snakefile
+++ b/dunnart/Snakefile
@@ -46,62 +46,64 @@ all_samples = IPS + unique_inputs
 
 rule all:
     input:
-        expand("results/qc/{sample}_R1_fastqc.html", sample=all_samples),
-        expand("results/qc/{sample}_R2_fastqc.html", sample=all_samples),
-        expand("results/qc/{sample}_R1_fastqc.zip", sample=all_samples),
-        expand("results/qc/{sample}_R2_fastqc.zip", sample=all_samples),
-        expand("results/bowtie2/{sample}.sorted.bam", sample=all_samples),
-        expand("results/bowtie2/{sample}.sorted.bai", sample=all_samples),
-        expand("results/bowtie2/{sample}_PPq30.sorted.bam", sample=all_samples),
-        expand("results/bowtie2/{sample}_PPq30.sorted.dupmark.bam", sample=all_samples),
-        expand("results/bowtie2/{sample}_PPq30.sorted.dedup.bam", sample=all_samples),
-        expand("results/bowtie2/{sample}_PPq30.sorted.dedup.bai", sample=all_samples),
-        "results/bowtie2/H3K4me3_pooled_PPq30.sorted.dedup.bam",
-        "results/bowtie2/H3K27ac_pooled_PPq30.sorted.dedup.bam",
-        "logs/H3K4me3.mergeBAM",
-        "logs/H3K27ac.mergeBAM",
-        expand("results/qc/{sample}.unfiltered.flagstat.qc", sample=all_samples),
-        expand("results/qc/{sample}.dedup.flagstat.qc", sample=all_samples),
-        expand("results/qc/{sample}.dupmark.flagstat.qc", sample=all_samples),
-        expand("results/qc/{sample}.PPq30.flagstat.qc", sample=all_samples),
-        expand("results/qc/{sample}.ccurve.txt", sample=all_samples),
-        expand("results/qc/{sample}.extrap.txt", sample=all_samples),
-        expand("logs/{sample}.ccurve.preseq", sample=all_samples),
-        expand("logs/{sample}.extrap.preseq", sample=all_samples),
-        expand("results/qc/{sample}_est_lib_complex_metrics.txt", sample=all_samples),
-        expand("logs/{sample}.picardLibComplexity", sample=all_samples),
-        expand("results/qc/{sample}.pbc.qc", sample=all_samples),
-        expand("results/bowtie2/{sample}_PPq30.sorted.tmp.bam",sample=all_samples),
-        expand("results/bowtie2/{sample}_PPq30.sorted.dupmark_downSampled.bam",sample=all_samples),
-        "results/qc/multibamsum.npz",
-        "results/qc/multibamsum.tab",
-        "results/qc/pearsoncor_multibamsum.png",
-        "results/qc/pearsoncor_multibamsum_matrix.txt",
-        expand("results/qc/{sample}.SeqDepthNorm.bw", sample=all_samples),
-        "results/qc/multiBAM_fingerprint.png",
-        "results/qc/multiBAM_fingerprint_metrics.txt",
-        "results/qc/multiBAM_fingerprint_rawcounts.txt",
-        "results/qc/bamPEFragmentSize_hist.png",
-        "results/qc/bamPEFragmentSize_rawcounts.tab",
-        expand("results/bowtie2/{sample}_R1_trimmed_q30.bam", sample=all_samples),
-        expand("logs/{sample}_filt_15Mreads.SE.spp.log", sample=all_samples),
-        expand("results/qc/{sample}_filt_15Mreads.SE.cc.qc", sample=all_samples),
-        expand("results/qc/{sample}_filt_15Mreads.SE.cc.plot.pdf", sample=all_samples),
-        expand("results/macs2/{case}_vs_{control}_macs2_peaks.narrowPeak", zip, case=IPS, control=INPUTS),
-        expand("results/macs2/{case}_vs_{control}_macs2_peaks.xls", zip, case=IPS, control=INPUTS),
-        expand("results/macs2/{case}_vs_{control}_macs2_summits.bed", zip, case=IPS, control=INPUTS),
-        expand("results/qc/{case}-vs-{control}-narrowpeak-count_mqc.json", zip, case=IPS, control=INPUTS),
-        expand("results/bowtie2/{case}.bedpe", case=IPS),
-        expand("logs/{case}.bamToBed", case=IPS),
-        expand("results/qc/{case}_vs_{control}.frip.txt", case=IPS, control=INPUTS),
-        "results/macs2/H3K4me3_pooled_macs2_peaks.narrowPeak",
-        "results/macs2/H3K27ac_pooled_macs2_peaks.narrowPeak",
-        "results/macs2/H3K4me3_overlap.narrowPeak",
-        "results/macs2/H3K27ac_overlap.narrowPeak",
-        "results/qc/H3K4me3_overlap.frip",
-        "results/qc/H3K27ac_overlap.frip"
-        # directory("results/multiqc/multiqc_report_data/"),
-        # "results/multiqc/multiqc_report.html"
+        # expand("results_10M/qc/{sample}_R1_fastqc.html", sample=all_samples),
+        # expand("results_10M/qc/{sample}_R2_fastqc.html", sample=all_samples),
+        # expand("results_10M/qc/{sample}_R1_fastqc.zip", sample=all_samples),
+        # expand("results_10M/qc/{sample}_R2_fastqc.zip", sample=all_samples),
+        # expand("results_10M/bowtie2/{sample}.sorted.bam", sample=all_samples),
+        # expand("results_10M/bowtie2/{sample}.sorted.bai", sample=all_samples),
+        # expand("results_10M/bowtie2/{sample}_PPq30.sorted.bam", sample=all_samples),
+        # expand("results_10M/bowtie2/{sample}_PPq30.sorted.dupmark.bam", sample=all_samples),
+        # expand("results_10M/bowtie2/{sample}_PPq30.sorted.dedup.bam", sample=all_samples),
+        # expand("results_10M/bowtie2/{sample}_PPq30.sorted.dedup.bai", sample=all_samples),
+        "results_10M/bowtie2/H3K4me3_pooled_PPq30.sorted.dedup.bam",
+        "results_10M/bowtie2/H3K27ac_pooled_PPq30.sorted.dedup.bam",
+         "results_10M/bowtie2/input_pooled_PPq30.sorted.dedup.bam",
+        "results_10M/logs/H3K4me3.mergeBAM",
+        "results_10M/logs/H3K27ac.mergeBAM",
+        # expand("results_10M/qc/{sample}.unfiltered.flagstat.qc", sample=all_samples),
+        #expand("results_10M/qc/{sample}.dedup.flagstat.qc", sample=all_samples),
+        # expand("results_10M/qc/{sample}.dupmark.flagstat.qc", sample=all_samples),
+        # expand("results_10M/qc/{sample}.PPq30.flagstat.qc", sample=all_samples),
+        #expand("results_10M/qc/{sample}.ccurve.txt", sample=all_samples),
+        # expand("results_10M/qc/{sample}.extrap.txt", sample=all_samples),
+        # expand("results_10M/logs/{sample}.ccurve.preseq", sample=all_samples),
+        # expand("results_10M/logs/{sample}.extrap.preseq", sample=all_samples),
+        # expand("results_10M/qc/{sample}_est_lib_complex_metrics.txt", sample=all_samples),
+        # expand("results_10M/logs/{sample}.picardLibComplexity", sample=all_samples),
+        # expand("results_10M/qc/{sample}.pbc.qc", sample=all_samples),
+        # expand("results_10M/bowtie2/{sample}_PPq30.sorted.tmp.bam",sample=all_samples),
+        # expand("results_10M/bowtie2/{sample}_PPq30.sorted.dupmark_downSampled.bam",sample=all_samples),
+        "results_10M/qc/multibamsum.npz",
+        "results_10M/qc/multibamsum.tab",
+        "results_10M/qc/pearsoncor_multibamsum.png",
+        "results_10M/qc/pearsoncor_multibamsum_matrix.txt",
+        expand("results_10M/qc/{sample}.SeqDepthNorm.bw", sample=all_samples),
+        "results_10M/qc/multiBAM_fingerprint.png",
+        "results_10M/qc/multiBAM_fingerprint_metrics.txt",
+        "results_10M/qc/multiBAM_fingerprint_rawcounts.txt",
+        "results_10M/qc/bamPEFragmentSize_hist.png",
+        "results_10M/qc/bamPEFragmentSize_rawcounts.tab",
+        # expand("results_10M/bowtie2/{sample}_R1_trimmed_q30.bam", sample=all_samples),
+        # expand("results_10M/logs/{sample}_filt_15Mreads.SE.spp.log", sample=all_samples),
+        # expand("results_10M/qc/{sample}_filt_15Mreads.SE.cc.qc", sample=all_samples),
+        # expand("results_10M/qc/{sample}_filt_15Mreads.SE.cc.plot.pdf", sample=all_samples),
+        expand("results_10M/macs2/{case}_vs_{control}_macs2_peaks.narrowPeak", zip, case=IPS, control=INPUTS),
+        expand("results_10M/macs2/{case}_vs_{control}_macs2_peaks.xls", zip, case=IPS, control=INPUTS),
+        expand("results_10M/macs2/{case}_vs_{control}_macs2_summits.bed", zip, case=IPS, control=INPUTS),
+        #expand("results_10M/qc/{case}-vs-{control}-narrowpeak-count_mqc.json", zip, case=IPS, control=INPUTS),
+        # expand("results_10M/bowtie2/{case}.bedpe", case=IPS),
+        # expand("results_10M/ggplot(promoter_annot, aes(x=distanceToTSS, y=width)) +
+        #expand("results_10M/logs/{case}.bamToBed", case=IPS),
+        # expand("results_10M/qc/{case}_vs_{control}.frip.txt", case=IPS, control=INPUTS),
+        # "results_10M/macs2/H3K4me3_pooled_macs2_peaks.narrowPeak",
+        # "results_10M/macs2/H3K27ac_pooled_macs2_peaks.narrowPeak",
+        # "results_10M/macs2/H3K4me3_overlap.narrowPeak",
+        # "results_10M/macs2/H3K27ac_overlap.narrowPeak",
+        # "results_10M/qc/H3K4me3_overlap.frip",
+        # "results_10M/qc/H3K27ac_overlap.frip"
+        # directory("results_10M/multiqc/multiqc_report_data/"),
+        # "results_10M/multiqc/multiqc_report.html"
 # ===============================================================================================
 #  1. FASTQC
 # ===============================================================================================
@@ -110,14 +112,14 @@ rule fastqc:
     input:
         ["rawdata/{sample}_R1.fastq.gz", "rawdata/{sample}_R2.fastq.gz"]
     output:
-        "results/qc/{sample}_R1_fastqc.html",
-        "results/qc/{sample}_R2_fastqc.html",
-        "results/qc/{sample}_R1_fastqc.zip",
-        "results/qc/{sample}_R2_fastqc.zip"
+        "results_10M/qc/{sample}_R1_fastqc.html",
+        "results_10M/qc/{sample}_R2_fastqc.html",
+        "results_10M/qc/{sample}_R1_fastqc.zip",
+        "results_10M/qc/{sample}_R2_fastqc.zip"
     log:
-        "logs/{sample}.fastqc"
+        "results_10M/logs/{sample}.fastqc"
     shell:
-        "fastqc {input} -t 6 --extract --outdir=results/qc/ 2> {log}"
+        "fastqc {input} -t 6 --extract --outdir=results_10M/qc/ 2> {log}"
 
 # ===============================================================================================
 #  2. ALIGNMENT
@@ -129,15 +131,16 @@ rule align:
         R1="rawdata/{sample}_R1.fastq.gz",
         R2="rawdata/{sample}_R2.fastq.gz"
     output:
-        "results/bowtie2/{sample}.sorted.bam"
+        "results_10M/bowtie2/{sample}.sorted.bam"
     params:
         index="genomes/Scras_dunnart_assem1.0_pb-ont-illsr_flyeassem_red-rd-scfitr2_pil2xwgs2_60chr"
     log:
-        "logs/{sample}.align"
+        "results_10M/logs/{sample}.align"
     shell:
         "bowtie2 --threads 8 -q -X 2000 --very-sensitive -x {params.index} -1 {input.R1} -2 {input.R2} \
         |  samtools view -u -h  - |  samtools sort -o {output}  - 2> {log}"
 
+
 # ===============================================================================================
 #  3. FILTERING
 #   > remove unmapped, mate unmapped
@@ -155,59 +158,59 @@ rule align:
 
 rule filter:
     input:
-        "results/bowtie2/{sample}.sorted.bam"
+        "results_10M/bowtie2/{sample}.sorted.bam"
     output:
-        "results/bowtie2/{sample}_PPq30.sorted.bam"
+        "results_10M/bowtie2/{sample}_PPq30.sorted.bam"
     log:
-        "logs/{sample}.dedup"
+        "results_10M/logs/{sample}.dedup"
     shell:
         "samtools view -b -F 1804 -q 30 -f 2 {input} | samtools sort -o {output} - 2> {log}"
 
 rule markDups:
     input:
-        "results/bowtie2/{sample}_PPq30.sorted.bam"
+        "results_10M/bowtie2/{sample}_PPq30.sorted.bam"
     output:
-        bam="results/bowtie2/{sample}_PPq30.sorted.dupmark.bam",
-        dupQC="results/bowtie2/{sample}.dup.qc"
+        bam="results_10M/bowtie2/{sample}_PPq30.sorted.dupmark.bam",
+        dupQC="results_10M/bowtie2/{sample}.dup.qc"
     log:
-        "logs/{sample}.dupmark"
+        "results_10M/logs/{sample}.dupmark"
     shell:
         "picard MarkDuplicates I={input} O={output.bam} \
         METRICS_FILE={output.dupQC} REMOVE_DUPLICATES=FALSE ASSUME_SORTED=true 2> {log}"
 
 rule dedup:
     input:
-        "results/bowtie2/{sample}_PPq30.sorted.dupmark.bam"
+        "results_10M/bowtie2/{sample}_PPq30.sorted.dupmark.bam"
     output:
-        "results/bowtie2/{sample}_PPq30.sorted.dedup.bam"
+        "results_10M/bowtie2/{sample}_PPq30.sorted.dedup.bam"
     log:
-        "logs/{sample}.dedup"
+        "results_10M/logs/{sample}.dedup"
     shell:
         "samtools view -F 1804 -f 2 -b {input} | samtools sort -o {output} - 2> {log}"
 
 rule indexBam:
     input:
-        "results/bowtie2/{sample}_PPq30.sorted.dedup.bam"
+        "results_10M/bowtie2/{sample}_PPq30.sorted.dedup.bam"
     output:
-        "results/bowtie2/{sample}_PPq30.sorted.dedup.bai"
+        "results_10M/bowtie2/{sample}_PPq30.sorted.dedup.bai"
     log:
-        "logs/{sample}.indexBam"
+        "results_10M/logs/{sample}.indexBam"
     shell:
         "samtools index -c {input} {output} 2> {log}"
 
 rule mergeBAMreplicates:
     input:
-        H3K4me3 = ["results/bowtie2/A-2_H3K4me3_PPq30.sorted.dedup.bam",  "results/bowtie2/B-2_H3K4me3_PPq30.sorted.dedup.bam"],
-        H3K27ac = ["results/bowtie2/A-3_H3K27ac_PPq30.sorted.dedup.bam", "results/bowtie2/B-3_H3K27ac_PPq30.sorted.dedup.bam"],
-        control = ["results/bowtie2/A-1_input_PPq30.sorted.dedup.bam", "results/bowtie2/B-1_input_PPq30.sorted.dedup.bam"]
+        H3K4me3 = ["results_10M/bowtie2/A-2_H3K4me3_PPq30.sorted.dedup.bam",  "results_10M/bowtie2/B-2_H3K4me3_PPq30.sorted.dedup.bam"],
+        H3K27ac = ["results_10M/bowtie2/A-3_H3K27ac_PPq30.sorted.dedup.bam", "results_10M/bowtie2/B-3_H3K27ac_PPq30.sorted.dedup.bam"],
+        control = ["results_10M/bowtie2/A-1_input_PPq30.sorted.dedup.bam", "results_10M/bowtie2/B-1_input_PPq30.sorted.dedup.bam"]
     output:
-        H3K4me3 = "results/bowtie2/H3K4me3_pooled_PPq30.sorted.dedup.bam",
-        H3K27ac = "results/bowtie2/H3K27ac_pooled_PPq30.sorted.dedup.bam",
-        control = "results/bowtie2/input_pooled_PPq30.sorted.dedup.bam"
+        H3K4me3 = "results_10M/bowtie2/H3K4me3_pooled_PPq30.sorted.dedup.bam",
+        H3K27ac = "results_10M/bowtie2/H3K27ac_pooled_PPq30.sorted.dedup.bam",
+        control = "results_10M/bowtie2/input_pooled_PPq30.sorted.dedup.bam"
     log:
-        H3K4me3 = "logs/H3K4me3.mergeBAM",
-        H3K27ac = "logs/H3K27ac.mergeBAM",
-        control = "logs/input.mergeBAM"
+        H3K4me3 = "results_10M/logs/H3K4me3.mergeBAM",
+        H3K27ac = "results_10M/logs/H3K27ac.mergeBAM",
+        control = "results_10M/logs/input.mergeBAM"
     run:
         shell("samtools merge {output.H3K4me3} {input.H3K4me3} 2> {log.H3K4me3}")
         shell("samtools merge {output.H3K27ac} {input.H3K27ac} 2> {log.H3K27ac}")
@@ -223,79 +226,73 @@ rule mergeBAMreplicates:
 
 rule mappingStats:
     input:
-        a="results/bowtie2/{sample}_PPq30.sorted.dedup.bam",
-        b="results/bowtie2/{sample}_PPq30.sorted.dupmark.bam",
-        c="results/bowtie2/{sample}_PPq30.sorted.bam",
-        d="results/bowtie2/{sample}.sorted.bam"
+        a="results_10M/bowtie2/{sample}_PPq30.sorted.dedup.bam",
+        b="results_10M/bowtie2/{sample}_PPq30.sorted.dupmark.bam",
+        c="results_10M/bowtie2/{sample}_PPq30.sorted.bam",
+        d="results_10M/bowtie2/{sample}.sorted.bam"
     output:
-        a="results/qc/{sample}.dedup.flagstat.qc",
-        b="results/qc/{sample}.dupmark.flagstat.qc",
-        c="results/qc/{sample}.PPq30.flagstat.qc",
-        d="results/qc/{sample}.unfiltered.flagstat.qc",
+        a="results_10M/qc/{sample}.dedup.flagstat.qc",
+        b="results_10M/qc/{sample}.dupmark.flagstat.qc",
+        c="results_10M/qc/{sample}.PPq30.flagstat.qc",
+        d="results_10M/qc/{sample}.unfiltered.flagstat.qc",
     run:
         shell("samtools flagstat {input.a} > {output.a}")
         shell("samtools flagstat {input.b} > {output.b}")
         shell("samtools flagstat {input.c} > {output.c}")
         shell("samtools flagstat {input.d} > {output.d}")
 
+
 rule downsample_bam:
     input:
-        "results/bowtie2/{sample}_PPq30.sorted.dupmark.bam"
+        "results_10M/bowtie2/{sample}_PPq30.sorted.dupmark.bam"
     output:
-        "results/bowtie2/{sample}_PPq30.sorted.dupmark_downSampled.bam"
+        "results_10M/bowtie2/{sample}_PPq30.sorted.dupmark_downSampled.bam"
     log:
-        "logs/{sample}.downsample"
+        "results_10M/logs/{sample}.downsample"
     shell:
         "picard DownsampleSam I={input} O={output} P=0.35 \
          2> {log}"
 
-# A-1_dupmark = 87165286
-# B-1_dupmark = 72303316
-# A-2_dupmark = 62678360
-# A-3_dupmark = 86503078
-# B-2_dupmark = 67048994
-# B-3_dupmark = 66615704
-
 rule preseq:
     input:
-        "results/bowtie2/{sample}_PPq30.sorted.dupmark.bam"
+        "results_10M/bowtie2/{sample}_PPq30.sorted.dupmark.bam"
     output:
-        ccurve = "results/qc/{sample}.ccurve.txt",
-        extrap = "results/qc/{sample}.extrap.txt"
+        ccurve = "results_10M/qc/{sample}.ccurve.txt",
+        extrap = "results_10M/qc/{sample}.extrap.txt"
     log:
-        ccurve = "logs/{sample}.ccurve.preseq",
-        extrap = "logs/{sample}.extrap.preseq"
+        ccurve = "results_10M/logs/{sample}.ccurve.preseq",
+        extrap = "results_10M/logs/{sample}.extrap.preseq"
     run:
         shell("preseq lc_extrap -v -output {output.extrap} -pe -bam {input} 2> {log.extrap}")
         shell("preseq c_curve -v -output {output.ccurve} -pe -bam {input} 2> {log.ccurve}")
 
 rule get_picard_complexity_metrics:
     input:
-        "results/bowtie2/{sample}_PPq30.sorted.dupmark.bam"
+        "results_10M/bowtie2/{sample}_PPq30.sorted.dupmark.bam"
     output:
-        "results/qc/{sample}_est_lib_complex_metrics.txt"
+        "results_10M/qc/{sample}_est_lib_complex_metrics.txt"
     log:
-        "logs/{sample}.downSampled.picardLibComplexity"
+        "results_10M/logs/{sample}.downSampled.picardLibComplexity"
     shell:
         "picard -Xmx6G EstimateLibraryComplexity INPUT={input} OUTPUT={output} USE_JDK_DEFLATER=TRUE USE_JDK_INFLATER=TRUE VERBOSITY=ERROR"
 
 rule sort_name:
     input:
-        "results/bowtie2/{sample}_PPq30.sorted.dupmark_downSampled.bam"
+        "results_10M/bowtie2/{sample}_PPq30.sorted.dupmark_downSampled.bam"
     output:
-        tmp = "results/bowtie2/{sample}_PPq30.sorted.dupmark_downSampled.tmp.bam"
+        tmp = "results_10M/bowtie2/{sample}_PPq30.sorted.dupmark_downSampled.tmp.bam"
     log:
-        "logs/{sample}.pbc.sort"
+        "results_10M/logs/{sample}.pbc.sort"
     run:
         shell("samtools sort -n {input} -o {output.tmp} 2> {log}")
 
 rule estimate_lib_complexity:
     input:
-        "results/bowtie2/{sample}_PPq30.sorted.dupmark_downSampled.tmp.bam"
+        "results_10M/bowtie2/{sample}_PPq30.sorted.dupmark_downSampled.tmp.bam"
     output:
-        qc = "results/qc/{sample}.pbc.qc",
+        qc = "results_10M/qc/{sample}.pbc.qc",
     log:
-        "logs/{sample}.pbc"
+        "results_10M/logs/{sample}.pbc"
     shell:
         """
         bedtools bamtobed -bedpe -i {input} \
@@ -325,14 +322,14 @@ rule estimate_lib_complexity:
 
 rule deeptools_summary:
     input:
-        bam = expand(["results/bowtie2/{sample}_PPq30.sorted.dedup.bam"], sample=all_samples),
-        bai = expand(["results/bowtie2/{sample}_PPq30.sorted.dedup.bai"], sample=all_samples)
+        bam = expand(["results_10M/bowtie2/{sample}_PPq30.sorted.dedup.bam"], sample=all_samples),
+        bai = expand(["results_10M/bowtie2/{sample}_PPq30.sorted.dedup.bai"], sample=all_samples)
     output:
-        sum="results/qc/multibamsum.npz",
-        counts="results/qc/multibamsum.tab"
+        sum="results_10M/qc/multibamsum.npz",
+        counts="results_10M/qc/multibamsum.tab"
     threads: 32
     log:
-        "logs/multisummary.deepTools"
+        "results_10M/logs/multisummary.deepTools"
     shell:
         " multiBamSummary bins \
         -p {threads} \
@@ -342,12 +339,12 @@ rule deeptools_summary:
         --outRawCounts {output.counts} 2> {log}"
 
 rule deeptools_correlation:
-    input: "results/qc/multibamsum.npz"
+    input: "results_10M/qc/multibamsum.npz"
     output:
-        fig="results/qc/pearsoncor_multibamsum.png",
-        matrix="results/qc/pearsoncor_multibamsum_matrix.txt"
+        fig="results_10M/qc/pearsoncor_multibamsum.png",
+        matrix="results_10M/qc/pearsoncor_multibamsum_matrix.txt"
     log:
-        "logs/correlation.deepTools"
+        "results_10M/logs/correlation.deepTools"
     shell:
         "plotCorrelation \
         --corData {input} \
@@ -361,12 +358,12 @@ rule deeptools_correlation:
 
 rule deeptools_coverage:
     input:
-        bam ="results/bowtie2/{sample}_PPq30.sorted.dedup.bam",
-        bai ="results/bowtie2/{sample}_PPq30.sorted.dedup.bai"
+        bam ="results_10M/bowtie2/{sample}_PPq30.sorted.dedup.bam",
+        bai ="results_10M/bowtie2/{sample}_PPq30.sorted.dedup.bai"
     output:
-        "results/qc/{sample}.SeqDepthNorm.bw"
+        "results_10M/qc/{sample}.SeqDepthNorm.bw"
     log:
-        "logs/{sample}_coverage.deepTools"
+        "results_10M/logs/{sample}_coverage.deepTools"
     shell:
         "bamCoverage \
         --bam {input.bam} \
@@ -378,15 +375,15 @@ rule deeptools_coverage:
 
 rule deeptools_fingerprint:
     input:
-        bam = expand(["results/bowtie2/{sample}_PPq30.sorted.dedup.bam"], sample=all_samples),
-        bai = expand(["results/bowtie2/{sample}_PPq30.sorted.dedup.bai"], sample=all_samples)
+        bam = expand(["results_10M/bowtie2/{sample}_PPq30.sorted.dedup.bam"], sample=all_samples),
+        bai = expand(["results_10M/bowtie2/{sample}_PPq30.sorted.dedup.bai"], sample=all_samples)
     output:
-        fig="results/qc/multiBAM_fingerprint.png",
-        metrics="results/qc/multiBAM_fingerprint_metrics.txt",
-        rawcounts="results/qc/multiBAM_fingerprint_rawcounts.txt"
+        fig="results_10M/qc/multiBAM_fingerprint.png",
+        metrics="results_10M/qc/multiBAM_fingerprint_metrics.txt",
+        rawcounts="results_10M/qc/multiBAM_fingerprint_rawcounts.txt"
     threads: 32
     log:
-        "logs/fingerprint.deepTools"
+        "results_10M/logs/fingerprint.deepTools"
     shell:
         "plotFingerprint -p {threads} \
         -b {input.bam} \
@@ -399,13 +396,13 @@ rule deeptools_fingerprint:
 
 rule deeptools_bamPEFragmentSize:
     input:
-        bam = expand(["results/bowtie2/{sample}_PPq30.sorted.dedup.bam"], sample=all_samples),
-        bai = expand(["results/bowtie2/{sample}_PPq30.sorted.dedup.bai"], sample=all_samples)
+        bam = expand(["results_10M/bowtie2/{sample}_PPq30.sorted.dedup.bam"], sample=all_samples),
+        bai = expand(["results_10M/bowtie2/{sample}_PPq30.sorted.dedup.bai"], sample=all_samples)
     output:
-        fig="results/qc/bamPEFragmentSize_hist.png",
-        rawcounts="results/qc/bamPEFragmentSize_rawcounts.tab"
+        fig="results_10M/qc/bamPEFragmentSize_hist.png",
+        rawcounts="results_10M/qc/bamPEFragmentSize_rawcounts.tab"
     log:
-        "logs/bamPEFragmentSize.deepTools"
+        "results_10M/logs/bamPEFragmentSize.deepTools"
     shell:
         "bamPEFragmentSize \
         -hist {output.fig} \
@@ -424,7 +421,7 @@ rule trim_read1:
     input:
         "rawdata/{sample}_R1.fastq.gz"
     output:
-        "results/qc/{sample}_R1_trimmed.fastq.gz"
+        "results_10M/qc/{sample}_R1_trimmed.fastq.gz"
     run:
         shell("python scripts/trimfastq.py {input} 50 | gzip -nc > {output}")
 
@@ -432,13 +429,13 @@ rule trim_read1:
 
 rule align_trimmed_read1:
     input:
-        "results/qc/{sample}_R1_trimmed.fastq.gz"
+        "results_10M/qc/{sample}_R1_trimmed.fastq.gz"
     output:
-        "results/bowtie2/{sample}_R1_trimmed.bam"
+        "results_10M/bowtie2/{sample}_R1_trimmed.bam"
     params:
         index="genomes/Scras_dunnart_assem1.0_pb-ont-illsr_flyeassem_red-rd-scfitr2_pil2xwgs2_60chr"
     log:
-        "logs/{sample}_align_trimmed_read1.log"
+        "results_10M/logs/{sample}_align_trimmed_read1.log"
     shell:
         "bowtie2 -x {params.index} -U {input} 2> {log} | \
         samtools view -Su - | samtools sort -o {output} - 2> {log}"
@@ -447,19 +444,19 @@ rule align_trimmed_read1:
 
 rule filter_sort_trimmed_alignment:
     input:
-        "results/bowtie2/{sample}_R1_trimmed.bam"
+        "results_10M/bowtie2/{sample}_R1_trimmed.bam"
     output:
-        bam = "results/bowtie2/{sample}_R1_trimmed_q30.bam"
+        bam = "results_10M/bowtie2/{sample}_R1_trimmed_q30.bam"
     log:
-        "logs/{sample}_align_trimmed_read1_filter.log"
+        "results_10M/logs/{sample}_align_trimmed_read1_filter.log"
     run:
         shell("samtools view -F 1804 -q 30 -b {input} -o {output.bam}")
 
 rule bamtobed_crossC:
     input:
-        "results/bowtie2/{sample}_R1_trimmed_q30.bam"
+        "results_10M/bowtie2/{sample}_R1_trimmed_q30.bam"
     output:
-        tagAlign = "results/bed/{sample}_R1_trimmed_q30_SE.tagAlign.gz"
+        tagAlign = "results_10M/bed/{sample}_R1_trimmed_q30_SE.tagAlign.gz"
     shell:
         """
         bedtools bamtobed -i {input} | \
@@ -471,10 +468,10 @@ rule bamtobed_crossC:
 ## Estimate read length from first 100 reads
 rule subsample_aligned_reads:
     input:
-        "results/bed/{sample}_R1_trimmed_q30_SE.tagAlign.gz"
+        "results_10M/bed/{sample}_R1_trimmed_q30_SE.tagAlign.gz"
     output:
-        subsample = "results/bed/{sample}.filt.sample.15Mreads.SE.tagAlign.gz",
-        tmp = "results/bed/{sample}_R1_trimmed_q30_SE.tagAlign.tmp"
+        subsample = "results_10M/bed/{sample}.filt.sample.15Mreads.SE.tagAlign.gz",
+        tmp = "results_10M/bed/{sample}_R1_trimmed_q30_SE.tagAlign.tmp"
     params:
         nreads= 15000000
     run:
@@ -492,12 +489,12 @@ rule subsample_aligned_reads:
 
 rule cross_correlation_SSP:
     input:
-        "results/bed/{sample}.filt.sample.15Mreads.SE.tagAlign.gz"
+        "results_10M/bed/{sample}.filt.sample.15Mreads.SE.tagAlign.gz"
     output:
-        CC_SCORES_FILE="results/qc/{sample}_filt_15Mreads.SE.cc.qc",
-        CC_PLOT_FILE="results/qc/{sample}_filt_15Mreads.SE.cc.plot.pdf"
+        CC_SCORES_FILE="results_10M/qc/{sample}_filt_15Mreads.SE.cc.qc",
+        CC_PLOT_FILE="results_10M/qc/{sample}_filt_15Mreads.SE.cc.plot.pdf"
     log:
-        "logs/{sample}_filt_15Mreads.SE.spp.log"
+        "results_10M/logs/{sample}_filt_15Mreads.SE.spp.log"
     params:
         EXCLUSION_RANGE_MIN=-500,
         EXCLUSION_RANGE_MAX=60
@@ -518,50 +515,50 @@ rule cross_correlation_SSP:
 
 rule call_peaks_macs2:
     input:
-        control = "results/bowtie2/{control}_PPq30.sorted.dedup.bam",
-        case = "results/bowtie2/{case}_PPq30.sorted.dedup.bam"
+        control = "results_10M/bowtie2/{control}_PPq30.sorted.dedup.bam",
+        case = "results_10M/bowtie2/{case}_PPq30.sorted.dedup.bam"
     output:
-        "results/macs2/{case}_vs_{control}_macs2_peaks.xls",
-        "results/macs2/{case}_vs_{control}_macs2_summits.bed",
-        "results/macs2/{case}_vs_{control}_macs2_peaks.narrowPeak",
+        "results_10M/macs2/{case}_vs_{control}_macs2_peaks.xls",
+        "results_10M/macs2/{case}_vs_{control}_macs2_summits.bed",
+        "results_10M/macs2/{case}_vs_{control}_macs2_peaks.narrowPeak",
     log:
-        "logs/{case}_vs_{control}_call_peaks_macs2.log"
+        "results_10M/logs/{case}_vs_{control}_call_peaks_macs2.log"
     params:
-        name = "{case}_vs_{control}_macs2",
+        name = "{case}_vs_{control}_macs2",
     shell:
         " macs2 callpeak -f BAMPE -t {input.case} \
         -c {input.control} --keep-dup all \
-        --outdir results/macs2/ -p 0.01 \
+        --outdir results_10M/macs2/ -p 0.01 \
         -n {params.name} \
         -g 2740338543 2> {log} "
 
 rule call_peaks_macs2_pooled_replicates:
     input:
-        H3K4me3 = "results/bowtie2/H3K4me3_pooled_PPq30.sorted.dedup.bam",
-        H3K27ac = "results/bowtie2/H3K27ac_pooled_PPq30.sorted.dedup.bam",
-        input = "results/bowtie2/input_pooled_PPq30.sorted.dedup.bam"
+        H3K4me3 = "results_10M/bowtie2/H3K4me3_pooled_PPq30.sorted.dedup.bam",
+        H3K27ac = "results_10M/bowtie2/H3K27ac_pooled_PPq30.sorted.dedup.bam",
+        input = "results_10M/bowtie2/input_pooled_PPq30.sorted.dedup.bam"
     output:
-        "results/macs2/H3K4me3_pooled_macs2_peaks.xls",
-        "results/macs2/H3K4me3_pooled_macs2_summits.bed",
-        "results/macs2/H3K4me3_pooled_macs2_peaks.narrowPeak",
-        "results/macs2/H3K27ac_pooled_macs2_peaks.xls",
-        "results/macs2/H3K27ac_pooled_macs2_summits.bed",
-        "results/macs2/H3K27ac_pooled_macs2_peaks.narrowPeak"
+        "results_10M/macs2/H3K4me3_pooled_macs2_peaks.xls",
+        "results_10M/macs2/H3K4me3_pooled_macs2_summits.bed",
+        "results_10M/macs2/H3K4me3_pooled_macs2_peaks.narrowPeak",
+        "results_10M/macs2/H3K27ac_pooled_macs2_peaks.xls",
+        "results_10M/macs2/H3K27ac_pooled_macs2_summits.bed",
+        "results_10M/macs2/H3K27ac_pooled_macs2_peaks.narrowPeak"
     log:
-        H3K4me3 ="logs/H3K4me3_pooled_call_peaks_macs2.log",
-        H3K27ac ="logs/H3K27ac_pooled_call_peaks_macs2.log"
+        H3K4me3 ="results_10M/logs/H3K4me3_pooled_call_peaks_macs2.log",
+        H3K27ac ="results_10M/logs/H3K27ac_pooled_call_peaks_macs2.log"
     params:
         H3K4me3 = "H3K4me3_pooled_macs2",
         H3K27ac = "H3K27ac_pooled_macs2"
     run:
         shell(" macs2 callpeak -f BAMPE -t {input.H3K4me3} \
         -c {input.input} --keep-dup all \
-        --outdir results/macs2/ -p 0.01 \
+        --outdir results_10M/macs2/ -p 0.01 \
         -n {params.H3K4me3} \
         -g 2740338543 2> {log.H3K4me3} ")
         shell("macs2 callpeak -f BAMPE -t {input.H3K27ac} \
         -c {input.input} --keep-dup all \
-        --outdir results/macs2/ -p 0.01 \
+        --outdir results_10M/macs2/ -p 0.01 \
         -n {params.H3K27ac} \
         -g 2740338543 2> {log.H3K27ac} ")
 
@@ -574,9 +571,9 @@ rule call_peaks_macs2_pooled_replicates:
 # peak counts in a format that multiqc can handle
 rule get_narrow_peak_counts_for_multiqc:
     input:
-        peaks = "results/macs2/{case}_vs_{control}_macs2_peaks.narrowPeak"
+        peaks = "results_10M/macs2/{case}_vs_{control}_macs2_peaks.narrowPeak"
     output:
-        "results/qc/{case}-vs-{control}-narrowpeak-count_mqc.json"
+        "results_10M/qc/{case}-vs-{control}-narrowpeak-count_mqc.json"
     params:
         peakType = "narrowPeak"
     shell:
@@ -588,11 +585,11 @@ rule get_narrow_peak_counts_for_multiqc:
 ## Convert BAM to tagAlign file for calculating FRiP QC metric (Fraction of reads in peaks)
 rule bamToBed:
     input:
-        "results/bowtie2/{case}_PPq30.sorted.dedup.bam"
+        "results_10M/bowtie2/{case}_PPq30.sorted.dedup.bam"
     output:
-        "results/bowtie2/{case}.bedpe"
+        "results_10M/bowtie2/{case}.bedpe"
     log:
-        "logs/{case}.bamToBed"
+        "results_10M/logs/{case}.bamToBed"
     shell:
         "samtools sort -n {input} | bedtools bamtobed -bedpe -mate1 -i - > {output}"
 
@@ -600,10 +597,10 @@ rule bamToBed:
 ## Fraction of reads in peaks
 rule frip:
     input:
-        bed = "results/bowtie2/{case}.bedpe",
-        peak = "results/macs2/{case}_vs_{control}_macs2_peaks.narrowPeak"
+        bed = "results_10M/bowtie2/{case}.bedpe",
+        peak = "results_10M/macs2/{case}_vs_{control}_macs2_peaks.narrowPeak"
     output:
-        "results/qc/{case}_vs_{control}.frip.txt"
+        "results_10M/qc/{case}_vs_{control}.frip.txt"
     shell:
         "python2.7 scripts/encode_frip.py {input.bed} {input.peak} > {output}"
 
@@ -617,35 +614,35 @@ rule frip:
 
 rule overlap_peaks_H3K4me3:
     input:
-        peak1="results/macs2/A-2_H3K4me3_vs_A-1_input_macs2_peaks.narrowPeak",
-        peak2="results/macs2/B-2_H3K4me3_vs_B-1_input_macs2_peaks.narrowPeak",
-        pooled="results/macs2/H3K4me3_pooled_macs2_peaks.narrowPeak"
+        peak1="results_10M/macs2/A-2_H3K4me3_vs_A-1_input_macs2_peaks.narrowPeak",
+        peak2="results_10M/macs2/B-2_H3K4me3_vs_B-1_input_macs2_peaks.narrowPeak",
+        pooled="results_10M/macs2/H3K4me3_pooled_macs2_peaks.narrowPeak"
     output:
-        "results/macs2/H3K4me3_overlap.narrowPeak"
+        "results_10M/macs2/H3K4me3_overlap.narrowPeak"
     shell:
         "python2.7 scripts/overlap_peaks.py {input.peak1} {input.peak2} {input.pooled} {output}"
 
 
 rule overlap_peaks_H3K27ac:
     input:
-        peak1="results/macs2/A-3_H3K27ac_vs_A-1_input_macs2_peaks.narrowPeak",
-        peak2="results/macs2/B-3_H3K27ac_vs_B-1_input_macs2_peaks.narrowPeak",
-        pooled="results/macs2/H3K27ac_pooled_macs2_peaks.narrowPeak"
+        peak1="results_10M/macs2/A-3_H3K27ac_vs_A-1_input_macs2_peaks.narrowPeak",
+        peak2="results_10M/macs2/B-3_H3K27ac_vs_B-1_input_macs2_peaks.narrowPeak",
+        pooled="results_10M/macs2/H3K27ac_pooled_macs2_peaks.narrowPeak"
     output:
-        "results/macs2/H3K27ac_overlap.narrowPeak"
+        "results_10M/macs2/H3K27ac_overlap.narrowPeak"
     shell:
         "python2.7 scripts/overlap_peaks.py {input.peak1} {input.peak2} {input.pooled} {output}"
 
 ## Fraction of reads in peaks
 rule overlap_frip:
     input:
-        H3K4me3bam = "results/bowtie2/H3K4me3_pooled_PPq30.sorted.dedup.bam",
-        H3K27acbam = "results/bowtie2/H3K27ac_pooled_PPq30.sorted.dedup.bam",
-        H3K4me3bed = "results/macs2/H3K4me3_overlap.narrowPeak",
-        H3K27acbed = "results/macs2/H3K27ac_overlap.narrowPeak"
+        H3K4me3bam = "results_10M/bowtie2/H3K4me3_pooled_PPq30.sorted.dedup.bam",
+        H3K27acbam = "results_10M/bowtie2/H3K27ac_pooled_PPq30.sorted.dedup.bam",
+        H3K4me3bed = "results_10M/macs2/H3K4me3_overlap.narrowPeak",
+        H3K27acbed = "results_10M/macs2/H3K27ac_overlap.narrowPeak"
     output:
-        H3K4me3frip = "results/qc/H3K4me3_overlap.frip",
-        H3K27acfrip = "results/qc/H3K27ac_overlap.frip"
+        H3K4me3frip = "results_10M/qc/H3K4me3_overlap.frip",
+        H3K27acfrip = "results_10M/qc/H3K27ac_overlap.frip"
     run:
         shell("python2.7 scripts/encode_frip.py {input.H3K4me3bam} {input.H3K4me3bed} > {output.H3K4me3frip}")
         shell("python2.7 scripts/encode_frip.py {input.H3K27acbam} {input.H3K27acbed} > {output.H3K27acfrip}")
@@ -657,45 +654,45 @@ rule overlap_frip:
 # rule multiqc:
 #     input:
 #         # fastqc
-#         expand("results/qc/{sample}_R1_fastqc.html", sample=all_samples),
-#         expand("results/qc/{sample}_R2_fastqc.html", sample=all_samples),
-#         expand("results/qc/{sample}_R1_fastqc.zip", sample=all_samples),
-#         expand("results/qc/{sample}_R2_fastqc.zip", sample=all_samples),
+#         expand("results_10M/qc/{sample}_R1_fastqc.html", sample=all_samples),
+#         expand("results_10M/qc/{sample}_R2_fastqc.html", sample=all_samples),
+#         expand("results_10M/qc/{sample}_R1_fastqc.zip", sample=all_samples),
+#         expand("results_10M/qc/{sample}_R2_fastqc.zip", sample=all_samples),
 #         # bowtie2
-#         expand("logs/{sample}.align", sample=all_samples),
-#         expand("results/qc/{sample}.flagstat.qc", sample=all_samples),
+#         expand("results_10M/logs/{sample}.align", sample=all_samples),
+#         expand("results_10M/qc/{sample}.flagstat.qc", sample=all_samples),
 #         # preseq
-#         expand("results/qc/{sample}.ccurve.txt", sample=all_samples),
-#         expand("results/qc/{sample}.extrap.txt", sample=all_samples),
+#         expand("results_10M/qc/{sample}.ccurve.txt", sample=all_samples),
+#         expand("results_10M/qc/{sample}.extrap.txt", sample=all_samples),
 #         # deepTools
-#         "results/deeptools/multibamsum.npz",
-#         "results/deeptools/multibamsum.tab",
-#         "results/deeptools/pearsoncor_multibamsum.png",
-#         "results/deeptools/pearsoncor_multibamsum_matrix.txt",
-#         expand("results/deeptools/{sample}.SeqDepthNorm.bw", sample=all_samples),
-#         "results/deeptools/multiBAM_fingerprint.png",
-#         "results/deeptools/multiBAM_fingerprint_metrics.txt",
-#         "results/deeptools/multiBAM_fingerprint_rawcounts.txt",
-#         "results/deeptools/plot_coverage.png",
-#         "results/deeptools/plot_coverage_rawcounts.tab",
-#         "results/deeptools/bamPEFragmentSize_hist.png",
-#         "results/deeptools/bamPEFragmentSize_rawcounts.tab",
+#         "results_10M/deeptools/multibamsum.npz",
+#         "results_10M/deeptools/multibamsum.tab",
+#         "results_10M/deeptools/pearsoncor_multibamsum.png",
+#         "results_10M/deeptools/pearsoncor_multibamsum_matrix.txt",
+#         expand("results_10M/deeptools/{sample}.SeqDepthNorm.bw", sample=all_samples),
+#         "results_10M/deeptools/multiBAM_fingerprint.png",
+#         "results_10M/deeptools/multiBAM_fingerprint_metrics.txt",
+#         "results_10M/deeptools/multiBAM_fingerprint_rawcounts.txt",
+#         "results_10M/deeptools/plot_coverage.png",
+#         "results_10M/deeptools/plot_coverage_rawcounts.tab",
+#         "results_10M/deeptools/bamPEFragmentSize_hist.png",
+#         "results_10M/deeptools/bamPEFragmentSize_rawcounts.tab",
 #         # phantomPeaks
-#         expand("results/phantomPeaks/{sample}.spp.pdf", sample = IPS),
-#         expand("results/phantomPeaks/{sample}.spp.Rdata", sample = IPS),
-#         expand("results/phantomPeaks/{sample}.spp.out", sample = IPS),
+#         expand("results_10M/phantomPeaks/{sample}.spp.pdf", sample = IPS),
+#         expand("results_10M/phantomPeaks/{sample}.spp.Rdata", sample = IPS),
+#         expand("results_10M/phantomPeaks/{sample}.spp.out", sample = IPS),
 #         # macs2
-#         expand("results/macs2/{case}_vs_{control}_macs2_peaks.narrowPeak", zip, case=IPS, control=INPUTS),
-#         expand("results/macs2/{case}_vs_{control}_macs2_peaks.xls", zip, case=IPS, control=INPUTS),
-#         expand("results/macs2/{case}_vs_{control}_macs2_summits.bed", zip, case=IPS, control=INPUTS),
-#         expand("results/macs2/{case}-vs-{control}-narrowpeak-count_mqc.json", zip, case=IPS, control=INPUTS),
-#         expand("results/frip/{case}_vs_{control}.frip.txt", case=IPS, control=INPUTS)
+#         expand("results_10M/macs2/{case}_vs_{control}_macs2_peaks.narrowPeak", zip, case=IPS, control=INPUTS),
+#         expand("results_10M/macs2/{case}_vs_{control}_macs2_peaks.xls", zip, case=IPS, control=INPUTS),
+#         expand("results_10M/macs2/{case}_vs_{control}_macs2_summits.bed", zip, case=IPS, control=INPUTS),
+#         expand("results_10M/macs2/{case}-vs-{control}-narrowpeak-count_mqc.json", zip, case=IPS, control=INPUTS),
+#         expand("results_10M/frip/{case}_vs_{control}.frip.txt", case=IPS, control=INPUTS)
 #     output:
-#         directory("results/multiqc/multiqc_report_data/"),
-#         "results/multiqc/multiqc_report.html",
+#         directory("results_10M/multiqc/multiqc_report_data/"),
+#         "results_10M/multiqc/multiqc_report.html",
 #     conda:
 #         "envs/multiqc.yaml"
 #     log:
-#         "logs/multiqc.log"
+#         "results_10M/logs/multiqc.log"
 #     wrapper:
 #         "0.31.1/bio/multiqc"