- Robin Meyers (@robinmeyers)
- Edits: dfporter
If you simply want to use this workflow, download and extract the latest release. If you intend to modify and further extend this workflow or want to work under version control, fork this repository as outlined in Advanced.
Clone this repository into a new directory
$ git clone git@github.com:robinmeyers/irclip-v2-snakemake my-irclip-experiment
$ cd my-irclip-experiment
Create and activate the conda environment
$ conda env create -q -f envs/conda.yaml -n irclip-v2-snakemake
$ conda activate irclip-v2-snakemake
Run Snakemake on the test data
$ snakemake -j1 --directory .test
Examine the outputs of the workflow in the directory .test/outs/
You will need a STAR index built from the reference genome. Specify the index location in config.yaml. You can build the index as follows:
wget -P indexes/star/genome/ ftp://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_human/release_37/GRCh38.primary_assembly.genome.fa.gz
wget -P indexes/star/genome/ ftp://ftp.ebi.ac.uk/pub/databases/gencode/Gencode_human/release_37/gencode.v37.annotation.gtf.gz
cd indexes/star/genome/
gunzip *
# This builds the index with a 70 GB limit to RAM usage.
STAR --limitGenomeGenerateRAM 70000000000 --runThreadN 12 --runMode genomeGenerate --genomeDir ./ --genomeFastaFiles GRCh38.primary_assembly.genome.fa --sjdbGTFfile gencode.v37.annotation.gtf --sjdbOverhang 75
The config.yaml file also needs to include the location of a genomic features gtf and genomic fasta.
By default these are:
feature_gtf: indexes/star/genome/gencode.v37.annotation.gtf
genomic_fasta: indexes/star/genome/GRCh38.primary_assembly.genome.fa
These files are downloaded as part of building the STAR index (see above).
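Before running the workflow, you can sanity-check that these paths actually point at files on disk. The snippet below uses the default locations from config.yaml shown above:

```shell
# Check that the reference files referenced in config.yaml exist
test -f indexes/star/genome/gencode.v37.annotation.gtf && echo "feature GTF found"
test -f indexes/star/genome/GRCh38.primary_assembly.genome.fa && echo "genomic FASTA found"
```

If either line prints nothing, revisit the STAR index build step or adjust the paths in config.yaml.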
Create a samplesheet based on the template at .test/samples.csv.
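The samplesheet is a plain CSV. The sketch below is illustrative only — the column names and paths here are hypothetical, so copy the actual header row from .test/samples.csv:

```
sample,fastq
sampleA,fastqs/sampleA.fastq.gz
sampleB,fastqs/sampleB.fastq.gz
```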
Configure the workflow using the file config.yaml.
Ensure the correct conda environment is active with
$ conda activate irclip-v2-snakemake
Test your configuration by performing a dry-run via
$ snakemake -n
Execute the workflow locally via
$ snakemake --cores $N
using $N cores, or run it in a cluster environment via
$ snakemake --jobs $N --cluster qsub
For SLURM, you'll need to set up a Snakemake profile
$ snakemake --jobs $N --profile slurm --cluster-config cluster.json
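As an illustration, a minimal Slurm profile is a config.yaml placed under ~/.config/snakemake/slurm/. The values below are a sketch, not a tested configuration — the {cluster.mem} and {cluster.threads} placeholders assume matching keys are defined per rule in the cluster.json passed via --cluster-config:

```yaml
# ~/.config/snakemake/slurm/config.yaml -- illustrative values only
cluster: "sbatch --mem={cluster.mem} --cpus-per-task={cluster.threads}"
jobs: 100
latency-wait: 60
```

Adjust the sbatch flags to match your cluster's partitions and accounting requirements.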
If you installed the workflow by cloning the GitHub repo, you can pull the latest updates to the workflow with
$ git pull --rebase
This will require you to first commit any changes you made to your configuration file before pulling new updates.
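For example, from inside your clone (the commit message is just a placeholder):

```shell
# Commit your local configuration changes first...
git add config.yaml
git commit -m "Project-specific configuration"
# ...then replay your commits on top of the latest upstream changes
git pull --rebase
```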
Then simply rerun the snakemake command.
The following recipe provides established best practices for running and extending this workflow in a reproducible way.
- Fork the repo to a personal or lab account.
- Clone the fork to the desired working directory for the concrete project/run on your machine.
- Create a new branch (the project-branch) within the clone and switch to it. The branch will contain any project-specific modifications (e.g. to configuration, but also to code).
- Modify the config, and any necessary sheets (and probably the workflow) as needed.
- Commit any changes and push the project-branch to your fork on GitHub.
- Run the analysis.
- Optional: Merge back any valuable and generalizable changes to the upstream repo via a pull request. This would be greatly appreciated.
- Optional: Push results (plots/tables) to the remote branch on your fork.
- Optional: Create a self-contained workflow archive for publication along with the paper (snakemake --archive).
- Optional: Delete the local clone/workdir to free space.
Test cases are in the subfolder .test. They are automatically executed via continuous integration with Travis CI.