
Commit

Merge branch 'CW-3110_docs_update' into 'dev'
CW-3110 docs and resources

Closes CW-3110

See merge request epi2melabs/workflows/wf-cas9!49
nrhorner committed Dec 20, 2023
2 parents 6b619e8 + ace73e8 commit c1b01ec
Showing 27 changed files with 497 additions and 229 deletions.
6 changes: 3 additions & 3 deletions .gitlab-ci.yml
Original file line number Diff line number Diff line change
@@ -9,7 +9,7 @@ variables:
# Only common file inputs and option values need to be given here
# (not things such as -profile)
CI_FLAVOUR: "new"
NF_WORKFLOW_OPTS: "--fastq test_data/fastq \
NF_WORKFLOW_OPTS: "-executor.\\$$local.memory 16GB --fastq test_data/fastq \
--reference_genome test_data/grch38/grch38_chr19_22.fa.gz \
--targets test_data/targets.bed --full_report --threads 4"

@@ -34,12 +34,12 @@ docker-run:
- if: $MATRIX_NAME == "multi-sample"
variables:
NF_WORKFLOW_OPTS:
"--fastq test_data/fastq \
"-executor.\\$$local.memory 16GB --fastq test_data/fastq \
--reference_genome test_data/grch38/grch38_chr19_22.fa.gz \
--targets test_data/targets.bed --full_report --threads 4"
- if: $MATRIX_NAME == "one-sample"
variables:
NF_WORKFLOW_OPTS:
"--fastq 'test_data/fastq/sample_2/sample_2 ontarget.fastq.gz' \
"-executor.\\$$local.memory 16GB --fastq 'test_data/fastq/sample_2/sample_2 ontarget.fastq.gz' \
--reference_genome test_data/grch38/grch38_chr19_22.fa.gz \
--targets test_data/targets.bed --full_report --threads 4"
12 changes: 2 additions & 10 deletions .pre-commit-config.yaml
@@ -1,22 +1,14 @@
repos:
- repo: local
hooks:
- id: docs_schema
name: docs_schema
entry: parse_docs -p docs -e .md -s intro links -oj nextflow_schema.json
language: python
always_run: true
pass_filenames: false
additional_dependencies:
- epi2melabs
- id: docs_readme
name: docs_readme
entry: parse_docs -p docs -e .md -s header intro quickstart links -ot README.md
entry: parse_docs -p docs -e .md -s 01_brief_description 02_introduction 03_compute_requirements 04_install_and_run 05_related_protocols 06_inputs 07_outputs 08_pipeline_overview 09_troubleshooting 10_FAQ 11_other -ot README.md -od output_definition.json -ns nextflow_schema.json
language: python
always_run: true
pass_filenames: false
additional_dependencies:
- epi2melabs
- epi2melabs>=0.0.50
- id: build_models
name: build_models
entry: datamodel-codegen --strict-nullable --base-class workflow_glue.results_schema_helpers.BaseModel --use-schema-description --disable-timestamp --input results_schema.yml --input-file-type openapi --output bin/workflow_glue/results_schema.py
232 changes: 170 additions & 62 deletions README.md
@@ -1,105 +1,213 @@
# wf-cas9

wf-cas9 is a [nextflow](https://www.nextflow.io/) workflow
for the multiplexed analysis of Oxford Nanopore Cas9 enrichment sequencing.
# Cas9 enrichment workflow

Summarise the results of Cas9 enrichment sequencing.



## Introduction

<!---This section of documentation typically contains a list of things the workflow can perform also any other intro.--->
The ONT Cas9 sequencing kit allows the enrichment of genomic
regions of interest by amplifying target regions from adapters ligated to Cas9 cleavage sites.
The purpose of this workflow is to assess the effectiveness of such Cas9 enrichment,
but it can be applied to other enrichment approaches. The workflow outputs
help assess the effectiveness of the enrichment strategy and can be used to diagnose issues such as poorly performing probes.

Inputs to the workflow are: a reference genome file, FASTQ reads from enrichment sequencing,
and a BED file detailing the regions of interests (targets).
The main outputs are a report containing summary statistics and plots which give an overview of
the enrichment, and a BAM file with target-overlapping reads.
This workflow can be used for the following:

The main steps of the workflow are alignment of reads to the genome using
[minimap2](https://github.com/lh3/minimap2) and the analysis
of read-target overlap with [bedtools](https://github.com/arq5x/bedtools2).
+ Obtaining simple Cas9 enrichment sequencing summaries.
+ Generating statistics of the coverage of each target.
+ Plotting coverage across each target.
+ Identifying off-target hot-spots.



## Compute requirements

Recommended requirements:

+ CPUs = 16
+ Memory = 64 GB

Minimum requirements:

+ CPUs = 6
+ Memory = 16 GB

## Quickstart
Approximate run time: approximately 30 minutes to process a single sample of 100K reads with the minimum requirements.

The workflow uses [nextflow](https://www.nextflow.io/) to manage compute and
software resources, as such nextflow will need to be installed before attempting
to run the workflow.
ARM processor support: True

The workflow can currently be run using either
[Docker](https://www.docker.com/products/docker-desktop) or
[singularity](https://docs.sylabs.io/guides/latest/user-guide/) to provide isolation of
the required software. Both methods are automated out-of-the-box provided
either docker or singularity is installed.

It is not required to clone or download the git repository in order to run the workflow.
For more information on running EPI2ME Labs workflows [visit our website](https://labs.epi2me.io/wfindex).

**Workflow options**

To obtain the workflow, having installed `nextflow`, users can run:
## Install and run

```
nextflow run epi2me-labs/wf-cas9 --help
```
to see the options for the workflow.
<!---Nextflow text remains the same across workflows, update example cmd and demo data sections.--->
These are instructions to install and run the workflow on command line. You can also access the workflow via the [EPI2ME application](https://labs.epi2me.io/downloads/).

The workflow uses [Nextflow](https://www.nextflow.io/) to manage compute and software resources, therefore nextflow will need to be installed before attempting to run the workflow.

The workflow can currently be run using either [Docker](https://www.docker.com/products/docker-desktop) or
[Singularity](https://docs.sylabs.io/guides/3.0/user-guide/index.html) to provide isolation of
the required software. Both methods are automated out-of-the-box provided
either docker or singularity is installed. This is controlled by the [`-profile`](https://www.nextflow.io/docs/latest/config.html#config-profiles) parameter as exemplified below.

The main inputs are:
* Folder of FASTQ reads.
* Genome reference file.
* Target BED file with 4 columns:
* chromosome
* start
* end
* target_name
It is not required to clone or download the git repository in order to run the workflow.
More information on running EPI2ME workflows can be found on our [website](https://labs.epi2me.io/wfindex).

The following command can be used to obtain the workflow. This will pull the repository in to the assets folder of nextflow and provide a list of all parameters available for the workflow as well as an example command:

```
nextflow run epi2me-labs/wf-cas9 --help
```

A demo dataset with two targets across two chromosomes is provided for testing the workflow. It can be downloaded and unpacked using:

```
wget https://ont-exd-int-s3-euwst1-epi2me-labs.s3.amazonaws.com/wf-cas9/wf-cas9-demo.tar.gz \
&& tar -xvf wf-cas9-demo.tar.gz
```

The workflow can be run with the demo data using:

```
nextflow run epi2me-labs/wf-cas9 \
--fastq wf-cas9-demo/fastq/ \
--reference_genome wf-cas9-demo/grch38/grch38_chr19_22.fa.gz \
--targets wf-cas9-demo/targets.bed
```
For further information about running a workflow on the command line see https://labs.epi2me.io/wfquickstart/



## Related protocols

<!---Hyperlinks to any related protocols that are directly related to this workflow, check the community for any such protocols.--->

This workflow is designed to take input sequences that have been produced from [Oxford Nanopore Technologies](https://nanoporetech.com/) devices.

Find related protocols in the [Nanopore community](https://community.nanoporetech.com/docs/).



## Inputs

### Input Options

| Nextflow parameter name | Type | Description | Help | Default |
|--------------------------|------|-------------|------|---------|
| fastq | string | FASTQ files to use in the analysis. | This accepts one of three cases: (i) the path to a single FASTQ file; (ii) the path to a top-level directory containing FASTQ files; (iii) the path to a directory containing one level of sub-directories which in turn contain FASTQ files. In the first and second case, a sample name can be supplied with `--sample`. In the last case, the data is assumed to be multiplexed with the names of the sub-directories as barcodes. In this case, a sample sheet can be provided with `--sample_sheet`. | |
| reference_genome | string | FASTA reference file. | Full path to a FASTA reference genome file containing the target regions of interest. | |
| targets | string | A tab-delimited BED file of target regions. | Each row should contain the following fields: chromosome/contig name, start and end of target region, and a name to identify the target. An example row would look like this: `chr19 13204400 13211100 SCA6` | |
| analyse_unclassified | boolean | Analyse unclassified reads from input directory. By default the workflow will not process reads in the unclassified directory. | If selected and if the input is a multiplex directory the workflow will also process the unclassified directory. | False |


### Sample Options

| Nextflow parameter name | Type | Description | Help | Default |
|--------------------------|------|-------------|------|---------|
| sample_sheet | string | A CSV file used to map barcodes to sample aliases. The sample sheet can be provided when the input data is a directory containing sub-directories with FASTQ files. | The sample sheet is a CSV file with, minimally, columns named `barcode` and `alias`. Extra columns are allowed. A `type` column is required for certain workflows and should have one of the following values: `test_sample`, `positive_control`, `negative_control`, `no_template_control`. | |
| sample | string | A single sample name for non-multiplexed data. Permissible if passing a single .fastq(.gz) file or directory of .fastq(.gz) files. | | |


### Output Options

| Nextflow parameter name | Type | Description | Help | Default |
|--------------------------|------|-------------|------|---------|
| out_dir | string | Directory for output of all workflow results. | | output |
| full_report | boolean | Select this option to write a full report that contains plots giving a graphical representation of coverage at each target region. | In cases where there are many targets to visualise, the report can be slow to load, so it is recommended to set `full_report` to false in such cases. | False |


### Miscellaneous Options

| Nextflow parameter name | Type | Description | Help | Default |
|--------------------------|------|-------------|------|---------|
| disable_ping | boolean | Enable to prevent sending a workflow ping. | | False |






## Outputs

Output files may be aggregated, including information for all samples, or provided per sample. Per-sample files will be prefixed with respective aliases and represented below as {{ alias }}.

| Title | File path | Description | Per sample or aggregated |
|-------|-----------|-------------|--------------------------|
| workflow report | ./wf-cas9-report.html | Report for all samples | aggregated |
| Sample summary table | ./sample_summary.csv | Summary statistics for each sample. | aggregated |
| Target summary table | ./target_summary.csv | Summary statistics for each sample-target combination. | aggregated |
| Per file read stats | ./{{ alias }}/{{ alias }}_per-read-stats.tsv.gz | A TSV with per-file read statistics, including all samples. | per-sample |
| On-target BAM | ./{{ alias }}/{{ alias }}_on_target.bam | BAM file containing alignments that map to one of the given targets. | per-sample |
| On-target BED | ./{{ alias }}/{{ alias }}_on_target.bed | BED file summarising alignments that map to one of the given targets. | per-sample |
| On-target FASTQ | ./{{ alias }}/{{ alias }}_on_target.fastq | FASTQ file containing reads that map to one of the given targets. | per-sample |




## Pipeline overview

<!---High level numbered list of main steps of the workflow and hyperlink to any tools used. If multiple workflows/different modes perhaps have subheadings and numbered steps. Use nested numbering or bullets where required.--->
### 1. Concatenate input files and generate per read stats.

The [fastcat/bamstats](https://github.com/epi2me-labs/fastcat) tool is used to concatenate multifile samples to be processed by the workflow.
### 2. Align reads to the reference genome.
The reads are then aligned to the reference genome supplied by the user using [minimap2](https://github.com/lh3/minimap2).

### 3. Generate target coverage data.
This stage of the workflow generates per-target coverage data that are used to make
the plots and tables in the report.

First, the reference genome is split into consecutive windows of 100 bp, and the coverage statistics are
calculated across these windows.
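The windowing idea can be sketched in Python. This is a toy stand-in for the bedtools-based windowing the workflow actually performs; `windowed_coverage` is a hypothetical helper, not part of the workflow code:

```python
import numpy as np

def windowed_coverage(depth_per_base, window=100):
    """Mean depth in consecutive fixed-size windows.

    Simplified illustration of the 100 bp windowing described above;
    the final partial window is zero-padded before averaging.
    """
    n = len(depth_per_base)
    n_windows = -(-n // window)  # ceiling division
    padded = np.zeros(n_windows * window)
    padded[:n] = depth_per_base
    return padded.reshape(n_windows, window).mean(axis=1)

# 250 bases at depth 4: two full windows of mean 4, one half-full window.
print(windowed_coverage(np.full(250, 4)))  # -> [4. 4. 2.]
```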

The user must supply a tab-delimited BED file (`targets`) detailing the genomic locations of the targets of interest.
Here is an example file describing two targets.

```
chr19 13204400 13211100 SCA6
chr22 45791500 45799400 SCA10
```
The columns are:
+ chromosome
+ start position
+ end position
+ target name

*Note*: the file does not contain column names.
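Assuming pandas is available, reading such a headerless targets file can be sketched as follows; because the BED file has no column names, the caller assigns them:

```python
import io

import pandas as pd

# Example targets BED from above: tab-delimited, no header row.
bed_text = (
    "chr19\t13204400\t13211100\tSCA6\n"
    "chr22\t45791500\t45799400\tSCA10\n"
)

targets = pd.read_csv(
    io.StringIO(bed_text),
    sep="\t",
    names=["chrom", "start", "end", "target"],  # names supplied here, not in the file
)
targets["length"] = targets["end"] - targets["start"]
print(targets[["target", "length"]])  # SCA6 spans 6700 bp, SCA10 spans 7900 bp
```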

With the target locations defined, [bedtools](https://bedtools.readthedocs.io/en/latest/) is used to generate
per-strand alignment coverage information at each of the targets and also for
background reads (see section 4).

### 4. Background identification
In a sequencing enrichment experiment, it can be useful to know whether any off-target genomic regions (regions that are not defined in the `targets` file) are being preferentially encountered. This information can be used to inform primer design.

We define off-target regions here as any region not within 1kb of a defined target.
Hot-spots are further defined as contiguous regions of off-target alignments containing at least 10 reads.
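As an illustration only (the workflow derives this with bedtools), the two definitions above can be sketched in Python; `find_hotspots` is a hypothetical, single-chromosome helper:

```python
def find_hotspots(alignments, targets, pad=1000, min_reads=10):
    """Toy sketch of hot-spot calling on one chromosome.

    An alignment is off-target if it is not within `pad` bases of any
    target; overlapping off-target alignments are clustered, and clusters
    with at least `min_reads` reads are reported as (start, end, n_reads).
    """
    def near_target(start, end):
        return any(
            start <= t_end + pad and end >= t_start - pad
            for t_start, t_end in targets
        )

    off = sorted((s, e) for s, e in alignments if not near_target(s, e))
    hotspots, cluster, cluster_end = [], [], None
    for s, e in off:
        if cluster and s > cluster_end:  # gap: close the current cluster
            if len(cluster) >= min_reads:
                hotspots.append((cluster[0][0], cluster_end, len(cluster)))
            cluster, cluster_end = [], None
        cluster.append((s, e))
        cluster_end = e if cluster_end is None else max(cluster_end, e)
    if cluster and len(cluster) >= min_reads:
        hotspots.append((cluster[0][0], cluster_end, len(cluster)))
    return hotspots

# Twelve overlapping off-target reads form one hot-spot;
# three isolated reads fall below the 10-read threshold.
example_targets = [(5_000, 6_000)]
example_alignments = [(100_000 + i * 10, 100_500 + i * 10) for i in range(12)]
example_alignments += [(200_000, 200_100)] * 3
print(find_hotspots(example_alignments, example_targets))
# -> [(100000, 100610, 12)]
```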





## Troubleshooting

<!---Any additional tips.--->
+ If the workflow fails please run it with the demo data set to ensure the workflow itself is working. This will help us determine if the issue is related to the environment, input parameters or a bug.
+ See how to interpret some common nextflow exit codes [here](https://labs.epi2me.io/trouble-shooting/).



## FAQs

<!---Frequently asked questions, pose any known limitations as FAQ's.--->

If your question is not answered here, please report any issues or suggestions on the [github issues](https://github.com/epi2me-labs/wf-template/issues) page or start a discussion on the [community](https://community.nanoporetech.com/).

**Workflow outputs**

The primary outputs of the workflow include:

* A folder per sample containing:
* BAM file filtered to contain reads overlapping with targets (*_on_target.bam).
* BED file with alignment information for on-target reads (*on_target.bed).
* BED file containing windowed coverage for each target (*target_cov.bed).
* A simple text file providing a summary of sequencing reads (*.stats).
* sample_summary.csv - read and alignment summary for each sample.
* target_summary.csv - read and alignment summary for reads overlapping each target.
* A combined HTML report detailing the primary findings of the workflow across all samples including:
* Sequencing quality plots.
* Tables summarising targeted sequencing results.
* Plots of stranded coverage at each target.
* Histograms of on and off-target coverage for each sample.
* Off-target hotspot region tables.
## Related blog posts


See the [EPI2ME website](https://labs.epi2me.io/) for lots of other resources and blog posts.


## Useful links

* [nextflow](https://www.nextflow.io/)
* [docker](https://www.docker.com/products/docker-desktop)
* [singularity](https://docs.sylabs.io/guides/3.5/user-guide/introduction.html)
28 changes: 21 additions & 7 deletions bin/workflow_glue/build_tables.py
@@ -51,14 +51,14 @@ def main(args):

frames = []

df_read_to_taget = pd.read_csv(
df_read_to_target = pd.read_csv(
args.read_to_target, sep='\t',
names=['chr', 'start', 'end', 'read_id', 'target', 'sample_id'],
index_col=False)

read_stats_df = pd.read_csv(args.aln_summary, sep='\t', index_col=False)

df_read_to_taget = df_read_to_taget.merge(
df_read_to_target = df_read_to_target.merge(
read_stats_df[['name', 'read_length']],
left_on='read_id', right_on='name')

@@ -69,19 +69,33 @@ def main(args):
df = df.drop(columns=['sample_id'])
if len(df) == 0:
continue
df_read_to_taget = df_read_to_taget.astype({
df_read_to_target = df_read_to_target.astype({
'start': int,
'end': int,
'read_length': int
})
read_len = df_read_to_taget.groupby(['target'])[['read_length']].mean()
read_len.columns = ['mean_read_length']
read_len = (
df_read_to_target[['target', 'read_length']]
.groupby(['target'])
.agg(mean_read_length=('read_length', 'mean'))
)

if len(read_len) > 0:
df = df.merge(read_len, left_on='target', right_index=True)
else:
df['mean_read_length'] = 0

kbases = df_read_to_taget.groupby(['target']).sum()[['read_length']] / 1000
df_read_to_target['align_len'] = (
df_read_to_target['end'] - df_read_to_target['start']
)

# Kbases is the approximate number of bases mapping to a target.
# Deletions and insertions within the reads will mean the actual value may
# vary slightly
kbases = (
df_read_to_target[['target', 'align_len']]
.groupby(['target']).sum() / 1000
)
kbases.columns = ['kbases']
if len(kbases) > 0:
df = df.merge(kbases, left_on='target', right_index=True)
@@ -144,5 +158,5 @@ def main(args):

sample_summary.to_csv('sample_summary.csv')

read_target_summary_table = read_target_summary(df_read_to_taget)
read_target_summary_table = read_target_summary(df_read_to_target)
read_target_summary_table.to_csv('read_target_summary.tsv', sep='\t')
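Outside the context of the workflow, the named-aggregation pattern this commit switches to can be sketched on toy data (the frame below is illustrative, not the workflow's real input):

```python
import pandas as pd

# Illustrative read-to-target frame; in the workflow this comes from a BED join.
df_read_to_target = pd.DataFrame({
    "target": ["SCA6", "SCA6", "SCA10"],
    "start": [100, 200, 50],
    "end": [1100, 1400, 950],
    "read_length": [1050, 1250, 980],
})

# Named aggregation replaces the old groupby + manual column rename.
read_len = (
    df_read_to_target[["target", "read_length"]]
    .groupby("target")
    .agg(mean_read_length=("read_length", "mean"))
)

# kbases now sums alignment spans rather than whole read lengths,
# mirroring the change in this commit.
df_read_to_target["align_len"] = (
    df_read_to_target["end"] - df_read_to_target["start"]
)
kbases = (
    df_read_to_target[["target", "align_len"]]
    .groupby("target").sum() / 1000
)
kbases.columns = ["kbases"]
print(read_len.join(kbases))
```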
