Large scale inference docs #994

Open · wants to merge 10 commits into base: main
Binary file added docs/_static/ancestor_grouping.png
1 change: 1 addition & 0 deletions docs/_static/example_flow.svg
1 change: 1 addition & 0 deletions docs/_toc.yml
@@ -14,6 +14,7 @@ parts:
- caption: Inference
chapters:
- file: inference
- file: large_scale
- caption: Interfaces
chapters:
- file: api
26 changes: 26 additions & 0 deletions docs/api.rst
@@ -16,6 +16,11 @@ File formats
Sample data
+++++++++++

.. autoclass:: tsinfer.VariantData
:members:
:inherited-members:


.. autoclass:: tsinfer.SampleData
:members:
:inherited-members:
@@ -60,6 +65,27 @@ Running inference

.. autofunction:: tsinfer.post_process

*****************
Batched inference
*****************

.. autofunction:: tsinfer.match_ancestors_batch_init

.. autofunction:: tsinfer.match_ancestors_batch_groups

.. autofunction:: tsinfer.match_ancestors_batch_group_partition

.. autofunction:: tsinfer.match_ancestors_batch_group_finalise

.. autofunction:: tsinfer.match_ancestors_batch_finalise

.. autofunction:: tsinfer.match_samples_batch_init

.. autofunction:: tsinfer.match_samples_batch_partition

.. autofunction:: tsinfer.match_samples_batch_finalise


*****************
Container classes
*****************
2 changes: 1 addition & 1 deletion docs/inference.md
@@ -300,4 +300,4 @@ The final phase of a `tsinfer` inference consists of a number of steps:
section
2. Describe the structure of the output tree sequences; how the
nodes are mapped, what the time values mean, etc.
:::
:::
177 changes: 177 additions & 0 deletions docs/large_scale.md
@@ -0,0 +1,177 @@
---
jupytext:
text_representation:
extension: .md
format_name: myst
format_version: 0.12
jupytext_version: 1.9.1
kernelspec:
display_name: Python 3
language: python
name: python3
---

:::{currentmodule} tsinfer
:::

(sec_large_scale)=

# Large Scale Inference

tsinfer scales well and has been successfully used with datasets up to half a
million samples. Here we detail considerations and tips for each step of the
inference process to help you scale up your analysis. A Snakemake pipeline
which implements this parallelisation scheme is available at
<https://github.com/benjeffery/tsinfer-snakemake>.

(sec_large_scale_ancestor_generation)=

## Data preparation

For large scale inference the data must be in [VCF Zarr](https://github.com/sgkit-dev/vcf-zarr-spec)
format, read by the {class}`VariantData` class. [bio2zarr](https://github.com/sgkit-dev/bio2zarr)
is recommended for conversion from VCF. [sgkit](https://github.com/sgkit-dev/sgkit) can then
be used to perform initial filtering.


## Ancestor generation

Ancestor generation is generally the fastest step in inference and is not yet
parallelised out-of-core in tsinfer, that is, it cannot yet be distributed
across multiple machines. However, it scales well on machines with
many cores and hyperthreading via the `num_threads` argument to
{meth}`generate_ancestors`. The limiting factor is often that the
entire genotype array for the contig being inferred needs to fit in RAM.
This is the high-water mark for memory usage in tsinfer.
Note the `genotype_encoding` argument: setting this to
{class}`tsinfer.GenotypeEncoding.ONE_BIT` reduces the memory footprint of
the genotype array by a factor of 8, for a surprisingly small increase in
runtime. With this encoding, the RAM needed is roughly
`num_sites * num_samples * ploidy / 8` bytes. However, this encoding only
supports biallelic sites with no missing data.
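
As a worked example of the formula above, the helper below computes the
genotype-array footprint for some illustrative dataset sizes. The function
name and the example figures are made up for illustration, not taken from
tsinfer:

```python
def genotype_array_ram_gib(num_sites, num_samples, ploidy, bits_per_genotype=1):
    """RAM needed for the genotype array, in GiB.

    bits_per_genotype=1 corresponds to tsinfer.GenotypeEncoding.ONE_BIT;
    the default encoding uses a full byte (8 bits) per genotype.
    """
    total_bits = num_sites * num_samples * ploidy * bits_per_genotype
    return total_bits / 8 / 2**30


# A hypothetical contig: 1 million sites, 100,000 diploid samples.
print(genotype_array_ram_gib(1_000_000, 100_000, 2))     # ONE_BIT: ~23.3 GiB
print(genotype_array_ram_gib(1_000_000, 100_000, 2, 8))  # default: ~186 GiB
```

So with the one-bit encoding this hypothetical contig fits comfortably on a
32 GiB machine, while the default encoding would not.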

## Ancestor matching

Ancestor matching is one of the more time-consuming steps of inference. It
proceeds in groups, progressively growing the tree sequence with younger
ancestors. At each stage the parallelism is limited to the number of ancestors
in the current group: an ancestor can only be matched once everything it might
inherit from has been matched in an earlier group. For a typical human data set
the number of ancestors per group varies from single digits up to approximately
the number of samples.
The plot below shows the number of ancestors matched in each group for a typical
human data set; groups are ordered by time, so the earliest groups contain the
oldest ancestors:

```{figure} _static/ancestor_grouping.png
:width: 80%
```

There are five tsinfer API methods that can be used to parallelise ancestor
matching: {meth}`match_ancestors_batch_init`,
{meth}`match_ancestors_batch_groups`,
{meth}`match_ancestors_batch_group_partition`,
{meth}`match_ancestors_batch_group_finalise` and
{meth}`match_ancestors_batch_finalise`.

Initially {meth}`match_ancestors_batch_init` should be called to
set up the batch matching and to determine the groupings of ancestors.
This method writes a file `metadata.json` to `work_dir` that contains
a JSON-encoded dictionary with configuration for later steps, and a key
`ancestor_grouping` which is a list of dictionaries, each containing the
list of ancestors in that group (key: `ancestors`) and a proposed partitioning of
those ancestors into sets that can be matched in parallel (key: `partitions`).
The dictionary is also returned by the method.
The partitioning is controlled by the `min_work_per_job` and `max_num_partitions`
arguments. Ancestors are placed in a partition until the sum of their lengths exceeds
`min_work_per_job`, at which point a new partition is started. However, the number
of partitions is not allowed to exceed `max_num_partitions`. It is suggested to set
`max_num_partitions` to around 3-4x the number of worker nodes available, and
`min_work_per_job` to around 2,000,000 for a typical human data set.
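
The greedy packing rule just described can be sketched as a pure function.
This is an illustration of the documented behaviour, not tsinfer's actual
implementation, and the ancestor lengths below are made up:

```python
def propose_partitions(ancestor_lengths, min_work_per_job, max_num_partitions):
    """Pack ancestors into partitions in order: once the summed length in the
    current partition exceeds min_work_per_job, start a new one, but never
    create more than max_num_partitions partitions."""
    partitions = [[]]
    work = 0
    for ancestor, length in ancestor_lengths.items():
        if work > min_work_per_job and len(partitions) < max_num_partitions:
            partitions.append([])
            work = 0
        partitions[-1].append(ancestor)
        work += length
    return partitions


# Hypothetical lengths (in bases) for the six ancestors of one group.
lengths = {10: 1_500_000, 11: 700_000, 12: 400_000, 13: 1_200_000,
           14: 900_000, 15: 500_000}
print(propose_partitions(lengths, min_work_per_job=2_000_000,
                         max_num_partitions=4))
# [[10, 11], [12, 13, 14], [15]]
```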

Each group is then matched in turn, either by calling {meth}`match_ancestors_batch_groups`
to match without partitioning, or by calling {meth}`match_ancestors_batch_group_partition`
many times in parallel followed by a single call to {meth}`match_ancestors_batch_group_finalise`.
Each call to {meth}`match_ancestors_batch_groups` or {meth}`match_ancestors_batch_group_finalise`
outputs the tree sequence to `work_dir`, which is then used by the next group. The length of
the `ancestor_grouping` in the metadata dictionary determines the group numbers that these methods
will need to be called for, and the length of the `partitions` list in each group determines
the number of calls to {meth}`match_ancestors_batch_group_partition` that are needed (if any).
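
For example, the required calls can be read straight off the metadata
dictionary. The small two-group metadata below is hypothetical:

```python
# Hypothetical metadata, as returned by match_ancestors_batch_init and
# written to metadata.json.
metadata = {
    "ancestor_grouping": [
        {"ancestors": [0, 1, 2], "partitions": None},
        {"ancestors": [3, 4, 5, 6], "partitions": [[3, 4], [5, 6]]},
    ]
}

plan = []
for group_index, group in enumerate(metadata["ancestor_grouping"]):
    partitions = group["partitions"]
    if partitions is None:
        plan.append(f"group {group_index}: one match_ancestors_batch_groups call")
    else:
        plan.append(
            f"group {group_index}: {len(partitions)} parallel "
            "match_ancestors_batch_group_partition calls, "
            "then match_ancestors_batch_group_finalise"
        )
print("\n".join(plan))
```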

{meth}`match_ancestors_batch_groups` matches groups, without partitioning, from
`group_index_start` (inclusively) to `group_index_end` (exclusively). Combining
many groups into one call reduces the overhead from job submission and
start-up times, but note that on job failure the process can only be resumed
from the last `group_index_end`.

To match a single group in parallel, call {meth}`match_ancestors_batch_group_partition`
once for each partition listed in the `ancestor_grouping[group_index]['partitions']` list,
incrementing `partition_index`. This will match the ancestors, placing the match data in
`work_dir`. Once all partitions are complete, a single call to
{meth}`match_ancestors_batch_group_finalise` will then insert the matches and
output the tree sequence to `work_dir`.

Each call to {meth}`match_ancestors_batch_groups` and {meth}`match_ancestors_batch_group_finalise` results in a tree sequence being written to `work_dir`.
These tree sequences are essentially checkpoints from which the batch matching
workflow can be resumed on job failure.

Finally, after the last group has been matched, call
{meth}`match_ancestors_batch_finalise` to combine the groups into a single
tree sequence.
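
Putting these steps together, the whole flow can be sketched as a driver loop.
To keep the sketch self-contained and runnable, the four tsinfer batch methods
are passed in as callables; in real use they would be thin wrappers around
{meth}`match_ancestors_batch_groups`, {meth}`match_ancestors_batch_group_partition`,
{meth}`match_ancestors_batch_group_finalise` and
{meth}`match_ancestors_batch_finalise`, and the partition calls would be
submitted as separate cluster jobs rather than run in a local loop:

```python
def run_ancestor_matching(metadata, match_groups, match_partition,
                          finalise_group, finalise):
    """Drive batch ancestor matching over every group in the metadata.

    match_groups(start, end): match groups start..end-1 without partitioning.
    match_partition(group_index, partition_index): match one partition.
    finalise_group(group_index): combine a group's partition match data.
    finalise(): combine all groups into the final tree sequence.
    """
    for group_index, group in enumerate(metadata["ancestor_grouping"]):
        if group["partitions"] is None:
            match_groups(group_index, group_index + 1)
        else:
            # In real use these calls run as parallel cluster jobs.
            for partition_index in range(len(group["partitions"])):
                match_partition(group_index, partition_index)
            finalise_group(group_index)
    return finalise()
```

Consecutive unpartitioned groups could also be merged into a single
`match_groups(start, end)` call to reduce job-submission overhead, at the cost
of fewer resume points.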

The partitioning in `metadata.json` does not have to be used for every group.
As the early groups are not matched against a large tree sequence, it is often
faster not to partition the first half of the groups, depending on job set-up
and queueing time on your cluster.

Calls to {meth}`match_ancestors_batch_group_partition` will only use a single core, but
{meth}`match_ancestors_batch_groups` will use as many cores as `num_threads` is set to.
Therefore this value and the cluster resources requested should be scaled with
the number of ancestors, which can be read from the metadata dictionary.

As an example of how the API methods can be used together, suppose the metadata dictionary
created by {meth}`match_ancestors_batch_init` contains the following:

```python
{
    "ancestor_grouping": [
        {"ancestors": [0, ... 9], "partitions": None},
        {
            "ancestors": [10, ... 15],
            "partitions": [[10, 11, 12], [13, 14, 15]],
        },
        {"ancestors": [16, ... 19], "partitions": None},
        {"ancestors": [20, ... 25], "partitions": None},
        {"ancestors": [26, ... 30], "partitions": None},
        {
            "ancestors": [31, ... 41],
            "partitions": [[31, 32, 33, 34, 35, 36], [37, 38, 39, 40, 41]],
        },
        {"ancestors": [42, ... 45], "partitions": None},
        {"ancestors": [46, ... 50], "partitions": None},
        {
            "ancestors": [51, ... 65],
            "partitions": [
                [51, 52, 53, 54],
                [55, 56, 57, 58],
                [59, 60, 61, 62, 63, 64, 65],
            ],
        },
    ]
}
```
The flow could then look like the following diagram, where calls on the same
horizontal line can be done in parallel (method names are shortened):

```{figure} _static/example_flow.svg
:width: 80%
```

Note that groups 1, 5 and 8 can be partitioned, but only groups 5 and 8 are
actually partitioned in this example; as stated above, partitioning is optional
for each group. Groups 0-4 are matched in one call, and groups 6 and 7 are
matched in two calls but could have been matched in one. By splitting groups 6
and 7 the flow gains an additional resume point in case of job failure, at the
cost of extra job start-up and queueing time.


## Sample matching

Sample matching is far simpler than ancestor matching, as it is essentially the
same as matching a single group of ancestors. Three API methods work together
to enable distributed sample matching: {meth}`match_samples_batch_init`,
{meth}`match_samples_batch_partition` and {meth}`match_samples_batch_finalise`.

{meth}`match_samples_batch_init` should be called to set up the batch matching and to determine the
groupings of samples. Similar to {meth}`match_ancestors_batch_init`, it has
`min_work_per_job` and `max_num_partitions` arguments to control the level of
parallelism. The method writes a file `metadata.json` to the directory
`work_dir` that contains a JSON-encoded dictionary with configuration for later
steps; this dictionary is also returned by the call. The `num_partitions` key
in this dictionary is the number of times {meth}`match_samples_batch_partition`
will need to be called, with each partition index passed as the
`partition_index` argument. These calls can happen in parallel and write match
data to `work_dir`, which is then used by {meth}`match_samples_batch_finalise`
to output the tree sequence.
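
The sample-matching flow is short enough to sketch in a few lines. As above,
the tsinfer methods are passed in as callables so the sketch is self-contained;
in real use `match_partition` would wrap {meth}`match_samples_batch_partition`
(submitted as parallel jobs) and `finalise` would wrap
{meth}`match_samples_batch_finalise`:

```python
def run_sample_matching(metadata, match_partition, finalise):
    """Match every sample partition, then combine them into the tree sequence.

    metadata is the dictionary returned by match_samples_batch_init.
    """
    for partition_index in range(metadata["num_partitions"]):
        match_partition(partition_index)  # these calls can run in parallel
    return finalise()
```
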
2 changes: 1 addition & 1 deletion tsinfer/formats.py
@@ -2308,7 +2308,7 @@ class VariantData(SampleData):
the inference process will have ``inferred_ts.num_samples`` equal to double
the number returned by ``VariantData.num_samples``.

:param Union(str, zarr.hierarchy.Group) path_or_zarr: The input dataset in
:param Union(str, zarr.Group) path_or_zarr: The input dataset in
`VCF Zarr <https://github.com/sgkit-dev/vcf-zarr-spec>`_ format.
This can either be a path to the Zarr dataset saved on disk, or the
Zarr object itself.