Large scale inference docs #994

Open · wants to merge 10 commits into `main`

Conversation

benjeffery (Member):

Fixes #840

@benjeffery (Member, Author):

@hyanwong Can I have a read-through here?

codecov bot commented Feb 4, 2025:

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 93.17%. Comparing base (e3b2155) to head (d6725ce).
Report is 1 commit behind head on main.

Additional details and impacted files
```
@@            Coverage Diff             @@
##             main     #994      +/-   ##
==========================================
- Coverage   93.17%   93.17%   -0.01%
==========================================
  Files          18       18
  Lines        6374     6369       -5
  Branches     1088     1088
==========================================
- Hits         5939     5934       -5
  Misses        296      296
  Partials      139      139
```

| Flag   | Coverage         | Δ         |
|--------|------------------|-----------|
| C      | 93.17% <100.00%> | -0.01% ⬇️ |
| python | 95.52% <100.00%> | -0.01% ⬇️ |

Flags with carried forward coverage won't be shown. Click here to find out more.


@jeromekelleher (Member) left a comment:

LGTM

:param int min_work_per_job: The minimum amount of work (as a count of genotypes) to
allocate to a single parallel job. If the amount of work in a group of ancestors
exceeds this level it will be broken up into parallel partitions, subject to
the constriant of `max_num_partitions`.
Member:

typo, constriant

Member Author:

Fixed in 6c02df1
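As an aside for readers following this thread: the interaction between `min_work_per_job` and `max_num_partitions` described in the quoted docstring can be sketched in a few lines. The function name and exact splitting rule below are illustrative only, not tsinfer's actual implementation:

```python
def num_partitions(total_work, min_work_per_job, max_num_partitions):
    # Illustrative sketch: split a group's work (a count of genotypes) into
    # parallel partitions of at least min_work_per_job genotypes each,
    # capped at max_num_partitions. Not tsinfer's internal code.
    parts = max(1, total_work // min_work_per_job)
    return min(parts, max_num_partitions)

# A group with 10 million genotypes, min_work_per_job=100_000, cap of 64:
# 10_000_000 // 100_000 = 100 candidate partitions, capped to 64.
```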

@benjeffery mentioned this pull request on Feb 5, 2025
@hyanwong (Member) left a comment:

Great work, thanks @benjeffery!

It's all quite complicated, and I'm not sure I have a feel for how all the parts fit together, but the descriptions are detailed enough that I could follow them without problems.

I guess at some future point we might want a schematic, but that can wait for now. I reckon you can merge this and get someone (e.g. Duncan or Savita?) to try it out.

entire genotype array for the contig being inferred needs to fit in RAM.
This is the high-water mark for memory usage in tsinfer.
Note the `genotype_encoding` argument, setting this to
{class}`tsinfer.GenotypeEncoding.ONE_BIT` reduces the memory footprint of
Member:

Do we need to say that this can't be used if there is missing data?

Member Author:

Fixed in 6c02df1

The plot below shows the number of ancestors matched in each group for a typical
human data set:

```{figure} _static/ancestor_grouping.png
Member:

May be worth indicating that the group number is ordered by time, so that group 0 represents the oldest ancestors?

Member Author:

Fixed in 6c02df1

{meth}`match_ancestors_batch_group_finalise` will then insert the matches and
output the tree sequence to `work_dir`.

At anypoint the process can be resumed from the last successfully completed call to
Member:

"anypoint" -> "any point"


At anypoint the process can be resumed from the last successfully completed call to
{meth}`match_ancestors_batch_groups`. As the tree sequences in `work_dir` checkpoint the
progress.
Member:

I'm not sure I understand / can parse this last sentence

Member Author:

Fixed in 6c02df1 hopefully makes more sense now.
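For readers, the resume-from-checkpoint behaviour being discussed can be sketched as follows. The file layout and names here (`ancestors_group_N.trees` in `work_dir`) are hypothetical, chosen only to illustrate the idea, and are not tsinfer's actual conventions:

```python
from pathlib import Path

def last_completed_group(work_dir, num_groups):
    # Hypothetical checkpoint layout: each completed group writes one tree
    # sequence checkpoint, e.g. work_dir/ancestors_group_3.trees.
    # Returns the index of the last consecutively completed group, or -1.
    last = -1
    for g in range(num_groups):
        if (Path(work_dir) / f"ancestors_group_{g}.trees").exists():
            last = g
        else:
            break
    return last
```

Matching would then resume at group `last + 1` rather than starting from scratch.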

@benjeffery (Member, Author):

Thanks @hyanwong

> I guess at some future point we might want a schematic

I think I'll add a quick one now

benjeffery (Member, Author) commented Feb 6, 2025:

> I guess at some future point we might want a schematic

Added in d039461

Let me know if it makes more sense now!

hyanwong (Member) commented Feb 7, 2025:

Looks great, thanks.

@agladstein (Member) left a comment:

Here are some initial thoughts.

tsinfer scales well and has been successfully used with datasets up to half a
million samples. Here we detail considerations and tips for each step of the
inference process to help you scale up your analysis. A snakemake pipeline
which implements this parallelisation scheme is available at https://github.com/benjeffery/tsinfer-snakemake.
Member:

Nit: It would be good to make this a link.

Suggested change:

```diff
-which implements this parallelisation scheme is available at https://github.com/benjeffery/tsinfer-snakemake.
+which implements this parallelisation scheme is available at <https://github.com/benjeffery/tsinfer-snakemake>.
```

It would also be good to have a small README in that repo giving a little summary of what it is and how to run it, and possibly redirecting here for more info.

Member Author:

edcf3ff

Yes, documenting that repo is a big item on my todo list.


# Large Scale Inference

tsinfer scales well and has been successfully used with datasets up to half a
Member:

Can you provide some example citations here? If I wanted to use tsinfer for large datasets, I would probably also want to see the pubs it was used in.
(I assume "has been successfully used" means it's been published)

Member Author:

Publications are in preparation


(sec_large_scale_ancestor_generation)=

## Data preparation
Member:

This section is a little sparse. If I hadn't used VCF zarr format before, I'd be a little intimidated on how to use it. Can you include a short example?

Member Author:

I've added a TODO in abc170f as there will be general tutorial on this as it applies to small scale too.

## Data preparation

For large scale inference the data must be in [VCF Zarr](https://github.com/sgkit-dev/vcf-zarr-spec)
format, read by the {class}`VariantData` class. [bio2zarr](https://github.com/sgkit-dev/bio2zarr)
Member:

I don't see any docs on VariantData

Member Author:

They are a TODO for this release; I think they are in the code, but need autodoc-ing.

which implements this parallelisation scheme is available at https://github.com/benjeffery/tsinfer-snakemake.

(sec_large_scale_ancestor_generation)=

Member:

I think in this intro section, before going into more detail, it could be helpful to give an overview of what is "hard" about large scale inference. What counts as "large scale", and if you don't follow these suggestions, where might you see your inference be slow, get stuck, or run out of memory? And then give a brief overview of how to get around these problems.

Member Author:

Added more context in c6b5e1e

the genotype array by a factor of 8, for a surprisingly small increase in
runtime. With this encoding, the RAM needed is roughly
`num_sites * num_samples * ploidy / 8 bytes.` However this encoding
only supports biallelic sites, with no missingness.
Member:

Can you give a quick example? Something like - if you are generating ancestors for --- sites, --- samples, --- ploidy, and running on a 32G machine, using 16 threads with genotype_encoding = tsinfer.GenotypeEncoding.ONE_BIT, we can expect the genotype array to fit in memory. But we would expect ---- sites to hit a memory error.

Member Author:

The formula is exact, so people should be able to work it out?
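For readers who do want the arithmetic worked through, the formula from the quoted passage is straightforward to evaluate; the dataset sizes below are an illustrative example, not taken from the tsinfer docs:

```python
def one_bit_genotype_ram_bytes(num_sites, num_samples, ploidy):
    # RAM needed for the genotype array under GenotypeEncoding.ONE_BIT,
    # per the formula quoted above: num_sites * num_samples * ploidy / 8 bytes.
    return num_sites * num_samples * ploidy // 8

# e.g. 1 million sites and 500,000 diploid samples:
# 1_000_000 * 500_000 * 2 / 8 = 125_000_000_000 bytes, i.e. about 125 GB.
ram = one_bit_genotype_ram_bytes(1_000_000, 500_000, 2)
```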

groups, depending on job set up and queueing time on your cluster.

Calls to {meth}`match_ancestors_batch_group_partition` will only use a single core, but
{meth}`match_ancestors_batch_groups` will use as many cores as `num_threads` is set to
Member:

Suggested change:

```diff
-{meth}`match_ancestors_batch_groups` will use as many cores as `num_threads` is set to
+{meth}`match_ancestors_batch_groups` will use as many cores as `num_threads` is set to.
```


`num_sites * num_samples * ploidy / 8 bytes.` However this encoding
only supports biallelic sites, with no missingness.

## Ancestor matching
Member:

I read this section through once and don't feel like I fully understood enough to start implementing. I understood that ancestor matching can be slow and can be parallelized by using partitions. After reading once through, my main question was: is this parallelization on a single multicore node, or can it be done on single cores across multiple machines, and what is the maximum parallelization I can get out of it?

It's possible that, with more careful multiple readings of this section, I'd figure out the answers to those questions. But the goal is probably for the reader (with my background) to be able to read it once through carefully and understand.

Member Author:

I've added a bit extra in d6725ce
It's a complicated scheme, unfortunately.

```

There are five tsinfer API methods that can be used to parallelise ancestor
matching.
Member:

List those five tsinfer API methods here.


## Sample matching

Sample matching is far simpler than ancestor matching as it is essentially the same as a single group
of ancestors. There are three API methods that work together to enable distributed sample matching.
Member:

List here what those three methods are.


@benjeffery (Member, Author):

Thanks for all the feedback @agladstein! Hopefully I have addressed it.

Successfully merging this pull request may close these issues:

- Add "Inferring large datasets" documentation
4 participants