feat: add interval logic for l2g features #812
base: dev
Conversation
# feature will be the same for any gene associated with a studyLocus)
local_max.withColumn(
    "regional_maximum",
    f.max(local_feature_name).over(Window.partitionBy("studyLocusId")),
Why is it the maximum? According to the table and what we discussed, it should be the mean?
https://docs.google.com/spreadsheets/d/1wUs1AprRCCGItZmgDhc1fF5BtwCSosdzFv4NQ8V6Dtg/edit?gid=452826388#gid=452826388
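For illustration, the two aggregations disagree whenever a credible set maps to genes with different local feature values. A Spark-free sketch of the semantics on toy data (in PySpark, the suggested change would swap `f.max` for `f.mean` over the same `studyLocusId` window):

```python
from collections import defaultdict

# Toy rows: (studyLocusId, geneId, local feature value).
rows = [
    ("sl1", "geneA", 0.2),
    ("sl1", "geneB", 0.8),
    ("sl2", "geneC", 0.5),
]

# Group feature values per credible set (studyLocusId).
by_locus = defaultdict(list)
for locus, _gene, value in rows:
    by_locus[locus].append(value)

# f.max(...).over(Window.partitionBy("studyLocusId")): per-locus maximum.
regional_maximum = {locus: max(vals) for locus, vals in by_locus.items()}
# f.mean(...).over(...): per-locus mean, as the review suggests.
regional_mean = {locus: sum(vals) / len(vals) for locus, vals in by_locus.items()}

print(regional_maximum)  # {'sl1': 0.8, 'sl2': 0.5}
print(regional_mean)     # {'sl1': 0.5, 'sl2': 0.5}
```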
Thank you for the changes Jack!!!
The logic to build the features looks good! Please see my comments, but they are more along the lines of how we process the interval data in the L2G step.
I suggested processing all interval sources to make the process simpler, but since the code is set up to take source names and paths individually and changing it would be messy, it's also fine to leave it as is, as long as the interval_paths parameter is correctly configured.
The implemented changes wouldn't run, because an Intervals dataset is created with a mismatching schema. I would encourage you to:
- Add any features you add to the test_l2g_feature_matrix.py suite, to make sure that the code doesn't crash
- In the same file, add a semantic test for the common logic
- Update the documentation pages
- Pull the dev branch to bring in the changes to the feature matrix step
""" | ||
return Intervals( | ||
_df=( | ||
self.df.alias("interval") |
Given that the intervals are not big datasets (my assumption), is there a possibility to broadcast them before the join?
).alias("vi"),
on=[
    f.col("vi.vi_chromosome") == f.col("interval.chromosome"),
    f.col("vi.position").between(
This is not an equi-join, so the join will be less optimized.
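One common workaround for a range condition like `position BETWEEN start AND end` is to bin positions into fixed-width buckets, so the expensive non-equi condition becomes an equi-join on (chromosome, bucket) followed by a cheap residual range check. A hedged, Spark-free sketch of the idea (bucket width and data are made up):

```python
from collections import defaultdict

BUCKET = 10_000  # hypothetical bin width; tune to the typical interval length

intervals = [  # (chromosome, start, end, geneId)
    ("1", 100, 25_000, "geneA"),
    ("2", 5_000, 9_000, "geneB"),
]
variants = [  # (chromosome, position, variantId)
    ("1", 12_345, "v1"),
    ("2", 9_500, "v2"),
]

# Index each interval under every bucket it overlaps: the join key becomes
# the equi-key (chromosome, bucket) instead of a range predicate.
index = defaultdict(list)
for chrom, start, end, gene in intervals:
    for bucket in range(start // BUCKET, end // BUCKET + 1):
        index[(chrom, bucket)].append((start, end, gene))

# Equi-lookup on (chromosome, bucket), then the cheap residual range check.
matches = []
for chrom, pos, variant in variants:
    for start, end, gene in index.get((chrom, pos // BUCKET), []):
        if start <= pos <= end:
            matches.append((variant, gene))

print(matches)  # [('v1', 'geneA')]
```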
features_input_loader = L2GFeatureInputLoader(
    variant_index=variant_index,
    colocalisation=coloc,
    study_index=studies,
    study_locus=credible_set,
    target_index=target_index,
    intervals=intervals,
You need to make sure that if the intervals are not passed (None), then:
- If the feature_list contains the interval features, they should be dropped, since we cannot compute them
- A warning should be logged that the interval features cannot be computed
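A minimal sketch of that guard, assuming hypothetical feature names and a plain-Python feature list (the real L2GFeatureInputLoader and feature registry may differ):

```python
import logging

logger = logging.getLogger("l2g")

# Hypothetical names: the L2G features that depend on interval data.
INTERVAL_FEATURES = {"intervalMaximum", "intervalMean"}

def prune_feature_list(feature_list, intervals):
    """Drop interval-based features when no Intervals dataset is provided."""
    if intervals is not None:
        return feature_list
    dropped = [f for f in feature_list if f in INTERVAL_FEATURES]
    if dropped:
        # Warn, so the silent feature drop is visible in the step logs.
        logger.warning(
            "Interval features %s cannot be computed without interval data; "
            "dropping them.",
            dropped,
        )
    return [f for f in feature_list if f not in INTERVAL_FEATURES]

print(prune_feature_list(["distanceTss", "intervalMean"], intervals=None))
# ['distanceTss']
```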
if target_index_path
else None
)
self.intervals = (
Similarly here: if the intervals or the target index are not provided, all features depending on these datasets should be filtered out before running predictions or training.
f.col("variantInLocus.posteriorProbability").alias("posteriorProbability"), | ||
) | ||
# Filter for PP > 0.001 | ||
.filter(f.col("posteriorProbability") > 0.001) |
Make this value optional with default 0.001 in the function signature.
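A sketch of the suggested signature change, in plain Python for brevity; in the PySpark code the keyword argument would feed `.filter(f.col("posteriorProbability") > pp_threshold)`. The function name and row shape are made up:

```python
def filter_by_posterior_probability(rows, pp_threshold: float = 0.001):
    """Keep variants in the locus whose posterior probability exceeds the
    threshold; the 0.001 default matches the previously hard-coded value."""
    return [r for r in rows if r["posteriorProbability"] > pp_threshold]

rows = [
    {"variantId": "v1", "posteriorProbability": 0.5},
    {"variantId": "v2", "posteriorProbability": 0.0005},
]
print(filter_by_posterior_probability(rows))                    # keeps v1 only
print(filter_by_posterior_probability(rows, pp_threshold=0.0))  # keeps both
```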
@@ -216,6 +216,17 @@ class LDBasedClumpingConfig(StepConfig):
    _target_: str = "gentropy.ld_based_clumping.LDBasedClumpingStep"


@dataclass
You have to register this step to be able to use it in the command line interface. See the gentropy.config.Config.register_config method
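As a sketch of what such a step config can look like: a hedged, stdlib-only mock-up following the `_target_` dataclass pattern visible in the diff above. `IntervalStepConfig`, `interval_paths`, and the `_target_` path are hypothetical; the real base class and the `Config.register_config` call live in the gentropy codebase.

```python
from dataclasses import dataclass, field

@dataclass
class StepConfig:
    """Stand-in for gentropy's StepConfig base class (assumption)."""
    session: dict = field(default_factory=dict)

@dataclass
class IntervalStepConfig(StepConfig):
    """Hypothetical config for the new interval step, mirroring the
    `_target_` pattern of LDBasedClumpingConfig in the diff above."""
    interval_paths: dict = field(default_factory=dict)
    _target_: str = "gentropy.interval.IntervalStep"  # hypothetical path

# In gentropy, the dataclass must additionally be registered (see
# gentropy.config.Config.register_config) so the CLI can resolve it;
# that call is omitted here since its exact signature is in the codebase.
cfg = IntervalStepConfig()
print(cfg._target_)
```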
@@ -18,10 +18,16 @@
    "nullable": false,
    "type": "string"
},
{
The variants and genes could be kept in a nested structure, to save storage. Similar to the pseudocode below:
ArrayType(
    StructType(
        [
            StructField("geneId", StringType()),
            StructField("variantId", ArrayType(StringType())),
        ]
    )
)
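To illustrate the storage argument: nesting turns one row per (geneId, variantId) pair into one row per gene holding an array of variant IDs, so each geneId is stored once. A plain-Python sketch with made-up data:

```python
from collections import defaultdict

# Flat representation: one row per (geneId, variantId) pair.
flat = [
    ("geneA", "v1"),
    ("geneA", "v2"),
    ("geneA", "v3"),
    ("geneB", "v1"),
]

# Nested representation matching the proposed schema:
# array<struct<geneId: string, variantId: array<string>>>
nested = defaultdict(list)
for gene, variant in flat:
    nested[gene].append(variant)
nested_rows = [{"geneId": g, "variantId": vs} for g, vs in nested.items()]

print(len(flat), "->", len(nested_rows))  # 4 -> 2 rows; geneId stored once per gene
```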
@xyg123 there are still some comments to be addressed before we can try to set this up for production.
Moreover, I want to explore the performance of the non-equi join on the intervals x variantIndex, since it could be a bottleneck for the process. There are a few enhancements that could be made depending on the size of the intervals themselves:
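One such enhancement, if the intervals are small enough to broadcast: ship a per-chromosome sorted copy to every executor and resolve each variant with a bounded binary-search lookup, instead of a shuffle-heavy non-equi join. A hedged sketch under the assumption that interval lengths are bounded (`MAX_LEN` is made up; a single chromosome for brevity):

```python
import bisect

# Assumption: intervals are short relative to the genome, so after sorting
# by start we only need to scan back a bounded window (max interval length).
MAX_LEN = 50_000  # hypothetical upper bound on interval length

intervals = sorted([  # (start, end, geneId), one chromosome for brevity
    (100, 2_000, "geneA"),
    (1_500, 40_000, "geneB"),
    (90_000, 95_000, "geneC"),
])
starts = [s for s, _, _ in intervals]

def overlapping_genes(pos):
    """Genes whose interval contains pos: a map-side lookup against the
    'broadcast' sorted interval list, rather than a distributed range join."""
    hits = []
    i = bisect.bisect_right(starts, pos) - 1  # rightmost interval with start <= pos
    while i >= 0 and starts[i] > pos - MAX_LEN:
        start, end, gene = intervals[i]
        if start <= pos <= end:
            hits.append(gene)
        i -= 1
    return sorted(hits)

print(overlapping_genes(1_800))   # ['geneA', 'geneB']
print(overlapping_genes(50_000))  # []
```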
✨ Context
Adding interval based features to the l2g model, based on the feature list (opentargets/issues#3521).
opentargets/issues#3512
🛠 What does this PR implement
🙈 Missing
More features from Andersson + Thurman.
🚦 Before submitting
- Is this PR opened against the dev branch?
- Did you run the tests (make test)?
- Did you run pre-commit (poetry run pre-commit run --all-files)?