Pipeline setup step 2: Fill forecast.py #93
Conversation
Similar to my review for #92, I think we should get this to pass pre-commit and root out errors before merging.
If `split_train_test` breaks when given a single polars data frame, let's fix it. It looks like preprocess.py intends to always return a single data frame, not a list. This can easily remain true even after preprocess.py knows how to process multiple data sets, because it can just concatenate them into one before returning it. So we can just edit `split_train_test` to expect only a single data frame.
Likewise, if NaNs in the data cause problems, let's find out where they come from and fix them if possible. I don't recall missing values in the raw data - I wonder if there are further issues lurking in our code that cause the NaNs.
Normally, such things could be separate issues for separate PRs, but if they prevent us from testing the content of this PR properly, then I think they should be solved here.
Please also see my separate comment on the way to increment the forecast date.
scripts/forecast.py
Outdated
cumulative_projections.with_columns(estimate_type=pl.lit("cumulative")),
incident_projections.with_columns(estimate_type=pl.lit("incident")),
]
)
Same question I had on #92 - why combine cumulative and incident uptake into a single data frame? We could just return the cumulative data and derive the incident data in a future step if needed.
Fixed in 862faf8
scripts/forecast.py
Outdated
clean_data,
grouping_factors=config["groups"],
forecast_start=config["timeframe"]["start"],
forecast_end=forecast_date,
Shouldn't it be `forecast_start` that is being incremented? As written here, with `forecast_end` being incremented, we are asking questions like:
- suppose we have data through Nov 1, 2024 - how well can we estimate uptake through May 1, 2025?
- suppose we have data through Nov 1, 2024 - how well can we estimate uptake through May 8, 2025?
- suppose we have data through Nov 1, 2024 - how well can we estimate uptake through May 15, 2025?
- and so on...
These questions do not require fitting the model multiple times in a loop! Just fit it once for the latest `forecast_end` date (e.g. May 29, 2025) and look up the projections "along the way" for all the intermediate dates (May 22, May 15, May 8, May 1, etc.).
But instead, I think we had intended to increment `forecast_start` to ask questions like:
- suppose we have data through Nov 1, 2024 - how well can we estimate uptake through May 29, 2025?
- suppose we have data through Nov 8, 2024 - how well can we estimate uptake through May 29, 2025?
- suppose we have data through Nov 15, 2024 - how well can we estimate uptake through May 29, 2025?
- and so on...
In this case, because the starting date for forecasting keeps getting pushed later, the training data set keeps expanding, so the model does need to be refit multiple times in a loop.
So the latter is what we should be doing here.
Fixed in 857c080
For now, I would suggest merging the PRs once the questions are well addressed. Once the entire pipeline is merged, we can test it and open PRs if needed!
I added one more comment regarding cumulative vs. incident. I think it will be useful to only pass cumulative data around, converting to incident only at the point where incident is needed (e.g. fitting the linear model, calculating mspe, etc.). But take a look and let me know what you think!
scripts/forecast.py
Outdated
]
)
# Note that here returns incident projections only, for evaluation
return incident_projections
I think we should return cumulative projections here. The reason is that incident data for a partial season loses information. Suppose you are making forecasts from Jan 1 - May 31. Cumulative forecasts contain all the incident information, but incident forecasts "forget" what the total cumulative uptake had been up to Jan 1. So if you convert to incident here, it will be harder to convert back to cumulative later (e.g. for calculating end-of-season absolute error) - you'll have to look up the observed data on Jan 1 again. But if you keep the cumulative version here, it is easy to convert to incident later (e.g. for calculating mspe).
I agree. After all our discussions, my thinking now is that the inputs and outputs from the models should be in cumulative space (although many of the models might internally convert back and forth with incident)
I just looked back at your eval.py code, and I realized that end-of-season absolute error is, more specifically, the error in cumulative uptake since the forecast date. There is no error before the forecast date - that data is known. So your approach makes more sense to me now.
I still propose we follow the convention only to pass cumulative data around and only to convert to incident at the points it's required, but I wanted you to know that I resolved some of my own confusion!
Sure, I will use this recursive branching method for incremental changes in the future!
Looks good to me.
This mismatch might be causing some of the problems with line breaks that differ between GitHub and local VSCode.
This PR fixes some errors when calling `make all`. The changes are only meant to make the code run successfully; they do not guarantee that no errors are introduced in the output.

---------

Co-authored-by: Scott Olesen <[email protected]>
- Remove the concept of datasets
- Only return the cumulative uptake data
- Rename "rollout" to "rollouts" everywhere, to emphasize that you can insert multiple dates
This PR adds the forecast function and runs the forecast sequentially over the time frame given in config.yaml. The results are saved as parquet data in the designated path. This function hasn't been tested, because of several errors:
- The error "type Float64 is incompatible with expected type String" is returned in `split_train_test`. This happens when the uptake data is a single polars DataFrame (instead of a list of uptake data).
- NaN values in the training data prevent the fitting of `LinearIncidenceUptakeModel`; checks or methods (impute or delete NA) are needed to ensure fitting can be run.