-
There is a .get_new_params() to randomly generate new args to try, FYI. I have seen this problem due to overfitting on bad validation samples, or using the wrong metrics in metric weighting. Basically over time, it is finding a better fit to the metric or validation periods, but that doesn't necessarily mean it's a better outcome forecast. Try seeing if a different validation method, more validations (or fewer), or different metric weightings help.
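A minimal sketch of how those levers might be set on an AutoTS instance is below. The validation_method values and metric_weighting keys shown are assumptions based on common AutoTS defaults rather than anything stated in this thread, so check the documentation of your installed version; likewise, consult the docs for where .get_new_params() lives and its exact signature before relying on it.

```python
from autots import AutoTS

# Weight accuracy metrics more heavily than runtime; the exact keys below are
# assumptions based on AutoTS's default metric_weighting dict and may differ
# across versions.
metric_weighting = {
    'smape_weighting': 5,
    'mae_weighting': 2,
    'rmse_weighting': 2,
    'containment_weighting': 0,
    'runtime_weighting': 0.05,
}

model = AutoTS(
    forecast_length=30,             # horizon for your use case
    frequency='infer',
    max_generations=15,             # moderate search depth
    num_validations=3,              # try more (or fewer) validation slices
    validation_method='backwards',  # 'even' or 'similarity' are alternatives to try
    metric_weighting=metric_weighting,
)
# model = model.fit(df)      # df: wide DataFrame indexed by datetime
# prediction = model.predict()
```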
-
Glad to hear I am not just going mad :-).
Thank you for the suggestions.
-
I am trying to home in on the AutoTS parameters that give the most reliable forecasting performance, since I have some code that iterates through hundreds of datasets to generate forecasts. In the first version, I set num_generations to 5 (other AutoTS parameters were default) to speed up the process, and about 95% of the forecasts were realistic / good. Unfortunately, I had to manually check through the forecasts to identify the ones that went haywire (clearly unrealistic forecasts) and rerun those.
To avoid having to manually check and rerun some forecasts, I increased num_generations to 50 or 100, hoping this would cause things to consistently converge on a realistic solution. However, the opposite seems to be happening: with very high num_generations, the frequency of unrealistic solutions increases. Does this make logical sense to anyone? And does anyone have any pointers for increasing the reliability of forecasts?
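For reference, here is a hedged sketch of the kind of batch loop described above, assuming the num_generations being tuned corresponds to AutoTS's max_generations argument, plus an automated plausibility check so clearly unrealistic forecasts can be flagged rather than reviewed by hand. dataset_names, load_dataset, and the range-based threshold are illustrative placeholders, not AutoTS features.

```python
from autots import AutoTS

def forecast_is_plausible(history, forecast, tolerance=5.0):
    """Flag forecasts that stray far outside the historical range.
    The 5x-range threshold is an arbitrary illustrative choice."""
    lo, hi = float(history.min().min()), float(history.max().max())
    span = max(hi - lo, 1e-9)
    return (forecast.min().min() > lo - tolerance * span
            and forecast.max().max() < hi + tolerance * span)

for name in dataset_names:             # dataset_names: your own list of IDs
    df = load_dataset(name)            # load_dataset: your own loader (wide DataFrame)
    model = AutoTS(forecast_length=30, frequency='infer', max_generations=15)
    model = model.fit(df)
    forecast = model.predict().forecast   # point-forecast DataFrame from AutoTS
    if not forecast_is_plausible(df, forecast):
        print(f"{name}: forecast looks unrealistic, consider rerunning")
```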