- A Simple Example
- Validation and Cross Validation
- Another Example
- Model Lists
- Deployment
- Running Just One Model
- Metrics
- Ensembles
- Installation
- Caveats
- Adding Regressors
- Simulation Forecasting
- Models
```python
# also: _hourly, _daily, _weekly, or _yearly
from autots.datasets import load_monthly
df_long = load_monthly(long=True)

from autots import AutoTS

model = AutoTS(
    forecast_length=3,
    frequency='infer',
    ensemble='simple',
    max_generations=5,
    num_validations=2,
)
model = model.fit(df_long, date_col='datetime', value_col='value', id_col='series_id')

# Print the name of the best model
print(model)
```
There are two shapes/styles of `pandas.DataFrame` which are accepted. The first is long data, like that out of an aggregated sales-transaction table, containing three columns identified to `.fit()` as `date_col` {pd.Datetime}, `value_col` {the numeric or categorical data of interest}, and `id_col` {id string, if multiple series are provided}.
Alternatively, the data may be in a wide format where the index is a `pandas.DatetimeIndex` and each column is a distinct data series. If horizontal style ensembles are used, series_ids/column names will be coerced to strings.
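As a minimal sketch of the two accepted shapes (the column names here match the `.fit()` arguments used above; the values are only illustrative):

```python
import pandas as pd

# long format: one row per (series, date) observation
df_long = pd.DataFrame({
    'datetime': pd.to_datetime(['2023-01-31', '2023-02-28'] * 2),
    'series_id': ['series_a', 'series_a', 'series_b', 'series_b'],
    'value': [10.0, 12.0, 5.0, 7.0],
})

# wide format: DatetimeIndex as the index, one column per series
df_wide = df_long.pivot(index='datetime', columns='series_id', values='value')
```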
The simplest way to improve accuracy is to increase the number of generations: `max_generations=15`. Each generation tries new models, taking additional time but improving accuracy. The nature of genetic algorithms, however, means there is no consistent improvement with each generation, and a large number of generations will often yield only minimal performance gains.
Another approach that may improve accuracy is to set `ensemble='all'`. The ensemble parameter expects a single string and can, for example, be `'simple,dist'` or `'horizontal'`. As this means storing more details of every model, it takes more time and memory.
A handy parameter for when your data is expected to always be 0 or greater (such as unit sales) is `no_negatives=True`. This forces forecasts to be greater than or equal to 0.
A similar function is `constraint=2.0`. This prevents the forecast from leaving bounds set by the training data. In this example, the forecasts would not be allowed to go above `max(training data) + 2.0 * st.dev(training data)`, and likewise on the minimum side. A constraint of `0` would limit forecasts to the historical minimums and maximums.
Another convenience function is `drop_most_recent=1`, specifying the number of most recent periods to drop. This can be handy with monthly data, where the most recent month is often incomplete. `drop_data_older_than_periods` provides similar functionality but drops the oldest data, to speed up the process on large datasets.
`remove_leading_zeroes=True` is useful for data where leading zeroes represent a process which has not yet started.
When working with many time series, it can be helpful to take advantage of `subset=100`. Subset specifies the integer number of time series to test models on, and can be useful with many related time series (thousands of customers' sales, for example). Usually the best model on 100 related time series is very close to the best model tested on many thousands (or more) of series.

Subset takes advantage of weighting: more highly-weighted series are more likely to be selected. Weighting is used with multiple time series to tell the evaluator which series are most important. Series weights are assumed to all be equal to 1; values need only be passed in when a value other than 1 is desired. Note that for weighting, larger weights = more important.
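A sketch pulling these convenience parameters together (all parameter names are from the text above; the values, the `df_wide` frame, and the `'key_account'` series name are only illustrative):

```python
from autots import AutoTS

model = AutoTS(
    forecast_length=3,
    frequency='infer',
    no_negatives=True,            # forecasts clipped at >= 0
    constraint=2.0,               # stay within historic bounds +/- 2 st.dev
    drop_most_recent=1,           # drop the (likely incomplete) latest period
    remove_leading_zeroes=True,
    subset=100,                   # evaluate models on 100 weighted-sampled series
    max_generations=15,
)
# weights: series other than the hypothetical 'key_account' default to 1
model = model.fit(df_wide, weights={'key_account': 10})
```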
Probably the most likely thing to cause trouble is having a lot of NaN/missing data, especially in the most recent available data.
Using appropriate cross validation (`backwards` especially, if NaN is common in older data but not recent data) can help.
Dropping series which are mostly missing, or using `prefill_na=0` (or another value), can also help.
There are some basic things to beware of that can commonly lead to poor results:
- Bad data (sudden drops or missing values) in the most recent data is the single most common cause of bad forecasts here. As many models use the most recent data as a jumping off point, error in the most recent data points can have an oversized effect on forecasts.
- Misrepresentative cross-validation samples. Models are chosen on performance in cross validation. If the validations don't accurately represent the series, a poor model may be chosen. Choose a good method and as many validations as possible.
Cross validation helps assure that the optimal model is stable over the dynamics of a time series. Cross validation can be tricky in time series data due to the necessity of preventing data leakage from future data points.
Firstly, all models are initially validated on the most recent piece of data. This is done because the most recent data will generally most closely resemble the forecast future.
With very small datasets, there may not be enough data for cross validation, in which case `num_validations` may be set to 0. This can also speed up quick tests.
In general, the safest approach is to have as many validations as possible, as long as there is sufficient data for it.
Here are the available methods:
Backwards cross validation is the safest method and works backwards from the most recent data. First the most recent forecast_length samples are taken, then the next most recent forecast_length samples, and so on. This makes it more ideal for smaller or fast-changing datasets.
Even cross validation slices the data into equal chunks. For example, `num_validations=3` would split the data into equal, progressive thirds (less the original validation sample). The final validation results would then include four pieces: the results on the three cross-validation samples as well as the original validation sample.
Seasonal validation is supplied as `'seasonal n'`, i.e. `'seasonal 364'`. This is a variation on `backwards` validation and offers the best performance of all validation methods if an appropriate period is supplied. It trains on the most recent data as usual, then validations are `n` periods back from where the datetime of the forecast would be. For example, with daily data, forecasting one month ahead, and `n=364`, the first test might be on May 2021, with validations on June 2020 and June 2019, and the final forecast then of June 2021.
Similarity automatically finds the data sections most similar to the most recent data that will be used for prediction. This is the best general purpose choice but currently can be sensitive to messy data.
Custom allows validations of any type. If used, `.fit()` needs `validation_indexes` passed: a list of `pd.DatetimeIndex`es, the tail (of length `forecast_length`) of each being used as the test set. The list should be of length `num_validations` + 1.
`backwards`, `even`, and `seasonal` validation all perform initial evaluation on the most recent split of data. `custom` performs initial evaluation on the first index in the list provided, while `similarity` acts on the closest-distance segment first.
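A minimal sketch of choosing a validation method, assuming `.fit()` accepts the `validation_indexes` list as described above; the dates and `df_wide` below are only illustrative:

```python
import pandas as pd
from autots import AutoTS

# seasonal validation, stepping back 364 days for each validation
model = AutoTS(
    forecast_length=28,
    validation_method='seasonal 364',
    num_validations=2,
)

# custom validation: the tail (forecast_length points) of each index is the test set
custom_idx = [
    pd.date_range("2019-01-01", "2021-06-30", freq="D"),
    pd.date_range("2019-01-01", "2020-06-30", freq="D"),
    pd.date_range("2019-01-01", "2019-06-30", freq="D"),
]
model_custom = AutoTS(
    forecast_length=28,
    validation_method='custom',
    num_validations=2,
)
# model_custom = model_custom.fit(df_wide, validation_indexes=custom_idx)
```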
Only a subset of models are taken from initial validation to cross validation. The number of models is set with, for example, `models_to_validate=10`. If a float between 0 and 1 is provided, it is treated as a % of models to select. If you suspect your most recent data is not fairly representative of the whole, it would be a good idea to increase this parameter. However, increasing this value above, say, `0.35` (i.e. 35%) is unlikely to have much benefit, due to the similarity of many model parameters.
While NaN values are handled, model selection will suffer if any series have large numbers of NaN values in any of the generated train/test splits.
Most commonly, this may occur where some series have a very long history, while others in the same dataset only have very recent data.
In these cases, avoid the `even` cross validation and use one of the other validation methods.
Here, we are forecasting the traffic along Interstate 94 between Minneapolis and St Paul in Minnesota. This is a great dataset for demonstrating a recommended way of including external variables: including them as time series with a lower weighting.
Here, weather data is included, winter and road construction being the major influencers of traffic, and will be forecast alongside the traffic volume. These additional series carry information to models such as `RollingRegression`, `VARMAX`, and `VECM`. Also seen in use here is the `model_list`.
```python
from autots import AutoTS
from autots.datasets import load_hourly

df_wide = load_hourly(long=False)

# here we care most about traffic volume, all other series assumed to be weight of 1
weights_hourly = {'traffic_volume': 20}

model_list = [
    'LastValueNaive',
    'GLS',
    'ETS',
    'AverageValueNaive',
]

model = AutoTS(
    forecast_length=49,
    frequency='infer',
    prediction_interval=0.95,
    ensemble=['simple', 'horizontal-min'],
    max_generations=5,
    num_validations=2,
    validation_method='seasonal 168',
    model_list=model_list,
    transformer_list='all',
    models_to_validate=0.2,
    drop_most_recent=1,
    n_jobs='auto',
)

model = model.fit(
    df_wide,
    weights=weights_hourly,
)

prediction = model.predict()
forecasts_df = prediction.forecast
# prediction.long_form_results()

if model.best_model_ensemble == 2:
    model.plot_horizontal()
```
Probabilistic forecasts are available for all models, but in many cases are just data-based estimates in lieu of model estimates.
```python
upper_forecasts_df = prediction.upper_forecast
lower_forecasts_df = prediction.lower_forecast
```
By default, most available models are tried. For a more limited subset of models, a custom list can be passed in, or, more simply, a string: one of `'probabilistic'`, `'multivariate'`, `'fast'`, `'superfast'`, or `'all'`.
A table of all available models is below.
On large multivariate series, `DynamicFactor` and `VARMAX` can be impractically slow.
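A sketch of the two ways to pass `model_list` (the preset strings are those listed above; the explicit model names also appear in the earlier example):

```python
from autots import AutoTS

# a preset keyword
model = AutoTS(forecast_length=3, model_list='superfast')

# or an explicit custom list
model = AutoTS(
    forecast_length=3,
    model_list=['LastValueNaive', 'GLS', 'ETS', 'AverageValueNaive'],
)
```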
Take a look at `production_example.py`. Many models can be reverse engineered with (relative) simplicity outside of AutoTS by placing the chosen parameters into statsmodels or another underlying package.
Following the model training, the top models can be exported to a `.csv` or `.json` file; on the next run, only those models will be tried. This allows for improved fault tolerance (by relying not on one but on several possible models and underlying packages), and some flexibility in switching models as the time series evolve.
One thing to note is that, as AutoTS is still under development, template formats are likely to change and be incompatible with future package versions.
```python
# after fitting an AutoTS model
example_filename = "example_export.csv"  # .csv/.json
model.export_template(example_filename, models='best',
                      n=15, max_per_model_class=3)

# on new training
model = AutoTS(forecast_length=forecast_length,
               frequency='infer', max_generations=0,
               num_validations=0, verbose=0)
model = model.import_template(example_filename, method='only')  # method='add on'
print("Overwrite template is: {}".format(str(model.initial_template)))
```
While the above version of deployment, with evolving templates and cross validation on every run, is the recommended approach, it is also possible to run a single, fixed model.
Coming from the deeper internals of AutoTS, this function can only take the `wide` style data (there is a `long_to_wide` function available).
Data must already be fairly clean - all numerics (or np.nan).
This will run Ensembles.
```python
from autots import load_daily, model_forecast

df = load_daily(long=False)  # long or non-numeric data won't work with this function
df_forecast = model_forecast(
    model_name="AverageValueNaive",
    model_param_dict={'method': 'Mean'},
    model_transform_dict={
        'fillna': 'mean',
        'transformations': {'0': 'DifferencedTransformer'},
        'transformation_params': {'0': {}}
    },
    df_train=df,
    forecast_length=12,
    frequency='infer',
    prediction_interval=0.9,
    no_negatives=False,
    # future_regressor_train=future_regressor_train2d,
    # future_regressor_forecast=future_regressor_forecast2d,
    random_seed=321,
    verbose=0,
    n_jobs="auto",
)
df_forecast.forecast.head(5)
```
The `model.predict()` of the AutoTS class runs the model given by three stored attributes:

- `model.best_model_name`
- `model.best_model_params`
- `model.best_model_transformation_params`

If you overwrite these, it will accordingly change the forecast output.
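A sketch of overriding those attributes before calling `.predict()`; the specific model name and parameter values below are only illustrative (any valid entry from an exported template would serve):

```python
# after model = AutoTS(...).fit(...)
model.best_model_name = "AverageValueNaive"
model.best_model_params = {'method': 'Median'}
model.best_model_transformation_params = {
    'fillna': 'mean',
    'transformations': {'0': 'DifferencedTransformer'},
    'transformation_params': {'0': {}},
}
prediction = model.predict()  # now runs the overridden model
```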
There are a number of available metrics, all combined together into a 'Score' which evaluates the best model. The 'Score' that compares models can easily be adjusted by passing a custom metric weights dictionary. Higher weighting increases the importance of that metric, while 0 removes that metric from consideration. Weights must be numbers greater than or equal to 0. This weighting is not to be confused with series weighting, which affects how equally any one metric is applied to all the series.
```python
metric_weighting = {
    'smape_weighting': 10,
    'mae_weighting': 1,
    'rmse_weighting': 5,
    'made_weighting': 0,
    'containment_weighting': 0,
    'runtime_weighting': 0,
    'spl_weighting': 1,
    'contour_weighting': 1,
}

model = AutoTS(
    forecast_length=forecast_length,
    frequency='infer',
    metric_weighting=metric_weighting,
)
```
It is usually best to use several metrics. Often the best sMAPE model, for example, is only slightly better in sMAPE than the next-best model, but that next-best model has a much better MAE and RMSE.
Horizontal style ensembles use `metric_weighting` for series selection, but only the values passed for `mae`, `rmse`, `contour`, and `spl`. If all of these are 0, mae is used for selection.
`sMAPE` is generally the most versatile metric across multiple series, but doesn't handle forecasts with lots of zeroes well.

`SPL` is Scaled Pinball Loss, or Quantile Loss, and is the optimal metric for assessing upper/lower quantile forecast accuracies.

`Containment` measures the percent of test data that falls between the upper and lower forecasts, and is more human-readable than SPL. Also called `coverage_fraction`.
`Contour` is a unique measure. It is designed to help choose models which, when plotted, visually appear similar to the actuals. As such, it measures the % of points where the forecast and actual both went in the same direction, either both up or both down, but not the magnitude of that difference. Does not work with `forecast_length=1`.

`MADE` is mean absolute differential error. Similar to contour, it measures how similar the forecast's changes are to the timestep changes in the actual. Contour measures direction, while MADE measures magnitude. Does not work with `forecast_length=1`.
The contour metric is useful as it encourages 'wavy' forecasts, ie, not flat line forecasts. Although flat line naive or linear forecasts can sometimes be very good models, they "don't look like they are trying hard enough" to some managers, and using contour favors non-flat forecasts that (to many) look like a more serious model.
Ensemble methods are specified by the `ensemble=` parameter. It can be either a list or a comma-separated string.
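For example, either of these forms works (the ensemble names are those described below):

```python
from autots import AutoTS

# comma-separated string
model = AutoTS(forecast_length=3, ensemble='simple,dist,horizontal-max')
# or equivalently as a list
model = AutoTS(forecast_length=3, ensemble=['simple', 'dist', 'horizontal-max'])
```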
`simple` style ensembles (labeled 'BestN' in templates) are the most recognizable form of ensemble and are the simple average of the specified models, here usually 3 or 5 models.

`distance` style ensembles are two models spliced together: the first model forecasts the first fraction of the forecast period, the second model the latter half. There is no overlap of the models.
Both `simple` and `distance` style models are constructed on the first evaluation set of data, and run through validation along with all other models selected for validation. Both of these can also be recursive in depth, containing ensembles of ensembles. This recursive ensembling can happen when ensembles are imported from a starting template: they work just fine, but may get rather slow, having lots of models.
`horizontal` ensembles are the type of ensemble for which this package was originally created. With this, each series gets its own model. This avoids the 'one size does not fit all' problem when many time series are in a dataset. In the interest of efficiency, univariate models are only run on the series they are needed for. Models not in the `no_shared` list may make horizontal ensembling very slow at scale, as they have to be run for every series, even if they are only used for one.
`horizontal-max` chooses the best model for every series. `horizontal` and `horizontal-min` attempt to reduce the number of slow models chosen while still maintaining as much accuracy as possible.
A feature called `horizontal_generalization` allows the use of `subset` and makes these ensembles fault tolerant. If you see a message `no full models available`, however, that means this generalization may fail. Including at least one of the `superfast` models, or a model not in `no_shared`, usually prevents this.
These ensembles are chosen based on per-series accuracy on `mae`, `rmse`, `contour`, and `spl`, weighted as specified in `metric_weighting`. `horizontal` ensembles can contain recursive depths of `simple` and `distance` style ensembles, but `horizontal` ensembles cannot be nested.
`mosaic` ensembles are an extension of `horizontal` ensembles, but with a specific model chosen for each series and for each forecast period. As this means the maximum number of models can be `number of series * forecast_length`, this can obviously get quite slow. Theoretically, this style of ensembling offers the highest accuracy. However, `mosaic` models only utilize MAE for model selection, and as such upper and lower forecast performance may be poor. They are also more prone to over-fitting, so use this with more validations and more stable data. Unlike `horizontal` ensembles, which only work on multivariate datasets, `mosaic` can be run on a single time series.

One thing you can do with `mosaic` ensembles: if you only care about the accuracy of one forecast point but want to run a forecast for the full forecast length, you can convert the mosaic to a horizontal ensemble for just that forecast period.
```python
import json
from autots.models.ensemble import mosaic_to_horizontal, model_forecast

# assuming model is from AutoTS.fit() with a mosaic as best_model
model_params = mosaic_to_horizontal(model.best_model_params, forecast_period=0)
result = model_forecast(
    model_name="Ensemble",
    model_param_dict=model_params,
    model_transform_dict={},
    df_train=model.df_wide_numeric,
    forecast_length=model.forecast_length,
)
result.forecast
```
```shell
pip install autots
```
Some optional packages require installing Visual Studio C compilers if on Windows.
On Linux systems, apt-get/yum (rather than pip) installs of numpy/pandas may install faster/more stable compilations.
Linux may also require `sudo apt install build-essential` for some packages.
You can check whether your system is using mkl, OpenBLAS, or neither with `numpy.show_config()`. It is generally recommended to double-check this after installing new packages, to make sure you haven't broken the BLAS/LAPACK linkage.
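For example:

```python
import numpy

numpy.show_config()  # look for mkl or openblas entries in the output
```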
Core requirements:

- Python >= 3.6
- numpy >= 1.20 (Sliding Window in Motif and WindowRegression)
- pandas >= 1.1.0 (`prediction.long_form_results()`); note that gluonts is incompatible with pandas 1.1, 1.2, and 1.3
- sklearn >= 0.23.0 (PoissonReg), >= 0.24.0 (OrdinalEncoder handle_unknown), >= 1.0 for models affected by the "mse" -> "squared_error" update; exact minimum unclear for IterativeImputer and HistGradientBoostingRegressor
- statsmodels >= 0.13 (ARDL and UECM)
- scipy (uniform_filter1d, for the mosaic-window ensemble only)

Of these, numpy and pandas are critical. Limited functionality should exist without scikit-learn: sklearn is needed for categorical-to-numeric conversion, some detrends/transformers, horizontal generalization, numerous models, and nan_euclidean distance. Full functionality should be maintained without statsmodels, albeit with fewer available models.
Prophet, Greykite, and mxnet/GluonTS are packages which tend to be finicky about installation on some systems.
```shell
pip install autots['additional']
```

- psutil
- holidays
- prophet
- gluonts (requires mxnet)
- mxnet (mxnet-mkl, mxnet-cu91, mxnet-cu101mkl, etc.)
- tensorflow >= 2.0.0
- lightgbm
- xgboost
- tensorflow-probability
- fredapi
- greykite
Tensorflow, LightGBM, and XGBoost bring powerful models, but they are also among the slowest. If speed is a concern, not installing them will speed up runs of the Regression-style models.
Example setup with venv, Anaconda, or Miniforge:
```shell
# create a conda or venv environment
conda create -n openblas python=3.9
conda activate openblas

python -m pip install numpy scipy scikit-learn statsmodels tensorflow lightgbm xgboost yfinance pytrends fredapi --exists-action i
python -m pip install yfinance pytrends fredapi
python -m pip install numexpr bottleneck
python -m pip install pystan prophet --exists-action i  # the conda-forge option below works more easily; try --no-deps on prophet if this fails
python -m pip install mxnet --exists-action i  # check the mxnet documentation for more install options, also try pip install mxnet --no-deps
python -m pip install gluonts --exists-action i
python -m pip install holidays-ext pmdarima dill greykite --exists-action i --no-deps
python -m pip install --upgrade numpy pandas --exists-action i  # mxnet likes to (pointlessly, it seems) install old versions of numpy
python -m pip install autots --exists-action i
```
https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html
```shell
# create the environment
conda create -n intelpython -c intel python=3.7 intelpython3_full
conda activate intelpython

# install additional packages as desired
python -m pip install yfinance pytrends fredapi bottleneck
python -m pip install mxnet --no-deps
python -m pip install gluonts
conda install -c conda-forge prophet
conda update -c intel intelpython3_full
conda install -c intel numexpr statsmodels lightgbm tensorflow

# MKL_NUM_THREADS, USE_DAAL4PY_SKLEARN=1
```
```python
from autots.evaluator.benchmark import Benchmark

bench = Benchmark()
bench.run(n_jobs="auto", times=3)
bench.results
```
Usually mysterious crashes or hangs (those without clear error messages) occur when the CPU or memory is overloaded.
Try setting `n_jobs=1`, or an otherwise low number, which should reduce the load. Also test the `'superfast'` naive models, which generally have low resource consumption.
GPU-accelerated models (Tensorflow in Regressions and GluonTS) are also more prone to crashes, and may be a source of problems when used.
If problems persist, post to the GitHub Discussion or Issues.
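A low-resource configuration for debugging such crashes might look like this (all parameter names are discussed above; the values are only illustrative):

```python
from autots import AutoTS

model = AutoTS(
    forecast_length=3,
    model_list='superfast',  # naive, low-resource models
    ensemble=None,
    max_generations=1,
    num_validations=0,
    n_jobs=1,                # avoid overloading CPU/memory
    verbose=2,               # more logging, to narrow down where it fails
)
```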
Pretty much as it says, if this isn't true, some odd things may happen that shouldn't.
Also, if using the Prophet model, you can't have any series named `'ds'`.
How much data is 'too little' depends on the seasonality and volatility of the data.
Minimal training data most greatly impacts the ability to do proper cross validation. Set `num_validations=0` in such cases. Since ensembles are based on the test dataset, it would also be wise to set `ensemble=None` if `num_validations=0`.
The regressor is called `future_regressor`, to make it clear this is data that will be known with high certainty about the future.
Such data about the future is rare; one example might be the number of stores that will be (planned to be) open on each given day in the future when forecasting sales.
Only a handful of models support adding regressors, and not all handle multiple regressors.
The recommended way to provide regressors is as a pd.Series/pd.Dataframe with a DatetimeIndex.
Don't know the future? Don't worry, the models can handle quite a lot of parallel time series, which is another way to add information. Additional regressors can be passed through as additional time series to forecast as part of df_long. Some models here can utilize the additional information these series provide to help improve forecast quality. To prevent forecast accuracy from considering these additional series too heavily, input series weights that lower or remove their forecast accuracy from consideration.
An example with regressors:
```python
from autots.datasets import load_monthly
from autots.evaluator.auto_ts import fake_regressor
from autots import AutoTS

long = False
df = load_monthly(long=long)
forecast_length = 14

model = AutoTS(
    forecast_length=forecast_length,
    frequency='infer',
    validation_method="backwards",
    max_generations=2,
)

future_regressor_train2d, future_regressor_forecast2d = fake_regressor(
    df,
    dimensions=4,
    forecast_length=forecast_length,
    date_col='datetime' if long else None,
    value_col='value' if long else None,
    id_col='series_id' if long else None,
    drop_most_recent=model.drop_most_recent,
    aggfunc=model.aggfunc,
    verbose=model.verbose,
)

model = model.fit(
    df,
    future_regressor=future_regressor_train2d,
    date_col='datetime' if long else None,
    value_col='value' if long else None,
    id_col='series_id' if long else None,
)

prediction = model.predict(future_regressor=future_regressor_forecast2d, verbose=0)
forecasts_df = prediction.forecast

print(model)
```
For models in this lower-level API, somewhat confusingly, `regression_type="User"` must be specified in the model parameters as well as passing `future_regressor`. Why? This allows the model search to easily try both with and without the regressor, because sometimes the regressor may do more harm than good.
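A sketch of how this might look with `model_forecast`; a hedged example: `DatepartRegression` supports `regression_type`, but its other parameters are left at their defaults here, which is not guaranteed to match a tuned template.

```python
from autots import load_daily, model_forecast
from autots.evaluator.auto_ts import fake_regressor

df = load_daily(long=False)
forecast_length = 12
# stand-in regressors; in practice these would be known future values (weather, store hours, etc.)
future_regressor_train, future_regressor_forecast = fake_regressor(
    df, dimensions=4, forecast_length=forecast_length
)

df_forecast = model_forecast(
    model_name="DatepartRegression",
    model_param_dict={'regression_type': 'User'},  # other model params left at defaults
    model_transform_dict={
        'fillna': 'mean',
        'transformations': {'0': 'DifferencedTransformer'},
        'transformation_params': {'0': {}},
    },
    df_train=df,
    forecast_length=forecast_length,
    frequency='infer',
    future_regressor_train=future_regressor_train,
    future_regressor_forecast=future_regressor_forecast,
)
df_forecast.forecast.head(5)
```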
Simulation forecasting allows for experimenting with different potential future scenarios to examine the potential effects on the forecast.
This is done here by passing known values of a `future_regressor` to the model's `.fit`, then running `.predict` with multiple variations on the `future_regressor` future values. By default in AutoTS, when a `future_regressor` is supplied, models that can utilize it are tried both with and without the regressor. To enforce the use of the future_regressor for simulation forecasting, a few parameters must be supplied, as below: `model_list`, `models_mode`, and `initial_template`.
```python
from autots.datasets import load_monthly
from autots.evaluator.auto_ts import fake_regressor
from autots import AutoTS

df = load_monthly(long=False)
forecast_length = 14

model = AutoTS(
    forecast_length=forecast_length,
    max_generations=2,
    model_list="regressor",
    models_mode="regressor",
    initial_template="random",
)

# here these are random numbers, but in the real world they could be values like weather or store holiday hours
future_regressor_train, future_regressor_forecast = fake_regressor(
    df,
    dimensions=2,
    forecast_length=forecast_length,
    drop_most_recent=model.drop_most_recent,
    aggfunc=model.aggfunc,
    verbose=model.verbose,
)
# another simulation of the regressor
future_regressor_forecast_2 = future_regressor_forecast + 10

model = model.fit(
    df,
    future_regressor=future_regressor_train,
)

# first with one version
prediction = model.predict(future_regressor=future_regressor_forecast, verbose=0)
forecasts_df = prediction.forecast
# then with another
prediction_2 = model.predict(future_regressor=future_regressor_forecast_2, verbose=0)
forecasts_df_2 = prediction_2.forecast

print(model)
```
Note, this does not necessarily force the model to place any great value on the supplied features. It may be necessary to rerun multiple times until a model with a satisfactory variable response is found, or to try with a subset of the regressor model list like `['FBProphet', 'GLM', 'ARDL', 'DatepartRegression']`.
There are a lot of parameters available here, but not all of the options available for a particular parameter are actually used in the generated templates. Usually, very slow options are left out. If you are familiar with a model, you can try manually adding those parameter values in for a run in this way. To clarify, you can't usually add entirely new parameters in this way, but you can often pass in new choices for existing parameter values.
- Run AutoTS with your desired model and export a template.
- Open the template in a text editor or Excel and manually change the param values to what you want (see the sketch after this list).
- Run AutoTS again, this time importing the template before running .fit().
- There is no guarantee it will choose the model with the given params; choices are made based on validation accuracy, but it will at least run it, and if it does well, it will be incorporated into new models in that run (that's how the genetic algorithms work).
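A sketch of the editing step, assuming the exported template contains the usual `Model`, `ModelParameters`, and `TransformationParameters` columns; the parameter change shown is purely illustrative:

```python
import json
import pandas as pd

template = pd.read_csv("example_export.csv")

# tweak the parameters of the first ETS row, for example
row = template.index[template['Model'] == 'ETS'][0]
params = json.loads(template.loc[row, 'ModelParameters'])
params['damped_trend'] = True  # illustrative change; use any valid parameter value
template.loc[row, 'ModelParameters'] = json.dumps(params)

template.to_csv("example_export_edited.csv", index=False)
# then: model = model.import_template("example_export_edited.csv", method='add on')
```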
Categorical data is handled, but it is handled crudely. For example, optimization metrics do not currently include any categorical accuracy metrics. For categorical data that has a meaningful order (ie 'low', 'medium', 'high') it is best for the user to encode that data before passing it in, thus properly capturing the relative sequence (ie 'low'=1, 'medium'=2, 'high'=3).
Data must be coercible to a regular frequency. It is recommended the frequency be specified as a datetime offset as per pandas documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#dateoffset-objects Some models will support a more limited range of frequencies.
The transformers expect data only in the `wide` shape with an ascending date index.
The simplest way to access them is through the GeneralTransformer.
This takes dictionaries containing strings of the desired transformers and parameters.
Inverse transforms can get confusing. It can be necessary to inverse_transform the data to get predictions back into the original space. Some inverse_transformers only work on 'original' or 'forecast' data immediately following the training period. The DifferencedTransformer is one example: it can use the last N values of the training data to bring forecast data back into the original space, but will not work for just 'any' future period unconnected to the training data. Some transformers (mostly the smoothing filters like `bkfilter`) cannot be inverted at all, but their transformed values are close to the original values.
```python
from autots.tools.transform import transformer_dict, DifferencedTransformer
from autots import load_monthly

print(f"Available transformers are: {transformer_dict.keys()}")

df = load_monthly(long=False)

# some transformers tolerate NaN, and some don't...
df = df.fillna(0)

trans = DifferencedTransformer()
df_trans = trans.fit_transform(df)
print(df_trans.tail())

# trans_method is not necessary for most transformers
df_inv_return = trans.inverse_transform(df_trans, trans_method="original")  # use trans_method="forecast" for future data
```
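For the GeneralTransformer mentioned above, a minimal sketch, assuming it takes the same dictionary format used by `model_forecast` earlier (`fillna`, `transformations`, `transformation_params`); the transformer names here are examples from `transformer_dict`:

```python
from autots.tools.transform import GeneralTransformer
from autots import load_monthly

df = load_monthly(long=False)

transformer = GeneralTransformer(
    fillna='mean',
    transformations={'0': 'MinMaxScaler', '1': 'DifferencedTransformer'},
    transformation_params={'0': {}, '1': {}},
)
df_trans = transformer.fit_transform(df)      # apply transformers in order
df_back = transformer.inverse_transform(df_trans)  # return to the original space
```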
The Regression models are WindowRegression, RollingRegression, UnivariateRegression, MultivariateRegression, and DatepartRegression. They are all different ways of reshaping the time series into X and Y for traditional ML and Deep Learning approaches. All draw from the same potential pool of models, mostly sklearn and tensorflow models.
- DatepartRegression is where X is simply the date features, and Y is the time series values for that date (see the sketch after this list).
- WindowRegression takes the `n` preceding data points as X to predict the future value or values of the series.
- RollingRegression takes all time series and summarized rolling values of those series in one massive dataframe as X. Works well for a small number of series but scales poorly.
- MultivariateRegression uses the same rolling features as above, but considers them one at a time: features for series `i` are used to predict the next step for series `i`, with a model trained on all data from all series.
- UnivariateRegression is the same as MultivariateRegression but trains an independent model on each series, thus not capable of learning from the patterns of other series. This performs well in horizontal ensembles as it can be pared down to one series with the same performance on that series.
How the upper and lower forecast bounds are created for these models will likely change in the future. Currently, `MultivariateRegression` utilizes a stock GradientBoostingRegressor with quantile loss for probabilistic estimates, while the others utilize point-to-probabilistic estimates.
Model | Dependencies | Optional Dependencies | Probabilistic | Multiprocessing | GPU | Multivariate | Experimental | Use Regressor |
---|---|---|---|---|---|---|---|---|
ZeroesNaive | ||||||||
LastValueNaive | ||||||||
AverageValueNaive | True | |||||||
SeasonalNaive | ||||||||
GLS | statsmodels | True | ||||||
GLM | statsmodels | joblib | True | |||||
ETS - Exponential Smoothing | statsmodels | joblib | ||||||
UnobservedComponents | statsmodels | True | joblib | True | ||||
ARIMA | statsmodels | True | joblib | True | ||||
VARMAX | statsmodels | True | True | |||||
DynamicFactor | statsmodels | True | True | True | ||||
VECM | statsmodels | True | True | |||||
VAR | statsmodels | True | True | True | ||||
Theta | statsmodels | True | joblib | |||||
ARDL | statsmodels | True | joblib | True | ||||
FBProphet | fbprophet | True | joblib | True | ||||
GluonTS | gluonts, mxnet | True | yes | True | True | |||
RollingRegression | sklearn | lightgbm, tensorflow | sklearn | some | True | True | ||
WindowRegression | sklearn | lightgbm, tensorflow | sklearn | some | True | True | ||
DatepartRegression | sklearn | lightgbm, tensorflow | sklearn | some | True | |||
MultivariateRegression | sklearn | lightgbm, tensorflow | True | sklearn | some | True | True | |
UnivariateRegression | sklearn | lightgbm, tensorflow | sklearn | some | True | |||
UnivariateMotif/MultivariateMotif | scipy.distance.cdist | True | joblib | * | ||||
SectionalMotif | scipy.distance.cdist | sklearn | True | True | True | |||
NVAR | True | blas/lapack | True | |||||
Greykite | greykite | True | joblib | True | * | |||
MotifSimulation | sklearn.metrics.pairwise | True | joblib | True* | True | |||
TensorflowSTS | tensorflow_probability | True | yes | True | True | |||
TFPRegression | tensorflow_probability | True | yes | True | True | True | ||
ComponentAnalysis | sklearn | True | True |