#!/usr/bin/env python
# coding: utf-8

# # Overview
#
# In the 10x series of notebooks, we will look at Time Series modeling in pycaret using univariate data and no exogenous variables. We will use the famous airline dataset for illustration. Our plan of action is as follows:
#
# 1. Perform EDA on the dataset to extract valuable insights about the process generating the time series. **(Covered in this notebook)**
# 2. Model the dataset based on the exploratory analysis (univariate model without exogenous variables).
# 3. Use an automated approach (AutoML) to improve the performance.

# In[1]:

# Only enable critical logging (Optional)
import os
os.environ["PYCARET_CUSTOM_LOGGING_LEVEL"] = "CRITICAL"

# In[2]:

def what_is_installed():
    from pycaret import show_versions
    show_versions()

try:
    what_is_installed()
except ModuleNotFoundError:
    get_ipython().system('pip install pycaret')
    what_is_installed()

# In[3]:

import time
import numpy as np
import pandas as pd

from pycaret.datasets import get_data
from pycaret.time_series import TSForecastingExperiment

# In[4]:

y = get_data('airline', verbose=False)

# In[5]:

# We want to forecast the next 12 months of data and we will use 3-fold cross-validation to test the models.
fh = 12  # or alternatively fh = np.arange(1, 13)
fold = 3

# In[6]:

# Global Figure Settings for notebook ----
# Depending on whether you are using jupyter notebook, jupyter lab, or Google Colab, you may have to set the renderer appropriately.
# NOTE: Setting to a static renderer here so that the saved notebook size is reduced.
fig_kwargs = {
    # "renderer": "notebook",
    "renderer": "png",
    "width": 1000,
    "height": 600,
}

# # Exploratory Analysis
#
# The `pycaret` Time Series Forecasting module provides a convenient interface for performing exploratory analysis using `plot_model`.
#
# **NOTE:**
# * Without an estimator argument, `plot_model` will plot using the original dataset. We will cover this in the current notebook.
# * If an estimator (model) is passed to `plot_model`, the plots are made using the model data (e.g. future forecasts, or analysis on in-sample residuals). We will cover this in a subsequent notebook.
#
# Let's see how this works next.
#
# **First, we will plot the original dataset.**

# In[7]:

eda = TSForecastingExperiment()
eda.setup(data=y, fh=fh, fig_kwargs=fig_kwargs)

# In[8]:

# NOTE: This is the same as `eda.plot_model(plot="ts")`
eda.plot_model()

# Before we explore the data further, there are a few minor things to know about how PyCaret prepares a **modeling pipeline** under the hood. The data being modeled is fed through an internal pipeline with optional steps in the following order:
#
# **Data Input (by user) >> Imputation >> Transformation & Scaling >> Model**
#
# 1. **Imputation**
#    - This step is optional if the data does not have missing values, and is mandatory if it does. This is because many statistical tests and models cannot work with missing data.
#    - Although some models like `Prophet` can work with missing data, the need to run statistical tests to extract useful information from the data for default model settings necessitates imputation when data has missing values.
#
# 2. **Transformations and Scaling**
#    - This step is optional, and users should usually only enable it after evaluating the models (e.g. by performing residual analysis), or if they have specific requirements such as limiting the forecast to only positive values.
#
# We will discuss imputation and transformations in more detail in another notebook; a brief sketch of how these steps can be enabled at setup time is shown below.
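# These pipeline steps are controlled through arguments to `setup`. The cell below is a
# minimal illustrative sketch (not needed for the airline data) that enables imputation
# and a target transformation in a separate experiment. The parameter names
# (`numeric_imputation_target`, `transform_target`) and the option strings reflect
# pycaret 3.x - verify them against the documentation for your installed version.

# In[ ]:

# NOTE: Illustration only - our `eda` experiment above leaves these steps disabled.
demo = TSForecastingExperiment()
demo.setup(
    data=y,
    fh=fh,
    numeric_imputation_target="ffill",  # forward-fill any missing target values (assumed option name)
    transform_target="box-cox",         # variance-stabilizing transform applied before modeling
    fig_kwargs=fig_kwargs,
)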
# For now, our data does not have any missing values, and we have not enabled any transformations. So **Data Input (by user), i.e. Original data = Imputed data = Transformed data = Data fed to models**. We can verify this by plotting the internal datasets, specifying `plot_data_type` in `data_kwargs`.
#
# NOTE: If `plot_data_type` is not provided, each plot type has its own default data type that is determined internally, but users can always override it using `plot_data_type`.

# In[9]:

eda.plot_model(data_kwargs={"plot_data_type": ["original", "imputed", "transformed"]})

# **Let's explore the standard ACF and PACF plots for the dataset next.**

# In[10]:

# ACF and PACF for the original dataset
eda.plot_model(plot="acf")

# In[11]:

# NOTE: you can customize the plots with kwargs - e.g. number of lags, figure size (width, height), etc.
# data_kwargs such as `nlags` are passed to the underlying function that gets the ACF values.
# figure kwargs such as `fig_size` & `template` are passed to plotly and can have any value that plotly accepts.
eda.plot_model(plot="pacf", data_kwargs={'nlags': 36}, fig_kwargs={'height': 500, "width": 800})

# **Users may also wish to explore the periodogram or the FFT, which are very useful for studying the frequency components in the time series.**
#
# For example:
# - Peaking at f =~ 0 can indicate wandering behavior characteristic of a random walk that needs to be differenced. It could also be indicative of a stationary ARMA process with a high positive phi value.
# - Peaking at a frequency and its multiples is indicative of seasonality. The lowest such frequency is called the fundamental frequency, and the inverse of this frequency is the seasonal period for the model.

# In[12]:

eda.plot_model(plot="periodogram")
eda.plot_model(plot="fft")

# **In the plots above, we notice:**
#
# 1. Peaking at f =~ 0, indicating that we need to difference the data.
# 2. Peaking at f = 0.0833, 0.1667, 0.25, 0.3333, 0.4167. All of these are multiples of 0.0833. Hence 0.0833 is the fundamental frequency, and the seasonal period is 1/0.0833 = 12. This check can also be done numerically, as shown below.
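# The cell below is a minimal sketch using numpy directly (not part of the pycaret API).
# We difference the series first so the large peak at f =~ 0 does not mask the seasonal
# peaks, then list the strongest frequencies in the spectrum.

# In[ ]:

y_diff = y.diff().dropna().to_numpy()        # remove the wandering (trend) component
spectrum = np.abs(np.fft.rfft(y_diff)) ** 2  # raw periodogram
freqs = np.fft.rfftfreq(len(y_diff), d=1)    # frequencies in cycles per observation (month)

# Indices of the 3 strongest peaks, skipping f = 0
top = np.argsort(spectrum[1:])[::-1][:3] + 1
for idx in top:
    print(f"f = {freqs[idx]:.4f} -> period ~ {1 / freqs[idx]:.1f} months")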
# **Alternatively, the `diagnostics` plot provides all these details in one convenient call.**

# In[13]:

eda.plot_model(plot="diagnostics", fig_kwargs={"height": 800, "width": 1000})

# Our diagnostic plots indicated the need to difference the data and the presence of a seasonal period of 12. **Let's see what happens when we remove these. What other characteristics are left in the data that would need to be taken care of?**
#
# This can be achieved through the difference plots. Along with the difference plots, we will plot the corresponding ACF, PACF, and Periodogram for further diagnostics.

# In[14]:

# Row 1: Original
# Row 2: d = 1
# Row 3: First (d = 1), then (D = 1, s = 12)
#        - Corresponds to applying a standard first difference to handle trend,
#          followed by a seasonal difference (at lag 12) to account for seasonal dependence.
# Ref: https://www.sktime.org/en/stable/api_reference/auto_generated/sktime.transformations.series.difference.Differencer.html
eda.plot_model(
    plot="diff",
    data_kwargs={"lags_list": [[1], [1, 12]], "acf": True, "pacf": True, "periodogram": True},
    fig_kwargs={"height": 800, "width": 1500}
)

# ## NOTE: Another way to specify differences is using `order_list`
# # Row 1: Original
# # Row 2: d = 1
# # Row 3: d = 2
# eda.plot_model(
#     plot="diff",
#     data_kwargs={
#         "order_list": [1, 2],
#         "acf": True, "pacf": True, "periodogram": True
#     },
#     fig_kwargs={"height": 600, "width": 1200}
# )

# **Observations:**
#
# 1. In the second row, we have only removed the wandering behavior by taking a first difference. This can be seen in the ACF plot (extended autocorrelations are gone) and the Periodogram (the peak at f =~ 0 is squished). The ACF (peaking at the seasonal period of 12 and its multiples) and the PACF (peaking at the fundamental frequency of 0.0833 and its multiples) still show the seasonal behavior.
# 2. In the third row, we have taken a first difference followed by a seasonal difference at lag 12. Now, the peaking at seasonal multiples is gone from both the ACF and the Periodogram. There are still some autoregressive properties that we have not taken care of, but looking at the PACF, p=1 seems like a reasonable value to use (most lags after that are insignificant).
#
# **Conclusion**
# * If you were modeling this with ARIMA, the model to try would be **ARIMA(1,1,0)x(0,1,0,12)**.
# * Other models can use this information appropriately. For example, reduced regression models could remove the trend and the seasonality of 12 (i.e. make the data stationary) before modeling the remaining autoregressive properties. Luckily, the `pycaret` time series module takes care of this internally.

# **Let's plot the Time Series Decomposition next (another classical diagnostic plot).**

# In[15]:

# First, classical decomposition
# By default the seasonal period is the one detected during setup - 12 in this case.
eda.plot_model(plot="decomp", fig_kwargs={"height": 500})

# In[16]:

# Users can change the seasonal period to explore what is best for this model.
eda.plot_model(plot="decomp", data_kwargs={'seasonal_period': 24}, fig_kwargs={"height": 500})

# In[17]:

# Users may wish to customize the decomposition. For example, in this case multiplicative
# seasonality probably makes more sense, since the magnitude of the seasonality increases
# as time progresses.
eda.plot_model(plot="decomp", data_kwargs={'type': 'multiplicative'}, fig_kwargs={"height": 500})

# In[18]:

# Users can also plot the STL decomposition
# Reference: https://otexts.com/fpp2/stl.html
eda.plot_model(plot="decomp_stl", fig_kwargs={"height": 500})

# **Let us look at the various splits of the data used for modeling next.**
#
# **NOTE:**
# * In time series, we cannot split the data randomly, since there is serial correlation in the data and using future data to predict past data will result in leakage. Hence, the temporal dependence must be maintained when splitting the data.
# * Users may wish to refer to the following for more details:
#   - https://github.com/pycaret/pycaret/discussions/1761
#   - https://robjhyndman.com/hyndsight/tscv/
#   - https://topepo.github.io/caret/data-splitting.html#data-splitting-for-time-series

# In[19]:

# Show the train-test splits on the dataset
# Internally split - the last len(fh) points are used as the test set, the remaining points as the training set
eda.plot_model(plot="train_test_split", fig_kwargs={"height": 400, "width": 900})

# Show the Cross Validation splits inside the train set
# The blue dots represent the training data for each fold.
# The orange dots represent the validation data for each fold.
eda.plot_model(plot="cv", fig_kwargs={"height": 400, "width": 900})

# # Statistical Tests
#
# Statistical testing is another important part of time series modeling. This can be achieved easily in pycaret using `check_stats`.
#
# **Options are:**
# * 'summary'
# * 'white_noise'
# * 'stationarity'
# * 'adf'
# * 'kpss'
# * 'normality'
# * 'all'
#
# By default the statistics are run on the "transformed" data, but similar to the plots, the user has the ability to set this to another data type using the `data_type` argument, e.g. `eda.check_stats(test="summary", data_type="original")`.

# In[20]:

# Summary Statistics
eda.check_stats(test="summary")

# In[21]:

# Stationarity tests (ADF and KPSS)
# NOTE: Users can also run just a single test by passing either 'adf' or 'kpss' to `check_stats`.
eda.check_stats(test='stationarity')

# The ADF test shows that the data is not stationary, and we saw this in the plots as well (clear trend and seasonal behavior).

# In[22]:

# Ljung-Box test to test for white noise (i.e. whether the data is uncorrelated)
eda.check_stats(test='white_noise')

# The Ljung-Box test indicates that the data is not white noise - again, something that was clearly visible in the data. The same conclusions can be cross-checked by calling statsmodels directly, as shown below.
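# NOTE: The cell below is a minimal sketch using statsmodels directly, not part of the
# pycaret API. It assumes a recent statsmodels (>= 0.13) where `acorr_ljungbox` returns
# a DataFrame. `check_stats` runs equivalent tests internally, so the conclusions
# should agree.

# In[ ]:

from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.tsa.stattools import adfuller, kpss

adf_pvalue = adfuller(y)[1]             # H0: series has a unit root (non-stationary)
kpss_pvalue = kpss(y, nlags="auto")[1]  # H0: series is level-stationary
lb_pvalue = acorr_ljungbox(y, lags=[24])["lb_pvalue"].iloc[0]  # H0: data are white noise

print(f"ADF p-value:       {adf_pvalue:.4f}  (high -> cannot reject unit root -> not stationary)")
print(f"KPSS p-value:      {kpss_pvalue:.4f}  (low -> reject stationarity)")
print(f"Ljung-Box p-value: {lb_pvalue:.4g}  (low -> reject white noise)")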
# In[23]:

# Users have the option to customize the tests, such as changing the alpha value.
eda.check_stats(test='kpss', alpha=0.2)

# Alternatively, all the above tests can be run in one shot by not passing any test type.

# In[24]:

eda.check_stats()

# **That's it for this notebook. In the next notebook, we will see how we can start to model this data.**
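# As an optional preview of the modeling to come, the cell below fits the
# **ARIMA(1,1,0)x(0,1,0,12)** configuration suggested by our EDA. This is a minimal
# sketch using statsmodels directly rather than pycaret; the next notebook builds the
# equivalent model through the pycaret API.

# In[ ]:

from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hold out the last `fh` observations to mimic pycaret's train-test split.
y_train, y_test = y[:-fh], y[-fh:]

# p=1 from the PACF; d=1 and (D=1, s=12) from the difference plots.
model = SARIMAX(y_train, order=(1, 1, 0), seasonal_order=(0, 1, 0, 12))
result = model.fit(disp=False)

preds = result.forecast(steps=fh)
mae = np.mean(np.abs(preds.to_numpy() - y_test.to_numpy()))
print(f"MAE on the last {fh} months: {mae:.2f}")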