Time Series Forecasting Made Simple (Part 1): Decomposition and Baseline Models

I used to avoid time series analysis. Every time I took an online course, I'd see a module titled "Time Series Analysis" with subtopics like Fourier Transforms, autocorrelation functions and other intimidating terms. I don't know why, but I always found a reason to avoid it.

But here's what I've learned: any complex topic becomes manageable when we start from the basics and focus on understanding the intuition behind it. That's exactly what this blog series is about: making time series feel less like a maze and more like a conversation with your data over time.

We understand complex topics much more easily when they're explained through real-world examples. That's exactly how I'll approach this series.

In each post, we'll work with a simple dataset and explore what's needed from a time series perspective. We'll build intuition around each concept, understand why it matters, and implement it step by step on the data.

Time series analysis is the process of understanding, modeling and forecasting data that is observed over time. It involves identifying patterns such as trends, seasonality and noise, and using past observations to make informed predictions about future values.

Let's start by considering a dataset named Daily Minimum Temperatures in Melbourne. This dataset contains daily records of the lowest temperature (in Celsius) observed in Melbourne, Australia, over a 10-year period from 1981 to 1990. Each entry includes just two columns:

Date: The calendar day (from 1981-01-01 to 1990-12-31)
Temp: The minimum temperature recorded on that day

You've probably heard of models like ARIMA, SARIMA or Exponential Smoothing. But before we go there, it's a good idea to try out some simple baseline models first, to see how well a basic approach performs on our data.

While there are many kinds of baseline models used in time series forecasting, here we'll focus on the three most common ones, which are simple, effective, and widely applicable across industries (a quick pandas sketch of each follows the list below).

Naive Forecast: Assumes the next value will be the same as the last observed one.
Seasonal Naive Forecast: Assumes the value will repeat from the same point last season (e.g., last week or last month).
Moving Average: Takes the average of the last n points.
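To make these concrete, here is a minimal pandas sketch of all three baselines. The series y, the 7-day season length and the 3-point window are made-up values purely for illustration:

import pandas as pd

# A tiny made-up daily series, just to illustrate the mechanics
y = pd.Series(
    [10.2, 11.5, 9.8, 12.1, 11.0, 10.7, 9.9, 10.4],
    index=pd.date_range("1981-01-01", periods=8, freq="D"),
)

naive_forecast = y.iloc[-1]                  # Naive: repeat the last observed value
seasonal_naive = y.shift(7)                  # Seasonal Naive: repeat the value from one "season" (here 7 days) ago
moving_average = y.rolling(window=3).mean()  # Moving Average: mean of the last 3 observations

print(naive_forecast, seasonal_naive.iloc[-1], moving_average.iloc[-1])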

You might be wondering: why use baseline models at all? Why not just go straight to well-known forecasting methods like ARIMA or SARIMA?

Let's consider a shop owner who wants to forecast next month's sales. By applying a moving average baseline model, they can estimate next month's sales as the average of the previous few months. This simple approach might already deliver around 80% accuracy, good enough for planning and inventory decisions.

Now, if we switch to a more advanced model like ARIMA or SARIMA, we might improve accuracy to around 85%. But the key question is: is that extra 5% worth the additional time, effort and resources? In this case, the baseline model does the job.

In fact, in most everyday business scenarios, baseline models are sufficient. We typically turn to classical models like ARIMA or SARIMA in high-impact industries such as finance or energy, where even a small improvement in accuracy can have a significant financial or operational impact. Even then, a baseline model is usually applied first, not only to provide quick insights but also to act as a benchmark that more complex models must outperform.

Okay, now that we're ready to implement some baseline models, there's one key thing we need to understand first:
Every time series is made up of three main components: trend, seasonality and residuals.

Time series decomposition separates the data into trend, seasonality and residuals (noise), helping us uncover the true patterns beneath the surface. This understanding guides the choice of forecasting model and improves accuracy. It is also a vital first step before building both simple and advanced forecasting solutions.

Trend
This is the overall direction your data is moving in over time: going up, down or staying flat.
Example: A steady decrease in monthly cigarette sales.

Seasonality
These are patterns that repeat at regular intervals: daily, weekly, monthly or yearly.
Example: Higher cool-drink sales in summer.

Residuals (Noise)
This is the random "leftover" part of the data: the unpredictable ups and downs that can't be explained by trend or seasonality.
Example: A one-time car purchase showing up in your monthly expense pattern.

Now that we understand the key components of a time series, let's put that into practice using a real dataset: Daily Minimum Temperatures in Melbourne, Australia.

We'll use Python to decompose the time series into its trend, seasonality, and residual components so we can better understand its structure and choose an appropriate baseline model.

Code:

import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.seasonal import seasonal_decompose

# Load the dataset
df = pd.read_csv("minimum daily temperatures data.csv")

# Convert 'Date' to datetime and set as index
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
df.set_index('Date', inplace=True)

# Set a regular daily frequency and fill missing values using forward fill
df = df.asfreq('D')
df['Temp'] = df['Temp'].ffill()

# Decompose the daily series (365-day seasonality for yearly patterns)
decomposition = seasonal_decompose(df['Temp'], model='additive', period=365)

# Plot the decomposed components
decomposition.plot()
plt.suptitle('Decomposition of Daily Minimum Temperatures', fontsize=14)
plt.tight_layout()
plt.show()

Output:

Decomposition of daily temperatures showing trend, seasonal cycles and random fluctuations.

The decomposition plot clearly shows a strong seasonal pattern that repeats each year, along with a subtle trend that shifts over time. The residual component captures the random noise that isn't explained by trend or seasonality.

In the code above, you might have noticed that I used an additive model for decomposing the time series. But what exactly does that mean, and why is it the right choice for this dataset?

Let's break it down.
In an additive model, we assume trend, seasonality and residuals (noise) combine linearly, like this:
Y = T + S + R

Where:
Y is the actual value at time t
T is the trend component
S is the seasonal component
R is the residual (random noise)

This means we treat the observed value as the sum of the components, with each component contributing independently to the final output.
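To make that concrete, here is a small sanity check you can run right after the decomposition code above (it assumes the df and decomposition objects from that snippet are still in memory). Adding the extracted trend, seasonal and residual components back together should reproduce the observed values, except at the edges where the trend is undefined:

# Reconstruct the observed series from the additive components
reconstructed = decomposition.trend + decomposition.seasonal + decomposition.resid

# The trend (a centred moving average) is NaN near the start and end,
# so compare only where all three components are defined
comparison = pd.DataFrame({'observed': df['Temp'], 'reconstructed': reconstructed}).dropna()
print(comparison.head())
print("Max absolute difference:", (comparison['observed'] - comparison['reconstructed']).abs().max())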

I chose the additive model because when I looked at the pattern in the daily minimum temperatures, I noticed something important:

The line plot of the raw data shows the daily minimum temperatures from 1981 to 1990. We can clearly see a strong seasonal cycle that repeats each year: colder temperatures in winter, warmer in summer.

Importantly, the amplitude of these seasonal swings stays relatively consistent over time. For example, the temperature difference between summer and winter doesn't appear to grow or shrink over the years. This stability in seasonal variation is a key sign that the additive model is appropriate for decomposition, since the seasonal component appears to be independent of any trend.

We use an additive model when the trend is relatively stable and doesn't amplify or distort the seasonal pattern, and when the seasonality stays within a consistent range over time, even if there are minor fluctuations.

Now that we understand how the additive model works, let's explore the multiplicative model, which is often used when the seasonal effect scales with the trend. Seeing the contrast will also make the additive model clearer.

Consider a household's electricity consumption. Suppose the household uses 20% more electricity in summer compared to winter. That means the seasonal effect isn't a fixed amount; it's a proportion of their baseline usage.

Let's see how this looks with real numbers:

In 2021, the household used 300 kWh in winter and 360 kWh in summer (20% more than winter).

In 2022, their winter consumption increased to 330 kWh, and summer usage rose to 396 kWh (still 20% more than winter).

From one year to the next, the seasonal difference grows with the trend, from +60 kWh in 2021 to +66 kWh in 2022, even though the percentage increase stays the same. This is exactly the kind of behavior that a multiplicative model captures well.

In mathematical terms:
Y = T × S × R
Where:
Y: Observed value
T: Trend component
S: Seasonal component
R: Residual (noise)
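As an illustration only, here is a minimal sketch of a multiplicative decomposition in statsmodels. The toy monthly electricity series below is invented to mimic the household example (a slowly rising baseline with mid-year usage roughly 20% above it); it is not real data:

import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Toy monthly electricity usage: an upward-drifting baseline multiplied by a seasonal factor
index = pd.date_range("2021-01-01", periods=24, freq="MS")
baseline = pd.Series(range(300, 372, 3), index=index, dtype=float)  # slowly rising trend
seasonal_factor = pd.Series([1.0, 1.0, 1.05, 1.1, 1.15, 1.2,
                             1.2, 1.15, 1.1, 1.05, 1.0, 1.0] * 2, index=index)
usage = baseline * seasonal_factor

# Multiplicative decomposition: observed = trend x seasonal x residual
result = seasonal_decompose(usage, model='multiplicative', period=12)
print(result.seasonal.head(12))  # seasonal factors hover around 1.0 rather than around 0

Notice that the only change from the earlier snippet is model='multiplicative'; statsmodels then divides out the trend instead of subtracting it, so the seasonal component is expressed as a factor rather than an offset.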

By looking at the decomposition plot, we can decide whether an additive or a multiplicative model fits our data better.

There are also other powerful decomposition tools available, which I'll cover in an upcoming post. Now that we have a clear understanding of additive and multiplicative models, let's shift our focus to applying a baseline model that fits this dataset.

Based on the decomposition plot, we can see a strong seasonal pattern in the data, which suggests that a Seasonal Naive model might be a good fit for this time series.

This model assumes that the value at a given time will be the same as it was in the same period of the previous season, making it a simple yet effective choice when seasonality is dominant and consistent. For example, if temperatures typically follow the same yearly cycle, then the forecast for July 1st, 1990, would simply be the temperature recorded on July 1st, 1989.

Code:

import pandas as pd
import matplotlib.pyplot as plt
import numpy as np

# Load the dataset
df = pd.read_csv("minimum daily temperatures data.csv")

# Convert 'Date' column to datetime and set as index
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
df.set_index('Date', inplace=True)

# Ensure a regular daily frequency and fill missing values
df = df.asfreq('D')
df['Temp'] = df['Temp'].ffill()

# Step 1: Create the Seasonal Naive Forecast
seasonal_period = 365  # Assuming yearly seasonality for daily data
# Create the Seasonal Naive forecast by shifting the temperature values by 365 days
df['Seasonal_Naive'] = df['Temp'].shift(seasonal_period)

# Step 2: Plot the actual vs forecasted values
# Plot the last 2 years (730 days) of data for comparison
plt.figure(figsize=(12, 5))
plt.plot(df['Temp'][-730:], label='Actual')
plt.plot(df['Seasonal_Naive'][-730:], label='Seasonal Naive Forecast', linestyle='--')
plt.title('Seasonal Naive Forecast vs Actual Temperatures')
plt.xlabel('Date')
plt.ylabel('Temperature (°C)')
plt.legend()
plt.tight_layout()
plt.show()

# Step 3: Evaluate using MAPE (Mean Absolute Percentage Error)
# Use the last 365 days for testing
test = df[['Temp', 'Seasonal_Naive']].iloc[-365:].copy()
test.dropna(inplace=True)

# MAPE Calculation
mape = np.mean(np.abs((test['Temp'] - test['Seasonal_Naive']) / test['Temp'])) * 100
print(f"MAPE (Seasonal Naive Forecast): {mape:.2f}%")

Output:

Seasonal Naive Forecast vs. Actual Temperatures (1989–1990)

To keep the visualization clear and focused, we've plotted the last two years of the dataset (1989–1990) instead of all 10 years.

This plot compares the actual daily minimum temperatures in Melbourne with the values predicted by the Seasonal Naive model, which simply assumes that each day's temperature will be the same as it was on the same day one year earlier.

As seen in the plot, the Seasonal Naive forecast captures the broad shape of the seasonal cycles quite well; it mirrors the rise and fall of temperatures throughout the year. However, it doesn't capture day-to-day variations, nor does it respond to slight shifts in seasonal timing. This is expected, since the model is designed to repeat the previous year's pattern exactly, without adjusting for trend or noise.

To evaluate how well this model performs, we calculate the Mean Absolute Percentage Error (MAPE) over the final 365 days of the dataset (i.e., 1990). We only use this period because the Seasonal Naive forecast needs a full year of historical data before it can begin making predictions.

Mean Absolute Percentage Error (MAPE) is a commonly used metric for evaluating the accuracy of forecasting models. It measures the average absolute difference between the actual and predicted values, expressed as a percentage of the actual values.
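In formula form:
MAPE = (100 / n) × Σ |(A - F) / A|

Where:
A is the actual value at a given time
F is the forecasted value for that time
n is the number of points being evaluated

This is exactly the quantity computed by the np.mean(...) line in Step 3 of the code above, expressed as a percentage.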

In time series forecasting, we typically evaluate model performance on the most recent or target time period, not on the middle years. This reflects how forecasts are used in the real world: we build models on historical data to predict what comes next.

That's why we calculate MAPE only on the final 365 days of the dataset; this simulates forecasting the future and gives us a realistic measure of how well the model would perform in practice.

We get a MAPE of 28.23%, which gives us a baseline level of forecasting error. Any model we build next, whether customized or more advanced, should aim to outperform this benchmark.

A MAPE of 28.23% means that, on average, the model's predictions were 28.23% off from the actual daily temperature values over the last year.

In other words, if the true temperature on a given day was 10°C, the Seasonal Naive forecast might have been around 7.2°C or 12.8°C, reflecting a roughly 28% deviation.

I'll dive deeper into evaluation metrics in a future post.

In this post, we laid the foundation for time series forecasting by understanding how real-world data can be broken down into trend, seasonality, and residuals through decomposition. We explored the difference between additive and multiplicative models, implemented the Seasonal Naive baseline forecast and evaluated its performance using MAPE.

While the Seasonal Naive model is simple and intuitive, it comes with limitations, especially for this dataset. It assumes that the temperature on any given day is identical to the same day last year. But as the plot and the MAPE of 28.23% showed, this assumption doesn't hold perfectly. The data displays slight shifts in seasonal patterns and long-term variations that the model fails to capture.

In the next part of this series, we'll go further. We'll explore how to customize a baseline model, compare it to the Seasonal Naive approach and evaluate which one performs better using error metrics like MAPE, MAE and RMSE.

We'll also begin building the foundation needed to understand more advanced models like ARIMA, including key concepts such as:

  • Stationarity
  • Autocorrelation and Partial Autocorrelation 
  • Differencing
  • Lag-based modeling (AR and MA terms)

Part 2 will dive into these topics in more detail, starting with custom baselines and ending with the foundations of ARIMA.

Thanks for reading. I hope you found this post helpful and insightful.