Machine Learning Projects (Retail and Commerce)

Forecasting means predicting a future value using past values and many other factors. In this tutorial, we will build a sales forecasting model using the Keras functional API.

Sales forecasting

Sales forecasting is the process of estimating present-day or future sales using data such as past sales, seasonality, festivities, economic conditions, and so on.

The model built here will predict sales on a given day after being provided with a certain set of inputs.

This model uses 8 parameters as input (the date expands into three separate inputs, giving eight in total):

  1. Past seven days' sales
  2. Day of the week
  3. Date (transformed into 3 different inputs: date, month, and year)
  4. Season
  5. Festival or not
  6. Sales on the same day in the previous year

How does it work?

First, all inputs are preprocessed so that the machine can understand them. This is a regression model based on supervised learning, so each input is fed to the model together with its desired output. The model learns a relation (function) between the inputs and the output, and this function is then used to predict the output for a new set of inputs. In this case, parameters like the date and previous sales are labeled as inputs, and the sales amount is marked as the output.

The model predicts a number between 0 and 1, because a sigmoid function is used in the last layer. Multiplying this output by a specific number (in this case, the maximum sales value) gives the corresponding sales amount for that day. This output is then provided as an input to calculate the sales for the next day, and the cycle continues until the target date is reached.

 
 

Required packages and Installation

  1. numpy
  2. pandas
  3. keras
  4. tensorflow
  5. csv
  6. matplotlib.pyplot
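The imports for these packages might look as follows. Note that csv ships with the Python standard library, and Keras is bundled with TensorFlow, so only numpy, pandas, matplotlib, and tensorflow need installing via pip.

```python
# csv is part of the Python standard library; the rest install via
# `pip install numpy pandas matplotlib tensorflow`.
import csv

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
```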

The use of external libraries has been kept to a minimum to provide a simpler interface; you can replace the functions used in this tutorial with equivalents from established libraries.

The original data set contains sales data for 5 years:

Sales data from Jan 2015 to Dec 2019

As you can see, the sales data follows a similar pattern each year, and the peak sales value increases over the 5-year time frame.

In this 5-year time frame, the first 4 years will be used to train the model and the last year will be used as a test set. 

Now, a few helper functions were used for processing the dataset and creating inputs of the required shape and size. They are as follows:

  1. get_data – loads the data set from a path to its location.
  2. date_to_day – maps each date to its day of the week.
  3. date_to_enc – encodes data into one-hot vectors, which gives the model a better learning opportunity.

All the details of these functions and a few others cannot be covered here, as that would take too long. Please visit this link if you want to look at the entire code.
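Minimal sketches of what these helpers might look like; the bodies here are illustrative assumptions, not the original implementations, which are in the linked code.

```python
import datetime

import numpy as np
import pandas as pd

def get_data(path):
    """Load the sales CSV from `path` (assumes a standard CSV layout)."""
    return pd.read_csv(path)

def date_to_day(date_str):
    """Map a d/m/y date string to its day of the week."""
    return datetime.datetime.strptime(date_str, "%d/%m/%y").strftime("%A")

def date_to_enc(values):
    """One-hot encode a list of distinct values (e.g. years or days):
    each value maps to a float32 vector with a single 1."""
    mapping = {}
    for i, v in enumerate(values):
        vec = np.zeros(len(values), dtype=np.float32)
        vec[i] = 1.0
        mapping[v] = vec
    return mapping
```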

Preprocessing:

Initially, the data set had only two columns: date and traffic (sales).

After adding more columns and processing/normalizing the values, the data contained all of the following:

  1. Date
  2. Traffic
  3. Holiday or not
  4. Day

All these parameters have to be converted into a form the machine can understand, which is done by the conversion() function described below.

Instead of keeping date, month, and year as a single entity, the date was broken into three different inputs. The reason is that the year parameter would be the same for most inputs, which would cause the model to become complacent, i.e., to overfit the current dataset. To increase the variability between inputs, the date, day, and month were labeled separately. The following function, conversion(), creates six lists and appends the appropriate input to each. This is how the years 2015 to 2019 look as an encoding:

{2015: array([1., 0., 0., 0., 0.], dtype=float32), 2016: array([0., 1., 0., 0., 0.], dtype=float32), 2017: array([0., 0., 1., 0., 0.], dtype=float32), 2018: array([0., 0., 0., 1., 0.], dtype=float32), 2019: array([0., 0., 0., 0., 1.], dtype=float32)}

Each year is a NumPy array of length 5, with a single 1 marking its position.
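A minimal sketch of what conversion() might look like, assuming the six lists are the one-hot date, month, and year plus the day, season, and holiday values (this grouping is an assumption; the original code is at the link above):

```python
import numpy as np

def one_hot(value, choices):
    """Float32 one-hot vector with a 1 at the position of `value`."""
    vec = np.zeros(len(choices), dtype=np.float32)
    vec[choices.index(value)] = 1.0
    return vec

def conversion(dates, months, years, days, seasons, holidays):
    """Build six parallel input lists, one-hot encoding the date fields
    so the model sees no artificial ordinal relationship between them."""
    year_choices = [2015, 2016, 2017, 2018, 2019]
    out_dates, out_months, out_years = [], [], []
    out_days, out_seasons, out_holidays = [], [], []
    for d, m, y, day, s, h in zip(dates, months, years, days, seasons, holidays):
        out_dates.append(one_hot(d, list(range(1, 32))))
        out_months.append(one_hot(m, list(range(1, 13))))
        out_years.append(one_hot(y, year_choices))
        out_days.append(day)
        out_seasons.append(s)
        out_holidays.append(h)
    return out_dates, out_months, out_years, out_days, out_seasons, out_holidays
```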

We will now process the remaining inputs. The reason for using all these parameters is to increase the accuracy of the model; you can experiment with removing or adding some of them.

Sales data from the past seven days was passed as an input to establish a trend, so the predicted value will not be completely random. Similarly, sales data from the same day in the previous year was also provided.

The following function, other_inputs(), processes three inputs:

  • sales data of past seven days
  • sales data on the same date in the previous year
  • seasonality – seasonality was added to mark trends like summer sales, etc.
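A sketch of how other_inputs() might build these three arrays from a date-ordered daily sales list; the quarter-based season id is an illustrative assumption:

```python
import numpy as np

def other_inputs(sales):
    """From a daily sales series (ordered by date), build the three
    extra inputs described above, starting one year into the series so
    a same-day-last-year value always exists."""
    week, year_back, season = [], [], []
    for i in range(365, len(sales)):
        week.append(sales[i - 7:i])          # past seven days' sales
        year_back.append(sales[i - 365])     # same day one year earlier
        # crude season id from day-of-year: 0..3 for quarters of the year
        season.append((i % 365) // 92)
    return (np.array(week, dtype=np.float32),
            np.array(year_back, dtype=np.float32),
            np.array(season))
```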

The reason for so many separate inputs is that if all of them were combined into a single array, its rows or columns would have different lengths. Such a ragged array cannot be fed as an input.

Linearly arranging all the values in a single array led to the model having a high loss.

A linear arrangement causes the model to over-generalize: since successive inputs would not differ much, learning is limited, which decreases the accuracy of the model.

Defining the Model

Eight separate inputs are processed and concatenated into a single layer and passed to the model.

The finalized inputs are as follows:

  1. Date
  2. Month
  3. Year
  4. Day
  5. Previous seven days' sales
  6. Sales on the same day in the previous year
  7. Season
  8. Holiday or not

In most layers, I have used 5 units as the output shape; you can experiment further with this to increase the efficiency of the model.
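A hypothetical reconstruction of the eight-input functional model: the input sizes follow the encodings above, and the 5-unit hidden layers follow the choice mentioned in the text, but the layer counts and activations are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Eight separate inputs; sizes match the one-hot encodings above.
date_in    = layers.Input(shape=(31,), name="date")
month_in   = layers.Input(shape=(12,), name="month")
year_in    = layers.Input(shape=(5,), name="year")
day_in     = layers.Input(shape=(7,), name="day")
week_in    = layers.Input(shape=(7,), name="past_week_sales")
prev_in    = layers.Input(shape=(1,), name="last_year_sales")
season_in  = layers.Input(shape=(4,), name="season")
holiday_in = layers.Input(shape=(1,), name="holiday")

inputs = [date_in, month_in, year_in, day_in,
          week_in, prev_in, season_in, holiday_in]

# Process each input separately, then concatenate into a single layer.
branches = [layers.Dense(5, activation="relu")(inp) for inp in inputs]
merged = layers.concatenate(branches)
x = layers.Dense(5, activation="relu")(merged)

# Sigmoid keeps the output in (0, 1); it is rescaled by max sales later.
output = layers.Dense(1, activation="sigmoid")(x)

model = Model(inputs=inputs, outputs=output)
```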


Model Summary:

Compiling the model using RMSprop:

RMSprop copes well with noisy, non-stationary objectives, hence its use here.
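The compile step might look like this. A minimal stand-in model is defined so the snippet runs on its own, and mean squared error is an assumed loss, since the article does not state which one it uses.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Minimal stand-in model to demonstrate the compile step; the real
# model is the eight-input network defined earlier.
inp = layers.Input(shape=(7,))
hidden = layers.Dense(5, activation="relu")(inp)
out = layers.Dense(1, activation="sigmoid")(hidden)
model = Model(inp, out)

# RMSprop optimizer; mse loss is an assumption for this regression task.
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001),
              loss="mse", metrics=["mae"])
model.summary()
```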


Fitting the model on the dataset:

The model is now fed with the input and output data. This is the final training step, after which our model will be able to predict sales data.
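The fitting call might look like the sketch below. A tiny stand-in model and random arrays keep it self-contained; the real call passes the eight input arrays with the normalized sales as the target.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

# Stand-in single-input model; the real model takes eight inputs.
inp = layers.Input(shape=(7,))
out = layers.Dense(1, activation="sigmoid")(inp)
model = Model(inp, out)
model.compile(optimizer="rmsprop", loss="mse")

# Random placeholder data with the right shapes, for illustration only.
x = np.random.rand(32, 7).astype("float32")
y = np.random.rand(32, 1).astype("float32")

history = model.fit(x, y, epochs=2, batch_size=8, verbose=0)
```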


Output:

Now, to test the model, the input() function takes a date and transforms it into the appropriate form:
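A sketch of this input-preparation step, renamed transform_date here to avoid shadowing Python's built-in input(); the encoding sizes follow the earlier sections, and the field set is an assumption.

```python
import datetime

import numpy as np

def transform_date(date_str):
    """Turn a d/m/Y date string into the one-hot date, month, year,
    and day-of-week vectors the model expects."""
    dt = datetime.datetime.strptime(date_str, "%d/%m/%Y")

    def one_hot(index, size):
        vec = np.zeros(size, dtype=np.float32)
        vec[index] = 1.0
        return vec

    return {
        "date":  one_hot(dt.day - 1, 31),
        "month": one_hot(dt.month - 1, 12),
        "year":  one_hot(dt.year - 2015, 5),   # years 2015..2019
        "day":   one_hot(dt.weekday(), 7),     # Monday = 0
    }
```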


Predicting a single day's sales is not what we are here for, though, so let's get on with the forecasting job.

Sales Forecasting

Define the forecast_testing function to forecast sales for the year leading up to a provided date:

This function works as follows:

  • A date is required as input; sales are forecast from one year back up to that date.
  • We access the previous year's sales data on the same day, along with the sales data of the 7 days before it.
  • Using these as inputs, a new value is predicted; the first day is then dropped from the seven-day window and the predicted output is appended as the input for the next prediction.

For example, suppose we require a forecast for the year up to 31/12/2019:

  • First, the date one year back, 31/12/2018, is recorded, along with the seven-day sales from 25/12/2018 to 31/12/2018.
  • Then the sales data from one further year back, i.e., 31/12/2017, is collected.
  • Using these along with the other inputs, the first day's sales (i.e., 1/1/2019) are predicted.
  • Then the 25/12/2018 sales data is dropped from the window and the predicted 1/1/2019 sales are added. This cycle repeats until the sales for 31/12/2019 are predicted.

So, previous outputs are used as inputs.
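The rolling-window logic above can be sketched independently of the model; here predict is a stand-in for the model call:

```python
def forecast(week, predict, steps):
    """Roll a forecast forward `steps` days. `week` holds the last
    seven days' sales; `predict` maps a seven-day window to the next
    day's sales (in the tutorial, the trained model)."""
    window = list(week)
    out = []
    for _ in range(steps):
        nxt = predict(window)
        out.append(nxt)
        window = window[1:] + [nxt]  # drop the oldest day, append the prediction
    return out
```

For instance, with a stand-in that predicts one more than the last value, forecast([1, 2, 3, 4, 5, 6, 7], lambda w: w[-1] + 1, 2) returns [8, 9].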


Run the forecast_testing function; it returns a list containing the forecast sales for that one year:

result = forecast_testing('31/12/2019', date)

Plot graphs for both the forecast and the actual values to test the performance of the model:
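Such a comparison plot can be produced with matplotlib; the short lists here are placeholders for the forecast result and the 2019 actuals.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs anywhere
import matplotlib.pyplot as plt

# Placeholder series; substitute the forecast result and the actual
# sales from 1/1/2019 to 31/12/2019.
predicted = [10, 12, 11, 13]
actual = [9, 12, 12, 14]

plt.figure(figsize=(8, 4))
plt.plot(predicted, label="forecast")
plt.plot(actual, label="actual")
plt.xlabel("day")
plt.ylabel("sales")
plt.legend()
plt.savefig("forecast_vs_actual.png")
```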


Actual Values from 1/1/2019 to 31/12/2019

Comparison between prediction and actual values

As you can see, the predicted and actual values are quite close to each other, which demonstrates the accuracy of our model. If there are any errors or possible improvements to the above article, please feel free to mention them in the comment section.