Naïve Forecasting: A Tool to Compare Forecast Models
DOI: https://doi.org/10.3126/njmathsci.v4i1.53156

Abstract
In this study, a newly fitted forecast model is compared against a standard benchmark procedure. The article first distinguishes between two often-conflated ideas: a forecast model's accuracy measure taken on its own, and a comparison of forecast models in terms of their relative and absolute accuracy measures across various scenarios. A forecast model's accuracy measure by itself does not give a complete picture of how much better a newly fitted model is than benchmark models built from the same dataset.
This article illustrates the comparison by pitting a multiple regression model, as the newly fitted model, against the naive forecasting method, a well-known benchmark in the forecasting literature, using cross-validation techniques. The performance of the two forecast models was assessed with two widely used accuracy measures, Mean Absolute Error (MAE) and Mean Absolute Percentage Error (MAPE). The multiple regression model outperformed the naive method on both MAE and MAPE, indicating that it is a worthwhile fit.
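As a minimal sketch of this procedure, the snippet below compares a multiple regression model with the naive (last-observed-value) forecast under rolling-origin cross-validation, scoring both with MAE and MAPE. The synthetic series, the two predictors, and the use of scikit-learn are illustrative assumptions; the paper's actual dataset and software are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import TimeSeriesSplit

# Hypothetical data: a synthetic series driven by two predictors,
# standing in for the dataset used in the paper.
rng = np.random.default_rng(0)
n = 120
X = rng.normal(size=(n, 2))                       # two explanatory variables
y = 50 + 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=2, size=n)

def mae(actual, forecast):
    return np.mean(np.abs(actual - forecast))

def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual)) * 100

reg_mae, reg_mape, naive_mae, naive_mape = [], [], [], []

# Rolling-origin cross-validation: fit on the past, forecast the next block.
for train_idx, test_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = LinearRegression().fit(X[train_idx], y[train_idx])
    reg_fc = model.predict(X[test_idx])

    # Naive forecast: the last observed value persists over the test horizon.
    naive_fc = np.full(len(test_idx), y[train_idx][-1])

    reg_mae.append(mae(y[test_idx], reg_fc))
    reg_mape.append(mape(y[test_idx], reg_fc))
    naive_mae.append(mae(y[test_idx], naive_fc))
    naive_mape.append(mape(y[test_idx], naive_fc))

print(f"Regression: MAE={np.mean(reg_mae):.2f}, MAPE={np.mean(reg_mape):.2f}%")
print(f"Naive:      MAE={np.mean(naive_mae):.2f}, MAPE={np.mean(naive_mape):.2f}%")
```

Averaging the fold-level errors yields the kind of relative comparison the abstract describes: if the regression model's mean MAE and MAPE fall below the naive benchmark's, the new model adds value beyond the trivial forecast.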
In summary, it is crucial to compare a newly developed forecast model against benchmark models to evaluate its performance accurately. This process allows the most suitable forecasting method to be identified for a specific context and encourages the development of improved techniques for comparing forecast models in the future.
License
© School of Mathematical Sciences, Tribhuvan University