Research Article

Multi-Location Demand Forecasting in FMCG via Deep Reinforcement Learning

Year 2025, Volume: 9 Issue: 2, 30 - 36, 30.09.2025
https://doi.org/10.34110/forecasting.1681404

Abstract

In this study, we propose a unified model for forecasting the daily demand of Fast-Moving Consumer Goods (FMCG) across multiple restaurant locations. Unlike traditional machine learning approaches that require prior segmentation of restaurants and products or separate forecasting models for each combination, our approach enables a single model to predict sales for multiple products and locations simultaneously. To achieve this, we trained and evaluated reinforcement learning (RL) models using key features such as pricing, holidays, weather conditions, and USD exchange rates. The study utilized daily sales data spanning from January 1, 2022, to October 14, 2024, covering three restaurants and two products. We experimented with several RL-based models, including Deep Q-Network (DQN), Convolutional Deep Q-Network (CDQN), Long Short-Term Memory (LSTM)-based RL, and Recurrent Neural Network (RNN)-based RL, comparing their performance using Mean Absolute Percentage Error (MAPE) and Mean Squared Error (MSE) as evaluation metrics. Experimental results indicate that the DQN model achieved the highest predictive accuracy, outperforming the other approaches. The proposed forecasting model can significantly contribute to price optimization, inventory management, and strategic decision-making, offering businesses a more efficient way to anticipate demand without the need for extensive segmentation or multiple independent models.
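The article page carries no code, so the following is a minimal, hypothetical Python/PyTorch sketch of one way the value-based RL framing described above could look: each day's features (price, holiday flag, weather index, USD rate, location and product identifiers) form the state, the agent "acts" by choosing a discretized demand level, and the reward penalizes the gap to observed sales. The feature names, network sizes, 50-bin action space, one-step (no bootstrapping) target, and synthetic data are illustrative assumptions, not the authors' configuration; only the MAPE and MSE metrics mirror the paper's evaluation.

# Hypothetical sketch: framing daily FMCG demand forecasting as a DQN-style task.
# All names, sizes, and the synthetic data below are assumptions for illustration,
# not the authors' dataset or hyperparameters.
import numpy as np
import torch
import torch.nn as nn

N_BINS = 50      # discretized demand levels (assumed action space)
STATE_DIM = 6    # price, holiday flag, weather index, USD rate, location id, product id

class QNetwork(nn.Module):
    """Maps a daily feature vector to Q-values over discretized demand bins."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def reward(predicted_bin: int, true_bin: int) -> float:
    # Negative absolute bin error: closer predictions earn higher reward.
    return -abs(predicted_bin - true_bin)

# --- synthetic stand-in for the (location, product, day) feature table ---
rng = np.random.default_rng(0)
states = rng.normal(size=(512, STATE_DIM)).astype(np.float32)
true_bins = rng.integers(0, N_BINS, size=512)

q_net = QNetwork(STATE_DIM, N_BINS)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
epsilon = 0.1  # epsilon-greedy exploration rate

for step in range(200):
    idx = rng.integers(0, len(states), size=32)
    s = torch.from_numpy(states[idx])
    q_values = q_net(s)
    # Epsilon-greedy choice of a predicted demand bin per sample.
    greedy = q_values.argmax(dim=1)
    random_a = torch.randint(0, N_BINS, greedy.shape)
    explore = torch.rand(greedy.shape) < epsilon
    actions = torch.where(explore, random_a, greedy)
    r = torch.tensor([reward(int(a), int(t)) for a, t in zip(actions, true_bins[idx])],
                     dtype=torch.float32)
    # One-step (bandit-style) target: each day is treated independently, no bootstrapping.
    chosen_q = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(chosen_q, r)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# --- evaluation with the metrics used in the paper ---
with torch.no_grad():
    pred_bins = q_net(torch.from_numpy(states)).argmax(dim=1).numpy()
mse = float(np.mean((pred_bins - true_bins) ** 2))
mape = float(np.mean(np.abs(pred_bins - true_bins) / np.maximum(true_bins, 1))) * 100
print(f"MSE={mse:.2f}  MAPE={mape:.1f}%")

Swapping the feed-forward encoder above for a convolutional or recurrent encoder over a window of past days would give CDQN-, LSTM-, or RNN-style variants along the lines compared in the abstract; the training and evaluation loop would stay the same.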

Ethical Statement

This study does not involve any human participants, animal subjects, or sensitive personal data. Therefore, ethical approval was not required.


Details

Primary Language English
Subjects Deep Learning, Neural Networks, Machine Learning (Other)
Journal Section Articles
Authors

Sergül Ürgenç 0000-0003-1965-4488

Ata Osman Özgüz 0009-0002-8303-8348

Publication Date September 30, 2025
Submission Date April 22, 2025
Acceptance Date September 28, 2025
Published in Issue Year 2025 Volume: 9 Issue: 2

Cite

APA Ürgenç, S., & Özgüz, A. O. (2025). Multi-Location Demand Forecasting in FMCG via Deep Reinforcement Learning. Turkish Journal of Forecasting, 9(2), 30-36. https://doi.org/10.34110/forecasting.1681404
