FLOOD FORECASTING USING NEURAL NETWORKS: APPLYING THE LSTM NETWORK IN THE MOSUL REGION, IRAQ
Year 2022, Volume: 6, Issue: 1, pp. 113–116, 20.07.2022
Abdullahi Abdu Ibrahim, Ayad Khalaf Jırrı Halboosh
Abstract
Flooding is one of the most dangerous natural disasters, inflicting harm on both life and property every year. Building a flood model that predicts the inundation zone in a watershed is therefore critical for decision-makers. Floods are a perilous hazard that annually threatens Iraq and the wider Middle East, affecting millions of people. In this context, suitable flood forecasting algorithms can reduce property damage and save lives by warning communities of potentially severe flooding events ahead of time.
Data mining techniques such as artificial neural network (ANN) approaches have recently been applied to flood modelling. The purpose of this study is to develop a model that extrapolates the past into the future using existing statistical models and recurrent neural networks, driven by rainfall forecasting data. We investigate a number of time series forecasting approaches, including Long Short-Term Memory (LSTM) networks. The forecasting methods investigated are implemented and tested on rainfall data from the Mosul region of Iraq. In addition, we examine recorded flood occurrences and conduct experiments to study the relationship between rainfall and floods.
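The abstract does not include the model itself, but the core mechanism it relies on, the LSTM cell, can be illustrated compactly. The following is a minimal sketch, not the authors' implementation: a single LSTM cell stepped over a synthetic rainfall series in NumPy, with all dimensions, weights, and data hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. W, U, b hold the four gates stacked
    row-wise in order: input (i), forget (f), candidate (g), output (o)."""
    n_hid = h_prev.shape[0]
    z = W @ x + U @ h_prev + b                 # pre-activations, shape (4*n_hid,)
    i = sigmoid(z[0:n_hid])                    # input gate
    f = sigmoid(z[n_hid:2 * n_hid])            # forget gate
    g = np.tanh(z[2 * n_hid:3 * n_hid])        # candidate cell update
    o = sigmoid(z[3 * n_hid:4 * n_hid])        # output gate
    c = f * c_prev + i * g                     # new cell state
    h = o * np.tanh(c)                         # new hidden state
    return h, c

# Hypothetical setup: 1 input feature (daily rainfall, mm), 8 hidden units.
rng = np.random.default_rng(0)
n_in, n_hid = 1, 8
W = rng.normal(0.0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0.0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

rainfall = rng.gamma(2.0, 3.0, size=30)        # synthetic 30-day series
h, c = np.zeros(n_hid), np.zeros(n_hid)
for x_t in rainfall:
    h, c = lstm_step(np.array([x_t]), h, c, W, U, b)
# h now summarizes the whole sequence; in a trained forecaster,
# a linear output layer on h would produce the next-day prediction.
```

In a real model the weights would be learned by backpropagation through time (e.g. with a deep learning framework) rather than sampled at random; this sketch only shows how the gates combine past state and current rainfall.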
Supporting Institution
ALTINBAS UNIVERSITY, Istanbul, Turkey
Thanks
Please accept my heartfelt gratitude for all your support and encouragement! You are so helpful, kind, and generous with your time and energy. Thank you so much for being an excellent supervisor.
References
- [1] S. Mei, A. Montanari, and P.-M. Nguyen, “A mean field view of the landscape of two-layer neural networks,” Proc. Natl. Acad. Sci. U. S. A., vol. 115, no. 33, pp. E7665–E7671, 2018.
- [2] V. Gulshan et al., “Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs,” JAMA, vol. 316, no. 22, p. 2402, 2016.
- [3] Y. Liu, S. Liu, Y. Wang, F. Lombardi, and J. Han, “A survey of stochastic computing neural networks for machine learning applications,” IEEE Trans. Neural Netw. Learn. Syst., vol. 32, no. 7, pp. 2809–2824, 2021.
- [4] J. Bayer, D. Wierstra, J. Togelius, and J. Schmidhuber, “Evolving memory cell structures for sequence learning,” in Artificial Neural Networks – ICANN 2009, Berlin, Heidelberg: Springer Berlin Heidelberg, 2009, pp. 755–764.
- [5] G. Bellec, D. Salaj, A. Subramoney, R. Legenstein, and W. Maass, “Long short-term memory and learning-to-learn in networks of spiking neurons,” 2018, pp. 787–797.
- [6] H. Gao, J. Mao, J. Zhou, Z. Huang, L. Wang, and W. Xu, “Are you talking to a machine? Dataset and methods for multilingual image question,” 2015, pp. 2296–2304.
- [7] J. Gong, X. Chen, T. Gui, and X. Qiu, “Switch-LSTMs for multi-criteria Chinese word segmentation,” 2019, vol. 33, pp. 6457–6464.
- [8] K. Greff, R. K. Srivastava, J. Koutnik, B. R. Steunebrink, and J. Schmidhuber, “LSTM: A search space odyssey,” IEEE Trans. Neural Netw. Learn. Syst., vol. 28, no. 10, pp. 2222–2232, 2017.
- [9] C. R., K. K. R., and D. E. Newton, “DRTHIS: Deep ransomware threat hunting and intelligence system at the fog layer,” Future Generation Computer Systems, vol. 90, pp. 94–104, 2019.
- [10] T. Horsmann and T. Zesch, “Do LSTMs really work so well for POS tagging? A replication study,” 2017, pp. 727–736.
- [11] K. Cho, B. van Merriënboer, Ç. Gülçehre, F. Bougares, H. Schwenk, and Y. Bengio, “Learning phrase representations using RNN encoder-decoder for statistical machine translation,” arXiv preprint arXiv:1406.1078, 2014.