Review
Year 2025, Volume: 11, Issue: 1, 96-115, 29.06.2025
https://doi.org/10.51477/mejs.1640908

References

  • Hinton, G. E., Osindero, S., Teh, Y. W., “A fast learning algorithm for deep belief nets”, Neural Computation, 18(7), 1527–1554, 2006.
  • Hao, X., Zhang, G., “Technical survey deep learning”, International Journal of Neural Systems, 10(3), 417–439, 2016.
  • Krizhevsky, A., Sutskever, I., Hinton, G. E., “ImageNet classification with deep convolutional neural networks”, Advances in Neural Information Processing Systems, 25, 1097–1105, 2012.
  • Koch, C., “How the computer beat the Go player”, Scientific American Mind, 27(4), 20–23, 2016.
  • Khan, S., Islam, N., Jan, Z., Din, I. U., Rodrigues, J. J. P. C., “A novel deep learning based framework for the detection and classification of breast cancer using transfer learning”, Computers in Biology and Medicine, 125, 1–6, 2019.
  • Yıldırım, Ö., Pławiak, P., Tan, R., Acharya, U. R., “Arrhythmia detection using deep convolutional neural network with long duration ECG signals”, Computers in Biology and Medicine, 102, 411–420, 2018.
  • Mohsen, H., El-Dahshan, E.-S. A., El-Horbaty, E.-S. M., Salem, A.-B. M., “Classification using deep learning neural networks for brain tumors”, Future Computing and Informatics Journal, 3(1), 68–71, 2018.
  • Bulut, M. G., Unal, S., Hammad, M., Pławiak, P., “Deep CNN-based detection of cardiac rhythm disorders using PPG signals from wearable devices”, PLOS ONE, 20(2), 2025.
  • Fukushima, K., “Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position”, Biological Cybernetics, 36, 193–202, 1980.
  • LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., “Backpropagation applied to handwritten zip code recognition”, Neural Computation, 1(4), 541–551, 1989.
  • Zeiler, M. D., Fergus, R., “Visualizing and understanding convolutional networks”, Proceedings of European Conference on Computer Vision, Zurich, Switzerland, pp. 818–833, 2014.
  • Simonyan, K., Zisserman, A., “Very deep convolutional networks for large-scale image recognition”, Proceedings of 3rd International Conference on Learning Representations (ICLR), San Diego, USA, 2015.
  • Rubin, J., Abreu, R., Ganguli, A., Nelaturi, S., Matei, I., Sricharan, K., “Recognizing abnormal heart sounds using deep learning”, Proceedings of Computing in Cardiology Conference, Rennes, France, 2017.
  • Ozturk, T., Talo, M., Azra, E., Baran, U., Yildirim, O., “Automated detection of COVID-19 cases using deep neural networks with X-ray images”, Computers in Biology and Medicine, 121, 2020.
  • Tsirtsakis, P., Zacharis, G., Maraslidis, G. S., Fragulis, G. F., “Deep learning for object recognition: A comprehensive review of models and algorithms”, International Journal of Cognitive Computing in Engineering, 6, 298–312, 2025.
  • Panwar, M., et al., “CNN based approach for activity recognition using a wrist-worn accelerometer”, Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Jeju Island, South Korea, pp. 2438–2441, 2017.
  • Silver, D., et al., “Mastering chess and shogi by self-play with a general reinforcement learning algorithm”, arXiv preprint, arXiv:1712.01815, 2017.
  • Nelson, D. M. Q., Pereira, A. C. M., De Oliveira, R. A., “Stock market’s price movement prediction with LSTM neural networks”, Proceedings of International Joint Conference on Neural Networks (IJCNN), Anchorage, USA, vol. 2017-May, pp. 1419–1426, 2017.
  • Wang, Y., Li, B., Todo, Y., “Enhancing robustness of object detection: Hubel–Wiesel model connected with deep learning”, Knowledge-Based Systems, 311, 2025.
  • Deutsch, S., Biological Cybernetics: A Simplified Version of Kunihiko Fukushima’s Neocognitron. Tokyo: Springer, 1981.
  • McCulloch, W. S., Pitts, W., “A logical calculus of the ideas immanent in nervous activity”, Bulletin of Mathematical Biophysics, 5, 115–133, 1943.
  • Hubel, D. H., Wiesel, T. N., “Shape and arrangement of columns in cat’s striate cortex”, Journal of Physiology, 165(3), 559–568, 1963.
  • Fukushima, K., “Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position”, Biological Cybernetics, 36(4), 193–202, 1980.
  • LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., “Gradient-based learning applied to document recognition”, Proceedings of the IEEE, 86(11), 2278–2323, 1998.
  • Szegedy, C., et al., “Going deeper with convolutions”, IEEE Conference on Computer Vision and Pattern Recognition, 1–9, 2015.
  • Mahyudin, R., et al., “Design of Automated Smart Attendance System Using Deep Learning Based Face Recognition”, Proceedings of 9th International Conference on Science, Technology, Engineering and Mathematics (ICONSTEM), Chennai, India, 2024.
  • Mnih, V., et al., “Human-level control through deep reinforcement learning”, Nature, 518(7540), 529–533, 2015.
  • Banerjee, A., Swain, S., Rout, M., Bandyopadhyay, M., “Composite spectral spatial pixel CNN for land-use hyperspectral image classification with hybrid activation function”, Multimedia Tools and Applications, 1-24, 2024.
  • Wu, T., et al., “A brief overview of ChatGPT: The history, status quo and potential future development”, IEEE/CAA Journal of Automatica Sinica, 10(5), 1122–1136, 2023.
  • Sharifuzzaman Sagar, A. S. M., Chen, Y., Xie, Y. K., Kim, H. S., “MSA R-CNN: A comprehensive approach to remote sensing object detection and scene understanding”, Expert Systems with Applications, 241, 2024.
  • Berner, C., et al., “Dota 2 with Large Scale Deep Reinforcement Learning”, arXiv preprint, arXiv:1912.06680, 2019.
  • Zhang, Y., Qian, H., Zhang, J., Shi, Z., “Action Recognition Networks Based on Spatio-Temporal Motion Modules”, Proceedings of 2024 International Conference on Advances in Electrical Engineering and Computer Applications (AEECA), Shenyang, China, pp. 436–440, 2024.
  • Wang, Y., et al., “RingMo-Lite: A Remote Sensing Lightweight Network With CNN-Transformer Hybrid Framework”, IEEE Transactions on Geoscience and Remote Sensing, 62, 1–20, 2024.
  • Hochreiter, S., Schmidhuber, J., “Long Short-Term Memory”, Neural Computation, 9(8), 1735–1780, 1997.
  • Rumelhart, D. E., Hinton, G. E., Williams, R. J., “Learning representations by back-propagating errors”, Nature, 323(6088), 533–536, 1986.
  • Kirov, C., Cotterell, R., “Recurrent Neural Networks in Linguistic Theory: Revisiting Pinker and Prince (1988) and the Past Tense Debate”, Transactions of the Association for Computational Linguistics, 6, 2018.
  • Hochreiter, S., Schmidhuber, J., “Long Short-Term Memory”, Neural Computation, 9(8), 1735–1780, 1997.
  • Sun, L., Jia, K., Chen, K., Yeung, D. Y., Shi, B. E., Savarese, S., “Lattice Long Short-Term Memory for Human Action Recognition”, Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, pp. 2147–2156, 2017.
  • Zhou, J., Lu, Y., Dai, H. N., Wang, H., Xiao, H., “Sentiment analysis of Chinese microblog based on stacked bidirectional LSTM”, IEEE Access, 7, 38856–38866, 2019.
  • Chen, Y., Yuan, J., You, Q., Luo, J., “Twitter Sentiment Analysis via Bi-sense Emoji Embedding”, Proceedings of the 26th ACM International Conference on Multimedia, Seoul, South Korea, pp. 117–125, 2018.
  • Kim, T., Kim, H. Y., “Forecasting stock prices with a feature fusion LSTM-CNN model using different representations of the same data,” PLOS ONE, 14(2), 2019.
  • Tian, Y., Zhang, K., Li, J., Lin, X., Yang, B., “LSTM-based traffic flow prediction with missing data,” Neurocomputing, 318, 297–305, 2018.
  • Karevan, Z., Suykens, J. A. K., “Spatio-temporal stacked LSTM for temperature prediction in weather forecasting,” arXiv preprint, arXiv:1811.06341, 2018.
  • Yang, F., Zhang, S., Li, W., Miao, Q., “State-of-charge estimation of lithium-ion batteries using LSTM and UKF,” Energy, 201, 117664, 2020.
  • Dehouche, N., Dehouche, K., “What’s in a text-to-image prompt? The potential of stable diffusion in visual arts education,” Heliyon, 9(6), e16757, 2023.
  • Chamola, V., et al., “Beyond reality: The pivotal role of generative AI in the Metaverse,” arXiv preprint, arXiv:2308.06272, 2023.
  • Akhtar, Z., “Deepfakes generation and detection: A short survey,” Journal of Imaging, 9(1), 2023.
  • Privacy Affairs, “What are Deepfakes, their threats, and how to avoid them?,” [Online]. Available: https://www.privacyaffairs.com/deepfakes/
  • Goodfellow, I. J., et al., “Generative adversarial nets,” Advances in Neural Information Processing Systems, 27, 2014.
  • Huang, X., Liu, M., Belongie, S., Kautz, J., “Multimodal unsupervised image-to-image translation,” Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, pp. 172–189, 2018.
  • Karras, T., Aila, T., Laine, S., Lehtinen, J., “Progressive growing of GANs for improved quality, stability, and variation,” Proceedings of 6th International Conference on Learning Representations (ICLR), Vancouver, Canada, 2018.
  • Li, J., Monroe, W., Shi, T., Jean, S., Ritter, A., Jurafsky, D., “Adversarial learning for neural dialogue generation,” Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), Copenhagen, Denmark, pp. 2157–2169, 2017.
  • Hu, W., Tan, Y., “Generating adversarial malware examples for black-box attacks based on GAN,” arXiv preprint, arXiv:1702.05983, 2017.
  • Chidambaram, M., Qi, Y., “Style transfer generative adversarial networks: Learning to play chess differently,” arXiv preprint, arXiv:1702.06762, 2017.
  • Thorndike, E. L., “Animal intelligence: An experimental study of the associative processes in animals,” The Psychological Review Monograph Supplements, 2(4), i–109, 1898.
  • Thorndike, E. L., “Animal intelligence,” Psychological Review Monographs, 2(4), 1898.
  • Kaelbling, L. P., Littman, M. L., Moore, A. W., “Reinforcement learning: A survey,” Journal of Artificial Intelligence Research, 4, 237–285, 1996.
  • Silver, D., et al., “Mastering the game of Go with deep neural networks and tree search,” Nature, 529(7587), 484–489, 2016.
  • Lancaster, T., “Artificial intelligence, text generation tools and ChatGPT – does digital watermarking offer a solution?,” International Journal for Educational Integrity, 19(1), 1–14, 2023.
  • Teubner, T., Flath, C. M., Weinhardt, C., van der Aalst, W., Hinz, O., “Welcome to the Era of ChatGPT et al.: The Prospects of Large Language Models,” Business and Information Systems Engineering, 65(2), 95–101, 2023.
  • 365 Data Science, “The Evolution of ChatGPT: History and Future,” [Online]. Available: https://365datascience.com/trending/the-evolution-of-chatgpt-history-and-future/.
  • Singh, H., Singh, A., “ChatGPT: Systematic Review, Applications, and Agenda for Multidisciplinary Research,” Journal of Chinese Economic and Business Studies, 21(2), 193–212, 2023.
  • Heinrich, J., Silver, D., “Deep Reinforcement Learning from Self-Play in Imperfect-Information Games,” arXiv preprint, arXiv:1603.01121, 2016.
  • Gu, S., Holly, E., Lillicrap, T., Levine, S., “Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates,” Proceedings of IEEE International Conference on Robotics and Automation (ICRA), Singapore, pp. 3389–3396, 2017.
  • El Sallab, A., Abdou, M., Perot, E., Yogamani, S., “Deep reinforcement learning framework for autonomous driving,” IS&T International Symposium on Electronic Imaging, Burlingame, USA, pp. 70–76, 2017.
  • Zhou, Z., Li, X., Zare, R. N., “Optimizing chemical reactions with deep reinforcement learning,” ACS Central Science, 3(12), 1337–1344, 2017.
  • Zheng, G., et al., “DRN: A deep reinforcement learning framework for news recommendation,” The Web Conference (WWW), 2, 167–176, 2018.
  • Bromley, J., Bentz, J., Bottou, L., Guyon, I., LeCun, Y., Moore, C., “Signature verification using a ‘siamese’ time delay neural network,” International Journal of Pattern Recognition and Artificial Intelligence, 7(4), 669–688, 1993.
  • Chicco, D., “Siamese neural networks: An overview,” Artificial Neural Networks, pp. 73–94, 2021.
  • Koch, G., “Siamese neural networks for one-shot image recognition,” [Online]. Available: http://www.cs.toronto.edu/~gkoch/files/msc-thesis.pdf, 2015.
  • Ahrabian, K., BabaAli, B., “Usage of autoencoders and siamese networks for online handwritten signature verification,” Neural Computing and Applications, 31(12), 9321–9334, 2019.
  • Song, L., Gong, D., Li, Z., Liu, C., Liu, W., “Occlusion robust face recognition based on mask learning with pairwise differential siamese network,” Proceedings of IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, South Korea, pp. 773–782, 2019.
  • Jindal, S., Gupta, G., Yadav, M., Sharma, M., Vig, L., “Siamese networks for chromosome classification,” Proceedings of IEEE International Conference on Computer Vision Workshops (ICCVW), Venice, Italy, pp. 72–81, 2017.
  • Schlesinger, O., Vigderhouse, N., Eytan, D., Moshe, Y., “Blood pressure estimation from PPG signals using convolutional neural networks and siamese network,” Proceedings of ICASSP 2020 - IEEE International Conference on Acoustics, Speech and Signal Processing, Barcelona, Spain, pp. 1135–1139, 2020.
  • Hinton, G. E., Krizhevsky, A., Wang, S. D., “Transforming auto-encoders,” Proceedings of 21st International Conference on Artificial Neural Networks (ICANN), Espoo, Finland, pp. 44–51, 2011.
  • Choudhary, S., Saurav, S., Saini, R., Singh, S., “Capsule networks for computer vision applications: A comprehensive review,” Applied Intelligence, 53, 21799–21826, 2023.
  • Li, J., et al., “A survey on capsule networks: Evolution, application, and future development,” Proceedings of 2021 International Conference on High Performance Big Data and Intelligent Systems (HPBD and IS), Shenzhen, China, pp. 177–185, 2021.
  • Bronstein, M. M., Bruna, J., Cohen, T., Veličković, P., “Geometric deep learning: Grids, groups, graphs, geodesics, and gauges,” arXiv preprint, arXiv:2104.13478, 2021.
  • Cao, W., Yan, Z., He, Z., He, Z., “A comprehensive survey on geometric deep learning,” IEEE Access, 8, 35929–35949, 2020.
  • Cao, W., Zheng, C., Yan, Z., He, Z., Xie, W., “Geometric machine learning: Research and applications,” Multimedia Tools and Applications, 81(21), 30545–30597, 2022.
  • Bronstein, M. M., Bruna, J., LeCun, Y., Szlam, A., Vandergheynst, P., “Geometric Deep Learning: Going beyond Euclidean data,” IEEE Signal Processing Magazine, 34(4), 18–42, 2017.
  • Qiao, Z., Christensen, A. S., Welborn, M., Manby, F. R., Anandkumar, A., Miller III, T. F., “Informing geometric deep learning with electronic interactions to accelerate quantum chemistry,” Proceedings of the National Academy of Sciences, 119(31), 2022.
  • Atz, K., Grisoni, F., Schneider, G., “Geometric deep learning on molecular representations,” Nature Machine Intelligence, 3(12), 1023–1032, 2021.
  • Villalba-Diez, J., Molina, M., Schmidt, D., “Geometric deep lean learning: Evaluation using a Twitter social network,” Applied Sciences, 11(15), 2021.
  • Han, K., et al., “A survey on vision transformer,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1), 87–110, 2023.
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., “Attention is all you need,” Advances in Neural Information Processing Systems, 30, 2017.
  • Chitty-Venkata, K. T., Emani, M., Vishwanath, V., Somani, A. K., “Neural architecture search for transformers: A survey,” IEEE Access, 10, 108374–108412, 2022.
  • Parmar, N., et al., “Image transformer,” Proceedings of International Conference on Machine Learning (ICML), Stockholm, Sweden, pp. 4055–4064, 2018.
  • Touvron, H., Cord, M., Sablayrolles, A., Synnaeve, G., Jégou, H., “Going deeper with image transformers,” Proceedings of IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, Canada, pp. 32–42, 2021.
  • Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv preprint, arXiv:2010.11929, 2020.
  • Ras, G., Xie, N., van Gerven, M., Doran, D., “Explainable deep learning: A field guide for the uninitiated,” Journal of Artificial Intelligence Research, 73, 329–396, 2022.
  • Rasouli, P., Yu, I. C., “Explainable debugger for black-box machine learning models,” Proceedings of 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, pp. 1–10, 2021.
  • Barredo Arrieta, A., et al., “Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI,” Information Fusion, 58, 82–115, 2020.
  • Lisboa, P. J. G., Saralajew, S., Vellido, A., Villmann, T., “The coming of age of interpretable and explainable machine learning models,” Neurocomputing, 535, 25–39, 2023.
  • Cha, Y., Lee, Y., “Advanced sentence-embedding method considering token importance based on explainable artificial intelligence and text summarization model,” Neurocomputing, 564, 126987, 2024.
  • Oviedo, F., Ferres, J. L., Buonassisi, T., Butler, K. T., “Interpretable and explainable machine learning for materials science and chemistry,” Accounts of Materials Research, 3(6), 597–607, 2022.
  • Dasari, C. M., Bhukya, R., “Explainable deep neural networks for novel viral genome prediction,” Applied Intelligence, 52(3), 3002–3017, 2022.
  • Zogan, H., Razzak, I., Wang, X., Jameel, S., Xu, G., “Explainable depression detection with multi-aspect features using a hybrid deep learning model on social media,” World Wide Web, 25(1), 281–304, 2022.
  • Buolamwini, J., Gebru, T., “Gender shades: Intersectional accuracy disparities in commercial gender classification,” Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT), 2018.
  • Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A., “A survey on bias and fairness in machine learning,” ACM Computing Surveys, 54(6), 2021.
  • O’Neil, C., Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York, NY: Crown Publishing Group, 2016.
  • Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z. B., Swami, A., “Practical black-box attacks against machine learning,” Proceedings of the ACM Asia Conference on Computer and Communications Security (ASIACCS), Abu Dhabi, United Arab Emirates, pp. 506–519, 2017.
  • Brynjolfsson, E., McAfee, A., The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York, NY: W. W. Norton & Company, 2014.
  • Imam, N. M., Ibrahim, A., Tiwari, M., “Explainable Artificial Intelligence (XAI) Techniques to Enhance Transparency in Deep Learning Models,” IOSR Journal of Computer Engineering, 26(6), 29–36, 2024.
  • Bikkasani, D. C., “Navigating artificial general intelligence (AGI): Societal implications, ethical considerations, and governance strategies,” AI and Ethics, 2024.

DEEP LEARNING: EVOLUTION, INNOVATIONS, AND APPLICATIONS IN THE LAST DECADE


Abstract

Deep learning has become a popular method over the last decade owing to its superior performance in many fields, especially in healthcare. Although it is a sub-branch of machine learning, one of the main reasons researchers adopt it is that it automates the laborious feature-extraction stages of traditional machine learning methods. As technology advances each year, it has also become easier to create large datasets or to access previously published ones on the web. Researchers working with large datasets therefore prefer deep learning methods over traditional machine learning because of these advantages. The foundations of deep learning were laid with the deep belief network method, first introduced in 2006. Later, following the remarkable success in image classification of the Convolutional Neural Network (CNN) model introduced in 2012, deep learning methods spread to applications in many other disciplines. The victory of a deep reinforcement learning system over Go champion Lee Sedol in 2016, the striking ability of generative adversarial networks to create original images, the success of Siamese networks, which can learn from very little data, in signature verification and face recognition systems, the rapid rise of the artificial intelligence chatbot ChatGPT, launched in late 2022, and the ability of the related language model DALL-E to create images from text all show that deep learning is in constant innovation and development. This study aims to guide researchers who will work in this field in the future by covering the basic concepts of deep learning and the innovative, popular approaches in use.
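The automated feature extraction that the abstract credits to CNNs can be illustrated with a minimal NumPy sketch (not taken from the article): a single convolution layer followed by a ReLU activation and max pooling. Here a hand-fixed vertical-edge kernel stands in for weights that a real CNN would learn from data.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise rectified linear activation."""
    return np.maximum(x, 0.0)

def max_pool2(x):
    """2x2 max pooling: keep the strongest response in each block."""
    h, w = x.shape[0] // 2, x.shape[1] // 2
    return x[:2 * h, :2 * w].reshape(h, 2, w, 2).max(axis=(1, 3))

# A 6x6 toy image with a vertical edge: left half dark, right half bright.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A Sobel-like vertical-edge kernel; in a trained CNN these weights
# would be learned automatically rather than designed by hand.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

feature_map = max_pool2(relu(conv2d(image, kernel)))
print(feature_map.shape)  # (2, 2): strong responses only where the edge lies
```

The pooled feature map responds strongly wherever the filter's pattern (a vertical edge) appears, which is the sense in which convolutional layers replace hand-crafted feature extraction.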

  • Choudhary, S., Saurav, S., Saini, R., Singh, S., “Capsule networks for computer vision applications: A comprehensive review,” Applied Intelligence, 53, 21799–21826, 2023.
  • Li, J., et al., “A survey on capsule networks: Evolution, application, and future development,” Proceedings of 2021 International Conference on High Performance Big Data and Intelligent Systems (HPBD and IS), Shenzhen, China, pp. 177–185, 2021.
  • Bronstein, M. M., Bruna, J., Cohen, T., Veličković, P., “Geometric deep learning: Grids, groups, graphs, geodesics, and gauges,” arXiv preprint, arXiv:2104.13478, 2021.
  • Cao, W., Yan, Z., He, Z., He, Z., “A comprehensive survey on geometric deep learning,” IEEE Access, 8, 35929–35949, 2020.
  • Cao, W., Zheng, C., Yan, Z., He, Z., Xie, W., “Geometric machine learning: Research and applications,” Multimedia Tools and Applications, 81(21), 30545–30597, 2022.
  • Bronstein, M. M., Bruna, J., LeCun, Y., Szlam, A., Vandergheynst, P., “Geometric Deep Learning: Going beyond Euclidean data,” IEEE Signal Processing Magazine, 34(4), 18–42, 2017.
  • Qiao, Z., Christensen, A. S., Welborn, M., Manby, F. R., Anandkumar, A., Miller III, T. F., “Informing geometric deep learning with electronic interactions to accelerate quantum chemistry,” Proceedings of the National Academy of Sciences, 119(31), 2022.
  • Atz, K., Grisoni, F., Schneider, G., “Geometric deep learning on molecular representations,” Nature Machine Intelligence, 3(12), 1023–1032, 2021.
  • Villalba-Diez, J., Molina, M., Schmidt, D., “Geometric deep lean learning: Evaluation using a Twitter social network,” Applied Sciences, 11(15), 2021.
  • Han, K., et al., “A survey on vision transformer,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(1), 87–110, 2023.
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., “Attention is all you need,” Advances in Neural Information Processing Systems, 30, 2017.
  • Chitty-Venkata, K. T., Emani, M., Vishwanath, V., Somani, A. K., “Neural architecture search for transformers: A survey,” IEEE Access, 10, 108374–108412, 2022.
  • Parmar, N., et al., “Image transformer,” Proceedings of International Conference on Machine Learning (ICML), Stockholm, Sweden, pp. 4055–4064, 2018.
  • Touvron, H., Cord, M., Sablayrolles, A., Synnaeve, G., Jégou, H., “Going deeper with image transformers,” Proceedings of IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, Canada, pp. 32–42, 2021.
  • Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., “An image is worth 16x16 words: Transformers for image recognition at scale,” arXiv preprint, arXiv:2010.11929, 2020.
  • Ras, G., Xie, N., van Gerven, M., Doran, D., “Explainable deep learning: A field guide for the uninitiated,” Journal of Artificial Intelligence Research, 73, 329–396, 2022.
  • Rasouli, P., Yu, I. C., “Explainable debugger for black-box machine learning models,” Proceedings of 2021 International Joint Conference on Neural Networks (IJCNN), Shenzhen, China, pp. 1–10, 2021.
  • Barredo Arrieta, A., et al., “Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI,” Information Fusion, 58, 82–115, 2020.
  • Lisboa, P. J. G., Saralajew, S., Vellido, A., Villmann, T., “The coming of age of interpretable and explainable machine learning models,” Neurocomputing, 535, 25–39, 2023.
  • Cha, Y., Lee, Y., “Advanced sentence-embedding method considering token importance based on explainable artificial intelligence and text summarization model,” Neurocomputing, 564, 126987, 2024.
  • Oviedo, F., Ferres, J. L., Buonassisi, T., Butler, K. T., “Interpretable and explainable machine learning for materials science and chemistry,” Accounts of Materials Research, 3(6), 597–607, 2022.
  • Dasari, C. M., Bhukya, R., “Explainable deep neural networks for novel viral genome prediction,” Applied Intelligence, 52(3), 3002–3017, 2022.
  • Zogan, H., Razzak, I., Wang, X., Jameel, S., Xu, G., “Explainable depression detection with multi-aspect features using a hybrid deep learning model on social media,” World Wide Web, 25(1), 281–304, 2022.
  • Buolamwini, J., Gebru, T., “Gender shades: Intersectional accuracy disparities in commercial gender classification,” Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT), 2018.
  • Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A., “A survey on bias and fairness in machine learning,” ACM Computing Surveys, 54(6), 2021.
  • O’Neil, C., Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York, NY: Crown Publishing Group, 2016.
  • Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z. B., Swami, A., “Practical black-box attacks against machine learning,” Proceedings of the ACM Asia Conference on Computer and Communications Security (ASIACCS), Abu Dhabi, United Arab Emirates, pp. 506–519, 2017.
  • Brynjolfsson, E., McAfee, A., The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. New York, NY: W. W. Norton & Company, 2014.
  • Imam, N. M., Ibrahim, A., Tiwari, M., “Explainable Artificial Intelligence (XAI) Techniques to Enhance Transparency in Deep Learning Models,” IOSR Journal of Computer Engineering, 26(6), 29–36, 2024.
  • Bikkasani, D. C., “Navigating artificial general intelligence (AGI): Societal implications, ethical considerations, and governance strategies,” AI and Ethics, 2024.
There are 105 citations in total.

Details

Primary Language English
Subjects Communications Engineering (Other)
Journal Section Review
Authors

Mihriban Günay 0000-0002-0932-1981

Özal Yıldırım 0000-0001-5375-3012

Yakup Demir 0000-0001-9530-5824

Early Pub Date June 26, 2025
Publication Date June 29, 2025
Submission Date February 16, 2025
Acceptance Date May 26, 2025
Published in Issue Year 2025 Volume: 11 Issue: 1

Cite

IEEE M. Günay, Ö. Yıldırım, and Y. Demir, “DEEP LEARNING: EVOLUTION, INNOVATIONS, AND APPLICATIONS IN THE LAST DECADE”, MEJS, vol. 11, no. 1, pp. 96–115, 2025, doi: 10.51477/mejs.1640908.

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
