Research Article


Bayesian-Optimized Ensemble Learning for Multi-Class Trading Signal Classification

Year 2026, Volume: 24, Issue: 59, 271 - 298, 24.01.2026
https://doi.org/10.35408/comuybd.1667062

Abstract

This study tests a practical machine-learning pipeline that predicts daily Buy/Hold/Sell trading signals for Apple (AAPL) and assesses whether “good classification” also translates into good trading returns after costs. The dataset is built from synchronized daily market series and AAPL-based technical indicators. The target signal is generated by a transparent rule using MACD relative to its signal line together with an RSI filter, so the task becomes a supervised three-class classification problem. Four tree-based ensemble models are compared: Random Forest, LightGBM, XGBoost, and AdaBoost. To avoid fragile, hand-picked settings, each model is tuned with a systematic search procedure. Because the raw labels are strongly imbalanced, SMOTE is applied during training, while all performance and economic tests are run on the original time-ordered test period to keep the evaluation realistic. The results show a clear ranking. XGBoost delivers the best overall classification quality (Accuracy 0.974, Precision 0.975, Recall 0.974, F1 0.974). LightGBM and Random Forest follow at similarly high levels, while AdaBoost is much weaker (Accuracy 0.668, F1 0.536) despite relatively higher precision (0.779), meaning its predictions are not well balanced across classes. Confusion-matrix evidence supports this picture: the strong models classify Buy and Sell almost perfectly, and most remaining errors come from the Hold class. AdaBoost, however, fails to detect Hold and instead generates many Buy/Sell signals on Hold days. Economic backtests run with realistic transaction costs and a fixed initial capital confirm the same pattern. Trading on the predicted signals yields +49.1% for XGBoost, +46.1% for LightGBM, and +44.9% for Random Forest. AdaBoost loses money (−11.3%), with a worse risk profile (Sharpe −0.10, maximum drawdown 29.0%) and heavier trading (about 68 trades and therefore higher total costs). Overall, under this signal design, modern gradient-boosting ensembles prove both statistically stronger and economically more credible.
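The abstract states the labeling rule only at a high level (MACD relative to its signal line, filtered by RSI). A minimal Python/pandas sketch of what such a rule can look like is given below; the function name make_signal, the 12/26/9 MACD and 14-day RSI windows, and the 30/70 RSI cut-offs are illustrative assumptions, not the paper's exact specification.

```python
import pandas as pd

def make_signal(close: pd.Series) -> pd.Series:
    """Three-class Buy/Hold/Sell labels from MACD vs. its signal line plus an RSI filter.

    `close` is a daily closing-price series (e.g., AAPL, as downloaded with the
    yfinance package cited in the references). Indicator windows (12/26/9 MACD,
    14-day RSI) and the 30/70 RSI cut-offs are common textbook defaults, not
    values confirmed by the paper.
    """
    # MACD: fast EMA minus slow EMA; signal line: EMA of the MACD itself.
    ema_fast = close.ewm(span=12, adjust=False).mean()
    ema_slow = close.ewm(span=26, adjust=False).mean()
    macd = ema_fast - ema_slow
    signal_line = macd.ewm(span=9, adjust=False).mean()

    # Wilder-style RSI via exponential smoothing of average gains and losses.
    delta = close.diff()
    avg_gain = delta.clip(lower=0).ewm(alpha=1 / 14, adjust=False).mean()
    avg_loss = (-delta.clip(upper=0)).ewm(alpha=1 / 14, adjust=False).mean()
    rsi = 100 - 100 / (1 + avg_gain / avg_loss)

    # Buy when MACD is above its signal line and RSI is not yet overbought;
    # Sell when MACD is below and RSI is not yet oversold; otherwise Hold.
    label = pd.Series("Hold", index=close.index)
    label[(macd > signal_line) & (rsi < 70)] = "Buy"
    label[(macd < signal_line) & (rsi > 30)] = "Sell"
    return label
```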

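The economic evaluation summarized above (total return, Sharpe ratio, maximum drawdown, and trade count under transaction costs) can likewise be sketched as follows, assuming predicted labels are acted on one day after they are issued and reusing the string labels from the sketch above. The backtest helper, the 0.1% cost rate, the 10,000-unit starting capital, and the long/flat/short position convention are hypothetical; the abstract does not disclose the paper's exact backtest settings.

```python
import numpy as np
import pandas as pd

def backtest(close: pd.Series, signal: pd.Series,
             cost_rate: float = 0.001, capital: float = 10_000.0) -> dict:
    """Cost-aware backtest of a Buy/Hold/Sell signal series.

    Maps Buy -> long (+1), Hold -> flat (0), Sell -> short (-1), trades on the
    previous day's label, and charges a proportional cost on every position
    change. All parameter values here are illustrative, not the paper's.
    """
    position = signal.map({"Buy": 1, "Hold": 0, "Sell": -1}).shift(1).fillna(0)
    daily_ret = close.pct_change().fillna(0)
    turnover = position.diff().abs().fillna(0)   # size of each position change
    strat_ret = position * daily_ret - turnover * cost_rate

    equity = capital * (1 + strat_ret).cumprod()
    drawdown = 1 - equity / equity.cummax()
    return {
        "total_return": equity.iloc[-1] / capital - 1,
        "sharpe": np.sqrt(252) * strat_ret.mean() / strat_ret.std(),
        "max_drawdown": drawdown.max(),
        "n_trades": int((turnover > 0).sum()),   # days with a position change
    }
```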
References

  • Akiba, T., Sano, S., Yanase, T., Ohta, T., and Koyama, M. (2019). Optuna: A next-generation hyperparameter optimization framework. Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’19), 2623–2631. https://doi.org/10.1145/3292500.3330701
  • Appel, G. (1979). The Moving Average Convergence-Divergence Trading Method. Signalert Corporation.
  • Aroussi, R. (2024). yfinance (Version 0.1.70) [Software]. Zenodo. https://doi.org/10.5281/zenodo.13340981
  • Bollinger, J. (2002). Bollinger on Bollinger Bands. McGraw-Hill.
  • Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32. https://doi.org/10.1023/A:1010933404324
  • Chawla, N. V., Bowyer, K. W., Hall, L. O., and Kegelmeyer, W. P. (2002). SMOTE: Synthetic Minority Over-sampling Technique. Journal of Artificial Intelligence Research, 16, 321–357. https://doi.org/10.1613/jair.953
  • Chen, T., and Guestrin, C. (2016). XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794. San Francisco, CA, United States. https://doi.org/10.1145/2939672.2939785
  • Cheng, L., Huang, Y., Hsieh, M., and Wu, M. (2021). A novel trading strategy framework based on reinforcement deep learning for financial market predictions. Mathematics, 9(23), 3094. https://doi.org/10.3390/math9233094
  • Freund, Y., and Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1), 119–139. https://doi.org/10.1006/jcss.1997.1504
  • Friedman, J. H. (2001). Greedy function approximation: A gradient boosting machine. The Annals of Statistics, 29(5), 1189–1232. https://doi.org/10.1214/aos/1013203451
  • Gupta, V., and Kumar, E. (2023). H3O-LGBM: Hybrid Harris Hawk Optimization-based Light Gradient Boosting Machine model for real-time trading. Artificial Intelligence Review, 56(8), 8697–8720. https://doi.org/10.1007/s10462-022-10323-0
  • Ji, G., Yu, J., Hu, K., Xie, J., and Ji, X. (2022). An adaptive feature selection schema using improved technical indicators for predicting stock price movements. Expert Systems with Applications, 200, 116941. https://doi.org/10.1016/j.eswa.2022.116941
  • Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., and Liu, T.-Y. (2017). LightGBM: A highly efficient gradient boosting decision tree. Proceedings of the 31st International Conference on Neural Information Processing Systems, 3149–3157. Long Beach, CA, United States.
  • Lemaître, G., Nogueira, F., and Aridas, C. K. (2017). imbalanced-learn: A Python toolbox to tackle the curse of imbalanced datasets in machine learning. Journal of Machine Learning Research, 18(17), 1–5. Retrieved December 8, 2024, from https://imbalanced-learn.org/stable/
  • Li, Z., and Tam, V. (2018). A machine learning view on momentum and reversal trading. Algorithms, 11(11), 170. https://doi.org/10.3390/a11110170
  • Lin, H., Chen, C., Huang, G., and Jafari, A. (2021). Stock price prediction using generative adversarial networks. Journal of Computer Science, 17(3), 188–196. https://doi.org/10.3844/jcssp.2021.188.196
  • Saifan, R., Sharif, K., Abu-Ghazaleh, M., and Abdel-Majeed, M. (2020). Investigating algorithmic stock market trading using ensemble machine learning methods. Informatica, 44(3). https://doi.org/10.31449/inf.v44i3.2904
  • Saud, A., and Shakya, S. (2022). Directional movement index-based machine learning strategy for predicting stock trading signals. International Journal of Electrical and Computer Engineering (IJECE), 12(4), 4185–4194. https://doi.org/10.11591/ijece.v12i4.pp4185-4194
  • Sebastião, H., and Godinho, P. (2021). Forecasting and trading cryptocurrencies with machine learning under changing market conditions. Financial Innovation, 7(1). https://doi.org/10.1186/s40854-020-00217-x
  • Wang, Q., Kang, K., Zhihan, Z., and Cao, D. (2021). Application of LSTM and Conv1D LSTM network in stock forecasting model. Artificial Intelligence Advances, 3(1), 36–43. https://doi.org/10.30564/aia.v3i1.2790
  • Wang, Y., and Yan, K. (2023). Application of traditional machine learning models for quantitative trading of Bitcoin. Artificial Intelligence Evolution, 4(1), 34–48. https://doi.org/10.37256/aie.4120232226
  • Wilder, J. W., Jr. (1978). New Concepts in Technical Trading Systems. Trend Research.

Details

Primary Language: English
Subjects: Financial Economics
Section: Research Article
Authors

Cemal Öztürk 0000-0003-3850-7416

Submission Date: March 27, 2025
Acceptance Date: January 18, 2026
Publication Date: January 24, 2026
Published in Issue: Year 2026, Volume: 24, Issue: 59

Cite

APA Öztürk, C. (2026). Bayesian-Optimized Ensemble Learning for Multi-Class Trading Signal Classification. Yönetim Bilimleri Dergisi, 24(59), 271-298. https://doi.org/10.35408/comuybd.1667062
