Short Communication

Overcoming Deep Learning Based Antivirus Models Using Open Source Adversarial Attack Libraries

Year 2021, Volume 3, Issue 1, 66-71, 30.04.2021
https://doi.org/10.47769/izufbed.879611

Abstract

With recent advances in technology and the growing share of the population with internet access, malware and its derivatives have become one of the most popular attack vectors among cybercriminals. Through malware, cybercriminals can take control of individual or corporate systems, and such attacks usually aim to maintain persistence on the compromised system for a long time. Cybersecurity companies have developed multiple ways to detect these attacks, the most recent being artificial intelligence-based detection engines; however, these engines cannot prevent attacks with 100% success. In this work, it is demonstrated that a neural network-based malware detection engine can be bypassed by various adversarial attacks prepared specifically for that model.
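To make the attack setting concrete, the following sketch shows how an open-source adversarial attack library can perturb the feature vector that a deep learning detector scores. It is a minimal, hypothetical illustration in the spirit of the Fast Gradient Sign Method and the Adversarial Robustness Toolbox cited in the references, not the authors' actual pipeline; the feature size, the toy Keras model, and the perturbation budget are assumptions made only for this example.

```python
# Hypothetical sketch: evading a toy deep-learning malware classifier with FGSM
# via the open-source Adversarial Robustness Toolbox (ART). Feature size, model
# architecture, and eps are illustrative assumptions, not the paper's settings.
import numpy as np
import tensorflow as tf
from art.estimators.classification import TensorFlowV2Classifier
from art.attacks.evasion import FastGradientMethod

N_FEATURES = 2381  # assumed length of a static PE feature vector

# Stand-in for a trained detection model (benign vs. malicious).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(512, activation="relu", input_shape=(N_FEATURES,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
loss = tf.keras.losses.CategoricalCrossentropy()

classifier = TensorFlowV2Classifier(
    model=model,
    nb_classes=2,
    input_shape=(N_FEATURES,),
    loss_object=loss,
    clip_values=(0.0, 1.0),
)

# FGSM perturbs the features in the gradient direction that flips the verdict.
attack = FastGradientMethod(estimator=classifier, eps=0.1)

x_malware = np.random.rand(1, N_FEATURES).astype(np.float32)  # placeholder sample
x_adv = attack.generate(x=x_malware)

print("original verdict:   ", classifier.predict(x_malware).argmax(axis=1))
print("adversarial verdict:", classifier.predict(x_adv).argmax(axis=1))
```

Note that such a perturbed feature vector is only half of a practical evasion: it still has to be mapped back to a working executable, which is the harder step against a real antivirus engine.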

References

  • Rey, W., Xu, A. (2018). Maximal jacobian-based saliency map attack. arXiv preprint arXiv:1808.07945.
  • Sukanta, D. et al. (2019). EvadePDF: Towards Evading Machine Learning Based PDF Malware Classifiers. International Conference on Security and Privacy. Springer, Singapore.
  • Srndic, N., Laskov, P. (2013). Detection of Malicious Pdf Files Based on Hierarchical Document Structure. In 20th Network and Distributed System Security Symposium (NDSS).
  • Weilin, X., Yanjun, Q., Evans D. (2016). Automatically evading classifiers. Proceedings of the 2016 network and distributed systems symposium. Vol. 10.
  • Weiwei, H., Tan, Y. (2017). Generating adversarial malware examples for black-box attacks based on gan. arXiv preprint arXiv:1702.05983.
  • Naveed, A., Mian, A. (2018). Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access 6: 14410-14430.
  • Gamaleldin, E. et al. (2018). Adversarial examples that fool both computer vision and time-limited humans. Advances in Neural Information Processing Systems.
  • Kevin, E. et al. (2018) Robust physical-world attacks on deep learning visual classification. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
  • Xiaoyong, Y. et al. (2019). Adversarial examples: Attacks and defenses for deep learning. IEEE transactions on neural networks and learning systems 30.9: 2805-2824.
  • Krizhevsky, A., Sutskever, I., Hinton, G. E. (2012), Imagenet classification with deep convolutional neural networks, in Advances in neural information processing systems, pp. 1097–1105.
  • Simonyan, K., Zisserman, A. (2014), Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556.
  • Redmon, J., Farhadi, A. (2016), Yolo9000: Better, faster, stronger, arXiv preprint arXiv:1612.08242.
  • Ren, S., He, K., Girshick, R., Sun, J. (2015), Faster r-cnn: Towards realtime object detection with region proposal networks, in Advances in neural information processing systems, pp. 91–99.
  • Saon, G., Kuo, J., Rennie, S., Picheny, M. (2015), The ibm 2015 english conversational telephone speech recognition system, arXiv preprint arXiv:1505.05899.
  • Sutskever, I., Vinyals O., Le, V. (2014), Sequence to sequence learning with neural networks, in Advances in neural information processing systems, pp. 3104–3112.
  • Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., Senior, A., Kavukcuoglu, K. (2016). WaveNet: A generative model for raw audio, arXiv preprint arXiv:1609.03499.
  • Yann, L., Bengio, Y. (1995). Convolutional networks for images, speech, and time series. The handbook of brain theory and neural networks 3361.10.
  • Alessandro, G. et al. (2013). Fast image scanning with deep max-pooling convolutional neural networks. 2013 IEEE International Conference on Image Processing. IEEE.
  • Nicolae, M.-I. et al. (2018). Adversarial Robustness Toolbox v1.0.0. arXiv preprint arXiv:1807.01069.
  • Goodfellow, I., Shlens, J., Szegedy, C. (2015). Explaining and harnessing adversarial examples. In International Conference on Learning Representations (ICLR).
  • Brown, T., Mane, D., Roy, A., Abadi, M., Gilmer, J. (2017). Adversarial patch. CoRR, abs/1712.09665.
  • Brendel, W., Rauber, J., Bethge, M. (2017). Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. CoRR, abs/1712.04248.
  • Carlini, N., Wagner, D. (2017). Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy.
  • Moosavi-Dezfooli, S.-M., Fawzi, A., Frossard, P. (2015). DeepFool: a simple and accurate method to fool deep neural networks. CoRR, abs/1511.04599.
  • Chen, P., Sharma, Y., Zhang, H., Yi, J., Hsieh, C.-J. (2017). Elastic-net attacks to deep neural networks via adversarial examples. CoRR, abs/1709.04114.
  • Nathan, I. et al. (2018). Adversarial attacks for optical flow-based action recognition classifiers. arXiv preprint arXiv:1811.11875.
  • Chen, J., Jordan, M. I., Wainwright, M. J. (2020). HopSkipJumpAttack: A query-efficient decision-based attack. 2020 IEEE Symposium on Security and Privacy (SP). IEEE.
  • Alexey, K., Goodfellow, I., Bengio, S. (2016). Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.
  • Madry, A. et al. (2017). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
  • Uyeong, J., Wu, X., Jha, S. (2017). Objective metrics and gradient descent algorithms for adversarial examples in machine learning. Proceedings of the 33rd Annual Computer Security Applications Conference.
  • Nicolas, P. et al. (2016). The limitations of deep learning in adversarial settings. 2016 IEEE European symposium on security and privacy (EuroS&P). IEEE.
  • Amin, G., Shafahi, A., Goldstein, T (2020). Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates. arXiv preprint arXiv:2003.08937.
  • Logan, E. et al. (2019) Exploring the landscape of spatial robustness. International Conference on Machine Learning. PMLR.
  • Andriushchenko, M. et al. (2020). Square attack: a query-efficient black-box adversarial attack via random search. European Conference on Computer Vision. Springer, Cham.
  • Moosavi-Dezfooli, S.-M. et al. (2017). Universal adversarial perturbations. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.

Bypassing Deep Learning Based Antivirus Models Using Open Source Adversarial Attack Libraries

Abstract

As technology and the internet advance and more people make use of them, users and their systems become targets for cyber attackers. One of the most effective attack methods used by cyber attackers is malware. Through malware, systems belonging to individuals and institutions can be taken over, and by causing further infections, larger-scale attacks can be carried out. Against these attacks, the artificial intelligence-based next-generation antivirus software developed by cybersecurity companies is observed not to be one hundred percent successful. In this study, it was observed that such artificial intelligence-based next-generation security products could be successfully bypassed by means of adversarial examples produced through adversarial attacks applied to malware and malicious offensive tools.
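Because a deployed antivirus engine usually exposes only its verdict, a query-only (black-box) attack such as the referenced HopSkipJumpAttack is often the more realistic scenario. The sketch below is again a hypothetical illustration rather than the study's exact experiment: it wraps an invented placeholder scoring function as a black-box classifier and lets the attack search for an evasive input using predictions alone; the feature length and the scoring rule are assumptions made for the example.

```python
# Hypothetical sketch: query-only evasion of a black-box detector with ART's
# HopSkipJump attack. The verdict function and feature size are placeholders.
import numpy as np
from art.estimators.classification import BlackBoxClassifier
from art.attacks.evasion import HopSkipJump

N_FEATURES = 256  # assumed length of a normalized static feature vector

def verdict(x: np.ndarray) -> np.ndarray:
    """Placeholder detector: returns [benign, malicious] scores per sample."""
    score = x.mean(axis=1, keepdims=True)   # stand-in "maliciousness" score
    return np.hstack([1.0 - score, score])  # shape (n, 2)

detector = BlackBoxClassifier(
    predict_fn=verdict,
    input_shape=(N_FEATURES,),
    nb_classes=2,
    clip_values=(0.0, 1.0),
)

# Decision-based attack: only the detector's outputs are used, no gradients.
attack = HopSkipJump(classifier=detector, targeted=False, max_iter=10, max_eval=1000)

x_malware = np.full((1, N_FEATURES), 0.9, dtype=np.float32)  # placeholder sample
x_adv = attack.generate(x=x_malware)

print("before:", detector.predict(x_malware).argmax(axis=1))
print("after: ", detector.predict(x_adv).argmax(axis=1))
```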


Details

Primary Language: Turkish
Subjects: Engineering
Section: Articles
Authors

Fatih Erdoğan (ORCID: 0000-0002-2075-1413)

Mert Can Alıcı (ORCID: 0000-0002-4553-5872)

Publication Date: April 30, 2021
Submission Date: February 13, 2021
Acceptance Date: February 16, 2021
Published in Issue: Year 2021, Volume 3, Issue 1

Cite

APA Erdoğan, F., & Alıcı, M. C. (2021). Derin Öğrenme Tabanlı Antivirüs Modellerinin Açık Kaynak Kodlu Çekişmeli Atak Kütüphaneleri Kullanılarak Atlatılması. İstanbul Sabahattin Zaim Üniversitesi Fen Bilimleri Enstitüsü Dergisi, 3(1), 66-71. https://doi.org/10.47769/izufbed.879611


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.