Research Article

Measuring The Robustness of AI Models Against Adversarial Attacks: Thyroid Ultrasound Images Case Study

Year 2022, Volume 2, Issue 2, pp. 42-47, 27.02.2023

Abstract

The healthcare industry is looking for ways to use artificial intelligence effectively. Decision support systems use AI (Artificial Intelligence) models to diagnose cancer from radiology images. The models in such implementations are not perfect, and attackers can use techniques that cause them to make wrong predictions. It is therefore necessary to measure the robustness of these models under adversarial attack. Studies in the literature focus on models trained on images obtained from other body regions (e.g., lung X-ray and skin dermoscopy images) and other imaging techniques. This study focuses on thyroid ultrasound images as a use case. We trained the VGG19, Xception, ResNet50V2, and EfficientNetB2 CNN models on these images, and the aim is to make these models produce false predictions. We used the FGSM, BIM, and PGD techniques to generate adversarial images. The attacks resulted in a misprediction rate of 99%. Future work will focus on making these models more robust through adversarial training.
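The attacks named above share one idea: nudge each pixel a small step in the direction that increases the model's loss. The sketch below is a minimal NumPy illustration of that idea, not the paper's actual pipeline (which used the listed CNNs; the references also point to IBM's Adversarial Robustness Toolbox). The toy linear classifier and its weights are hypothetical, chosen only so the loss gradient can be written analytically.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linear classifier standing in for a CNN (hypothetical weights).
w = np.array([0.9, -1.2, 0.4])
b = 0.1

def grad_wrt_input(x, y):
    """Gradient of the binary cross-entropy loss w.r.t. the input x."""
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, y, eps):
    """FGSM: one step of size eps along the sign of the loss gradient."""
    return np.clip(x + eps * np.sign(grad_wrt_input(x, y)), 0.0, 1.0)

def bim(x, y, eps, alpha, steps):
    """BIM: iterated FGSM with small step alpha, kept inside the eps-ball.
    PGD follows the same loop but starts from a random point in the ball."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_wrt_input(x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within eps of x
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay a valid image
    return x_adv

x = np.array([0.6, 0.2, 0.5])   # a "clean" input with pixels in [0, 1]
y = 1                            # its true label
print(sigmoid(w @ x + b))        # confidence on the clean input
x_adv = bim(x, y, eps=0.1, alpha=0.03, steps=10)
print(sigmoid(w @ x_adv + b))    # lower confidence after the attack
```

Because each pixel moves by at most eps, the adversarial image stays visually close to the original while the model's confidence in the true class drops; this is the property the study measures on thyroid ultrasound images.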

Supporting Institution

Mugla Sitki Kocman University

References

  • A. Hosny, C. Parmar, J. Quackenbush, L. H. Schwartz, and H. J. W. L. Aerts, “Artificial intelligence in radiology,” Nature Reviews Cancer, pp. 500–510, Aug. 2018.
  • G. S. Tandel, M. Biswas, O. G. Kakde, A. Tiwari, H. S. Suri, M. Turk et al., “A review on a deep learning perspective in brain cancer classification,” Cancers, vol. 11, no. 1, p. 111, 2019.
  • S. G. Finlayson, J. D. Bowers, J. Ito, J. L. Zittrain, A. L. Beam, and I. S. Kohane, “Adversarial attacks on medical machine learning,” Science, vol. 363, no. 6433, pp. 1287–1289, 2019.
  • B. Ehteshami Bejnordi, M. Veta, P. J. van Diest, B. van Ginneken, N. Karssemeijer, G. Litjens et al., “Diagnostic assessment of deep learning algorithms for detection of lymph node metastases in women with breast cancer,” JAMA, vol. 318, no. 22, p. 2199, 2017.
  • F. Abdolali, A. Shahroudnejad, S. Amiri, A. Rakkunedeth Hareendranathan, J. L. Jaremko et al. “A systematic review on the role of artificial intelligence in sonographic diagnosis of thyroid cancer: Past, present and future,” Frontiers in Biomedical Technologies, 2021.
  • G. Bortsova, C. González-Gonzalo, S. C. Wetstein, F. Dubost, I. Katramados, L. Hogeweg et al. “Adversarial attack vulnerability of medical image analysis systems: Unexplored factors,” Medical Image Analysis, vol. 73, p. 102141, 2021.
  • A. Vatian, N. Gusarova, N. Dobrenko, S. Dudorov, N. Nigmatullin, A. Shalyto et al. “Impact of adversarial examples on the efficiency of interpretation and use of information from high-tech medical images,” 2019 24th Conference of Open Innovations Association (FRUCT), 2019.
  • I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and Harnessing Adversarial Examples.” 2014 [Online]. Available: http://arxiv.org/abs/1412.6572
  • A. Kurakin, I. Goodfellow, and S. Bengio, “Adversarial examples in the physical world.” 2016 [Online]. Available: http://arxiv.org/abs/1607.02533
  • A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu, “Towards Deep Learning Models Resistant to Adversarial Attacks,” 2017 [Online]. Available: http://arxiv.org/abs/1706.06083
  • N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, “The limitations of deep learning in adversarial settings,” 2016 IEEE European Symposium on Security and Privacy (EuroS&P), 2016.
  • S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, “DeepFool: A simple and accurate method to fool Deep Neural Networks,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • N. Carlini and D. Wagner, “Towards evaluating the robustness of neural networks,” 2017 IEEE Symposium on Security and Privacy (SP), 2017.
  • J. Su, D. V. Vargas, and K. Sakurai, “One pixel attack for fooling deep neural networks,” 2017 [Online]. Available: http://arxiv.org/abs/1710.08864
  • T. B. Brown, D. Mané, A. Roy, M. Abadi, and J. Gilmer, “Adversarial Patch,” 2017 [Online]. Available: http://arxiv.org/abs/1712.09665
  • J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet: A large-scale hierarchical image database,” 2009 IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255, doi: 10.1109/CVPR.2009.5206848.
  • D. C. Ciresan, U. Meier, J. Masci, L. M. Gambardella, and J. Schmidhuber, “High-Performance Neural Networks for Visual Object Classification,” CoRR, vol. abs/1102.0183, 2011 [Online]. Available: http://dblp.uni-trier.de/db/journals/corr/corr1102.html#abs-1102-0183
  • A. Krizhevsky, “Learning Multiple Layers of Features from Tiny Images,” pp. 32–33, 2009 [Online]. Available: https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf
  • S. Albawi, T. A. Mohammed, and S. Al-Zawi, “Understanding of a convolutional neural network,” 2017 International Conference on Engineering and Technology (ICET), 2017, pp. 1–6, doi: 10.1109/ICEngTechnol.2017.8308186.
  • K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition.” 2014 [Online]. Available: http://arxiv.org/abs/1409.1556
  • K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” Computer Vision – ECCV 2016, pp. 630–645, 2016.
  • F. Chollet, “Xception: Deep learning with depthwise separable convolutions,” 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • M. Tan and Q. V. Le, “EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks,” in ICML, 2019, vol. 97, pp. 6105–6114 [Online]. Available: http://dblp.uni-trier.de/db/conf/icml/icml2019.html
  • M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar, “Can machine learning be secure?,” Proceedings of the 2006 ACM Symposium on Information, computer and communications security - ASIACCS '06, 2006.
  • M. Barreno, B. Nelson, A. D. Joseph, and J. D. Tygar, “The security of Machine Learning,” Machine Learning, vol. 81, no. 2, pp. 121–148, 2010.
  • Q. Liu, P. Li, W. Zhao, W. Cai, S. Yu, and V. C. Leung, “A survey on security threats and defensive techniques of Machine Learning: A data driven view,” IEEE Access, vol. 6, pp. 12103–12117, 2018.
  • D. T. Nguyen, J. K. Kang, T. D. Pham, G. Batchuluun, and K. R. Park, “Ultrasound image-based diagnosis of malignant thyroid nodule using artificial intelligence,” Sensors, vol. 20, no. 7, p. 1822, 2020.
  • C. Szegedy et al., “Intriguing properties of neural networks,” 2013 [Online]. Available: http://arxiv.org/abs/1312.6199
  • T. Zen, “Thyroid for pretraining,” Kaggle, 27-Aug-2021. [Online]. Available: https://www.kaggle.com/tingzen/thyroid-for-pretraining. [Accessed: 08-Nov-2022].
  • IBM, “Adversarial Robustness Toolbox,” Adversarial Robustness Toolbox 1.12.1 documentation. [Online]. Available: https://adversarial-robustness-toolbox.readthedocs.io/en/latest/index.html. [Accessed: 08-Nov-2022].
There are 30 references in total.

Details

Primary Language: English
Subjects: Artificial Intelligence
Section: Research Articles
Authors

Mustafa Ceyhan 0000-0003-3268-6898

Enis Karaarslan 0000-0002-3595-8783

Publication Date: 27 February 2023
Published Issue: Year 2022, Volume 2, Issue 2

Cite

APA Ceyhan, M., & Karaarslan, E. (2023). Measuring The Robustness of AI Models Against Adversarial Attacks: Thyroid Ultrasound Images Case Study. Journal of Emerging Computer Technologies, 2(2), 42-47.

Journal of Emerging Computer Technologies
is indexed and abstracted by
Index Copernicus, ROAD, Academia.edu, Google Scholar, Asos Index, Academic Resource Index (Researchbib), OpenAIRE, IAD, Cosmos, EuroPub, Academindex

Publisher
Izmir Academy Association
www.izmirakademi.org