Research Article
Year 2018, Volume: 3 Issue: 2, 1 - 15, 01.01.2019


ON THE USE OF DEEP LEARNING METHODS ON MEDICAL IMAGES


Abstract

Deep learning algorithms have recently been reported to be successful in the analysis of images and speech. These algorithms, in particular the Convolutional Neural Network (CNN), have also proven highly promising on images produced by medical imaging technologies. Using deep learning algorithms, researchers have accomplished several tasks in this field, including image classification, object and lesion detection, and segmentation of different tissues in a medical image. Researchers have mostly applied the deep learning approach to medical images of the brain, retina, lungs, digital pathology, breast, heart, abdomen, and musculoskeletal system. This study reviews the literature of recent years that applied deep learning algorithms to medical images in order to present a general picture of the field.
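
For readers who want a concrete picture of the CNN approach discussed above, the short Python/PyTorch sketch below outlines a minimal convolutional classifier of the general kind the cited studies build on. It is an illustrative assumption only: the 64x64 grayscale input, the layer sizes, and the two-class output are hypothetical choices, not the architecture of any particular work reviewed here.

import torch
import torch.nn as nn

class SimplePatchCNN(nn.Module):
    """Minimal, hypothetical CNN for 2D image patches (illustration only)."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Two convolution + pooling stages extract local image features.
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1-channel (grayscale) input
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
        )
        # A fully connected head maps the pooled features to class scores.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),
            nn.ReLU(),
            nn.Dropout(0.5),                              # dropout regularization, cf. [22]
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example forward pass on a batch of eight hypothetical 64x64 grayscale patches.
model = SimplePatchCNN(num_classes=2)
logits = model(torch.randn(8, 1, 64, 64))  # logits has shape (8, 2)

Such a network would typically be trained with a cross-entropy loss on labeled images; the detection and segmentation tasks mentioned above use related but more specialized architectures, such as the U-Net of [50].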


References

  • [1] Haugeland, J. (1989). Artificial intelligence: The very idea. MIT press.
  • [2] Fukushima, K., Miyake, S., & Ito, T. (1983). Neocognitron: A neural network model for a mechanism of visual pattern recognition. IEEE transactions on systems, man, and cybernetics, (5), 826-834.
  • [3] Lo, S. C., Lou, S. L., Lin, J. S., Freedman, M. T., Chien, M. V., & Mun, S. K. (1995). Artificial convolution neural network techniques and applications for lung nodule detection. IEEE Transactions on Medical Imaging, 14(4), 711-718.
  • [4] LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278-2324.
  • [5] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. In Advances in neural information processing systems (pp. 1097-1105).
  • [6] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., ... & Berg, A. C. (2015). Imagenet large scale visual recognition challenge. International Journal of Computer Vision, 115(3), 211-252.
  • [7] Bengio, Y., Courville, A., & Vincent, P. (2013). Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8), 1798-1828.
  • [8] Ravi, D., Wong, C., Deligianni, F., Berthelot, M., Andreu-Perez, J., Lo, B., & Yang, G. Z. (2017). Deep learning for health informatics. IEEE journal of biomedical and health informatics, 21(1), 4-21.
  • [9] Litjens, G., Kooi, T., Bejnordi, B. E., Setio, A. A. A., Ciompi, F., Ghafoorian, M., ... & Sánchez, C. I. (2017). A survey on deep learning in medical image analysis. Medical image analysis, 42, 60-88.
  • [10] Shen, D., Wu, G., & Suk, H. I. (2017). Deep learning in medical image analysis. Annual review of biomedical engineering, 19, 221-248.
  • [11] Nie, D., Zhang, H., Adeli, E., Liu, L., & Shen, D. (2016, October). 3D deep learning for multi-modal imaging-guided survival time prediction of brain tumor patients. In International Conference on Medical Image Computing and Computer-Assisted Intervention(pp. 212-220). Springer, Cham.
  • [12] Xu, T., Zhang, H., Huang, X., Zhang, S., & Metaxas, D. N. (2016, October). Multimodal deep learning for cervical dysplasia diagnosis. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 115-123). Springer, Cham.
  • [13] M. Anthimopoulos, S. Christodoulidis, L. Ebner, A. Christe, and S. Mougiakakou, “Lung pattern classification for interstitial lung diseases using a deep convolutional neural network,” IEEE Trans. Med. Imag., vol. 35, no. 5, pp. 1207–1216, May 2016.
  • [14] Y. Cao et al., “Improving tuberculosis diagnostics using deep learning and mobile health technologies among resource-poor and marginalized communities,” in IEEE Connected Health, Appl., Syst. Eng. Technol., 2016, pp. 274–281.
  • [15] B. Jiang, X. Wang, J. Luo, X. Zhang, Y. Xiong, and H. Pang, “Convolutional neural networks in automatic recognition of trans-differentiated neural progenitor cells under bright-field microscopy,” in Proc. Instrum. Meas., Comput., Commun. Control, 2015, pp. 122–126.
  • [16] M. J. van Grinsven, B. van Ginneken, C. B. Hoyng, T. Theelen, and C. I. Sánchez, “Fast convolutional neural network training using selective data sampling: Application to hemorrhage detection in color fundus images,” IEEE Trans. Med. Imag., vol. 35, no. 5, pp. 1273–1284, May 2016.
  • [17] H. R. Roth et al., “Anatomy-specific classification of medical images using deep convolutional nets,” in Proc. IEEE Int. Symp. Biomed. Imag., 2015, pp. 101–104.
  • [18] Hubel DH, Wiesel TN. Receptive fields and functional architecture of monkey striate cortex. J Physiol 1968;195(1):215–43
  • [19] Min, S., Lee, B., & Yoon, S. (2017). Deep learning in bioinformatics. Briefings in bioinformatics, 18(5), 851-869.
  • [20] Lawrence S, Giles CL, Tsoi AC, et al. Face recognition: a convolutional neural-network approach. IEEE Trans Neural Netw 1997;8(1):98–113
  • [21] Fukushima K. Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol Cybern. 1980;36(4):193–202.
  • [22] Srivastava N, Hinton G, Krizhevsky A, et al. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res 2014;15(1):1929–58.
  • [23] Mikolov, T., Karafiát, M., Burget, L., Černocký, J., & Khudanpur, S. (2010). Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association.
  • [24] LeCun Y, Bengio Y, Hinton G. Deep learning. Nature 2015;521(7553):436–44.
  • [25] Schuster M, Paliwal KK. Bidirectional recurrent neural networks. IEEE Trans Signal Process 1997;45(11):2673–81.
  • [26] Graves A, Schmidhuber J. Offline handwriting recognition with multidimensional recurrent neural networks. In: Advances in Neural Information Processing Systems, 2009. p.545–52.
  • [27] Stollenga MF, Byeon W, Liwicki M, et al. Parallel multidimensional LSTM, with application to fast biomedical volumetric image segmentation. arXiv Preprint arXiv:1506.07452, 2015.
  • [28] Soleymani M, Asghari-Esfeden S, Pantic M, et al. Continuous emotion detection using EEG signals and facial expressions. In: 2014 IEEE International Conference on Multimedia and Expo (ICME), 2014. p. 1–6. IEEE, New York.
  • [29] Petrosian A, Prokhorov D, Homan R, et al. Recurrent neural network based prediction of epileptic seizures in intra-and extracranial EEG. Neurocomputing 2000;30(1):201–18.
  • [30] Davidson PR, Jones RD, Peiris MT. EEG-based lapse detection with high temporal resolution. IEEE Trans Biomed Eng 2007;54(5):832–9.
  • [31] R. Smith-Bindman et al., ``Use of diagnostic imaging studies and associated radiation exposure for patients enrolled in large integrated health care systems, 1996-2010,'' JAMA, vol. 307, no. 22, pp. 2400-2409, 2012.
  • [32] Ker, J., Wang, L., Rao, J., & Lim, T. (2018). Deep learning applications in medical image analysis. IEEE Access, 6, 9375-9389.
  • [33] S. C. B. Lo, S. L. A. Lou, J.-S. Lin, M. T. Freedman, M. V. Chien, and S. K. Mun, ``Artificial convolution neural network techniques and applications for lung nodule detection,'' IEEE Trans. Med. Imag., vol. 14, no. 4, pp. 711-718, Dec. 1995.
  • [34] A. Rajkomar, S. Lingam, A. G. Taylor, M. Blum, and J. Mongan, ``High-throughput classification of radiographs using deep convolutional neural networks,'' J. Digit. Imag., vol. 30, no. 1, pp. 95-101, 2017.
  • [35] C. Szegedy et al., ``Going deeper with convolutions,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2015, pp. 1-9.
  • [36] P. Rajpurkar et al. (Dec. 2017). ``CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning.''
  • [37] X. Wang, Y. Peng, L. Lu, Z. Lu, M. Bagheri, and R. M. Summers. (Dec. 2017). ``ChestX-ray8: Hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases.''
  • [38] G. Huang, Z. Liu, K. Q. Weinberger, and L. van der Maaten. (Aug. 2016). ``Densely connected convolutional networks.''
  • [39] E. Hosseini-Asl et al., ``Alzheimer's disease diagnostics by a 3D deeply supervised adaptable convolutional network,'' Front Biosci., vol. 23, pp. 584-596, Jan. 2018.
  • [40] S. Korolev, A. Safiullin, M. Belyaev, and Y. Dodonova. (Jan. 2017). ``Residual and plain convolutional neural networks for 3D brain MRI classification.''
  • [41] K. Simonyan and A. Zisserman. (Sep. 2014). ``Very deep convolutional networks for large-scale image recognition.''
  • [42] K. He, X. Zhang, S. Ren, and J. Sun, ``Deep residual learning for image recognition,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2016, pp. 770-778.
  • [43] H. Pratt, F. Coenen, D. M. Broadbent, S. P. Harding, and Y. Zheng, ``Convolutional neural networks for diabetic retinopathy,'' Procedia Comput. Sci., vol. 90, pp. 200-205, Jul. 2016.
  • [44] M. D. Abràmoff et al., ``Improved automated detection of diabetic retinopathy on a publicly available dataset through integration of deep learning,'' Investigative Ophthalmol. Vis. Sci., vol. 57, no. 13, pp. 5200-5206, 2016.
  • [45] S. M. Plis et al., ``Deep learning for neuroimaging: A validation study,'' Front Neurosci., vol. 8, p. 229, Aug. 2014.
  • [46] H. I. Suk, C. Y. Wee, S. W. Lee, and D. Shen, ``State-space model with deep learning for functional dynamics estimation in resting-state fMRI,'' Neuroimage, vol. 129, pp. 292-307, Apr. 2016.
  • [47] M. D. Kumar, M. Babaie, S. Zhu, S. Kalra, and H. R. Tizhoosh. (Sep. 2017). ``A comparative study of CNN, BOVW and LBP for classification of histopathological images.''
  • [48] B. A. H. I. Kaggle. (2017). Kaggle Data Science Bowl 2017. [Online]. Available: https://www.kaggle.com/c/data-science-bowl-2017
  • [49] F. Liao, M. Liang, Z. Li, X. Hu, and S. Song. (2017). ``Evaluate the malignancy of pulmonary nodules using the 3D deep leaky noisy-or network.''
  • [50] O. Ronneberger, P. Fischer, and T. Brox, ``U-net: Convolutional networks for biomedical image segmentation,'' in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent., 2015, pp. 234-241.
  • [51] H.-C. Shin et al., ``Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning,'' IEEE Trans. Med. Imag., vol. 35, no. 5, pp. 1285-1298, May 2016.
  • [52] P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun. (Dec. 2013). ``OverFeat: Integrated recognition, localization and detection using convolutional networks.''
  • [53] F. Ciompi et al., ``Automatic classification of pulmonary peri-fissural nodules in computed tomography using an ensemble of 2D views and a convolutional neural network out-of-the-box,'' Med. Image Anal., vol. 26, no. 1, pp. 195-202, 2015.
  • [54] A. Esteva et al., ``Dermatologist-level classification of skin cancer with deep neural networks,'' Nature, vol. 542, no. 7639, pp. 115-118, 2017.
  • [55] D. C. Ciresan, A. Giusti, L. M. Gambardella, and J. Schmidhuber, ``Mitosis detection in breast cancer histology images with deep neural networks,'' in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent., 2013, pp. 411-418.
  • [56] X. Yang et al., ``A deep learning approach for tumor tissue image classification,'' in Proc. Int. Conf. Biomed. Eng., Calgary, AB, Canada, 2016.
  • [57] K. Sirinukunwattana, S. E. A. Raza, Y.-W. Tsang, D. R. J. Snead, I. A. Cree, and N. M. Rajpoot, ``Locality sensitive deep learning for detection and classification of nuclei in routine colon cancer histology images,'' IEEE Trans. Med. Imag., vol. 35, no. 5, pp. 1196-1206, May 2016.
  • [58] J. Xu et al., ``Stacked sparse autoencoder (SSAE) for nuclei detection on breast cancer histopathology images,'' IEEE Trans. Med. Imag., vol. 35, no. 1, pp. 119-130, Jan. 2016.
  • [59] S. Albarqouni, C. Baur, F. Achilles, V. Belagiannis, S. Demirci, and N. Navab, ``AggNet: Deep learning from crowds for mitosis detection in breast cancer histology images,'' IEEE Trans. Med. Imag., vol. 35, no. 5, pp. 1313-1321, May 2016.
  • [60] Z. Yan et al., ``Bodypart recognition using multi-stage deep learning,'' in Information Processing in Medical Imaging, vol. 24. Cham, Switzerland: Springer, Jun. 2015, pp. 449-461.
  • [61] H. R. Roth et al., ``Anatomy-specific classification of medical images using deep convolutional nets,'' in Proc. IEEE 12th Int. Symp. Biomed. Imag. (ISBI), Apr. 2015, pp. 101-104.
  • [62] H.-C. Shin, M. R. Orton, D. J. Collins, S. J. Doran, and M. O. Leach, ``Stacked autoencoders for unsupervised feature learning and multiple organ detection in a pilot study using 4D patient data,'' IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 8, pp. 1930-1943, Aug. 2013.
  • [63] Z. Akkus, A. Galimzianova, A. Hoogi, D. L. Rubin, and B. J. Erickson, ``Deep learning for brain MRI segmentation: State of the art and future directions,'' J. Digit. Imag., vol. 30, no. 4, pp. 449-459, 2017.
  • [64] P. Moeskops, M. A. Viergever, A. M. Mendrik, L. S. de Vries, M. J. N. L. Benders, and I. Isgum, ``Automatic segmentation of MR brain images with a convolutional neural network,'' IEEE Trans. Med. Imag., vol. 35, no. 5, pp. 1252-1261, May 2016.
  • [65] S. Pereira, A. Pinto, V. Alves, and C. A. Silva, ``Brain tumor segmentation using convolutional neural networks in MRI images,'' IEEE Trans. Med. Imag., vol. 35, no. 5, pp. 1240-1251, May 2016.
  • [66] M. Havaei et al., ``Brain tumor segmentation with deep neural networks,'' Med. Image Anal., vol. 35, pp. 18-31, Jan. 2017.
  • [67] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. (Jun. 2016). ``DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs.''
  • [68] A. Casamitjana, S. Puch, A. Aduriz, E. Sayrol, and V. Vilaplana, ``3D convolutional networks for brain tumor segmentation,'' in Proc. MICCAI Challenge Multimodal Brain Tumor Image Segmentation (BRATS), 2016, pp. 65-68.
  • [69] T. Brosch, L. Y.W. Tang, Y. Yoo, D. K. B. Li, A. Traboulsee, and R. Tam, ``Deep 3D convolutional encoder networks with shortcuts for multiscale feature integration applied to multiple sclerosis lesion segmentation,'' IEEE Trans. Med. Imag., vol. 35, no. 5, pp. 1229-1239, May 2016.
  • [70] F. E.-Z. A. El-Gamal, M. Elmogy, and A. Atwan, ``Current trends in medical image registration and fusion,'' Egyptian Inform. J., vol. 17, no. 1, pp. 99-124, 2016.
  • [71] X. Yang, R. Kwitt, M. Styner, and M. Niethammer, ``Quicksilver: Fast predictive image registration - A deep learning approach,'' Neuroimage, vol. 158, pp. 378-396, Jul. 2017.
  • [72] S. Miao, Z. J. Wang, and R. Liao, ``A CNN regression approach for real-time 2D/3D registration,'' IEEE Trans. Med. Imag., vol. 35, no. 5, pp. 1352-1363, May 2016.
  • [73] C. Sun, A. Shrivastava, S. Singh, and A. Gupta. (Aug. 2017). ``Revisiting unreasonable effectiveness of data in deep learning era.''
  • [74] C. Szegedy et al., ``Going deeper with convolutions,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2015, pp. 1-9.
  • [75] R. Socher, B. Huval, B. Bath, C. D. Manning, and A. Y. Ng, ``Convolutional-recursive deep learning for 3D object classification,'' in Proc. Adv. Neural Inf. Process. Syst., 2012, pp. 656-664.
  • [76] J. Snoek, H. Larochelle, and R. P. Adams, ``Practical Bayesian optimization of machine learning algorithms,'' in Proc. Adv. Neural Inf. Process. Syst., 2012, pp. 2951-2959.
  • [77] J. Cho, K. Lee, E. Shin, G. Choy, and S. Do. (Nov. 2015). ``How much data is needed to train a medical image deep learning system to achieve necessary high accuracy?''
  • [78] J. T. Guibas, T. S. Virdi, and P. S. Li. (Dec. 2017). ``Synthetic medical images from dual generative adversarial networks.''
  • [79] P. Costa et al. (Jan. 2017). ``Towards adversarial retinal image synthesis.''
  • [80] P. Moeskops, M. Veta, M. W. Lafarge, K. A. Eppenhof, and J. P. Pluim, ``Adversarial training and dilated convolutions for brain MRI segmentation,'' in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Cham, Switzerland: Springer, 2017, pp. 56-64.
  • [81] K. Kamnitsas et al., ``Unsupervised domain adaptation in brain lesion segmentation with adversarial networks,'' in Proc. Int. Conf. Inf. Process. Med. Imag., 2017, pp. 597-609.
  • [82] V. Alex, M. S. KP, S. S. Chennamsetty, and G. Krishnamurthi, ``Generative adversarial networks for brain lesion detection,'' in Proc. Med. Imag., Image Process., vol. 101330. Feb. 2017, p. 101330G.
  • [83] M. A. Mazurowski, P. A. Habas, J. M. Zurada, J. Y. Lo, J. A. Baker, and G. D. Tourassi, ``Training neural network classifiers for medical decision making: The effects of imbalanced datasets on classification performance,'' Neural Netw., vol. 21, nos. 2-3, pp. 427-436, 2008.
  • [84] H.-I. Suk et al., ``Latent feature representation with stacked autoencoder for AD/MCI diagnosis,'' Brain Struct. Funct., vol. 220, no. 2, pp. 841-859, 2015.
  • [85] X. Liu, H. R. Tizhoosh, and J. Kofman, ``Generating binary tags for fast medical image retrieval based on convolutional nets and radon transform,'' in Proc. Int. Joint Conf. Neural Netw. (IJCNN), 2016, pp. 2872-2878.
  • [86] Y. Anavi, I. Kogan, E. Gelbart, O. Geva, and H. Greenspan, ``Visualizing and enhancing a deep learning framework using patients age and gender for chest X-ray image retrieval,'' in Proc. Med. Imag., Comput.-Aided Diagnosis, vol. 9785. Jul. 2016, p. 978510.
  • [87] X. Wang et al. (Mar. 2016). ``Unsupervised category discovery via looped deep pseudo-task optimization using a large scale radiology image database.''
  • [88] H.-C. Shin, L. Lu, L. Kim, A. Seff, J. Yao, and R. M. Summers, ``Interleaved text/image Deep Mining on a large-scale radiology database,'' in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2015, pp. 1090-1099.
  • [89] Y. Duan et al. (Dec. 2017). ``One-shot imitation learning.'' [Online]. Available: https://arxiv.org/abs/1703.07326
  • [90] S. Levine, C. Finn, T. Darrell, and P. Abbeel, ``End-to-end training of deep visuomotor policies,'' J. Mach. Learn. Res., vol. 17, no. 39, pp. 1-40, 2016.
  • [91] B. Thananjeyan, A. Garg, S. Krishnan, C. Chen, L. Miller, and K. Goldberg, ``Multilateral surgical pattern cutting in 2D orthotropic gauze with deep reinforcement learning policies for tensioning,'' in Proc. IEEE Int. Conf. Robot. Autom. (ICRA), May/Jun. 2017, pp. 2371-2378.
  • [92] D. Seita, S. Krishnan, R. Fox, S. McKinley, J. Canny, and K. Goldberg. (Sep. 2017). ``Fast and reliable autonomous surgical debridement with cable-driven robots using a two-phase calibration procedure.'' [Online]. Available: https://arxiv.org/abs/1709.06668
There are 92 citations in total.

Details

Primary Language English
Journal Section Articles
Authors

Zülfikar Aslan (ORCID: 0000-0002-2706-5715)

Publication Date January 1, 2019
Acceptance Date July 18, 2018
Published in Issue Year 2018 Volume: 3 Issue: 2

Cite

APA Aslan, Z. (2019). ON THE USE OF DEEP LEARNING METHODS ON MEDICAL IMAGES. The International Journal of Energy and Engineering Sciences, 3(2), 1-15.

IMPORTANT NOTES

No part of the material protected by this copyright may be reproduced or utilized in any form or by any means without the prior written permission of the copyright owners, unless the use is fair dealing for the purpose of private study, research, or review. The authors reserve the right to have their material used for purely educational and research purposes.

*Please note that all authors are responsible for the originality of their work and for any issues of plagiarism, multiple publication, disclosure, conflicts of interest, and fundamental errors in the published work. Authors submitting a manuscript for publication in IJEES also accept that the manuscript may be screened for plagiarism using iThenticate software. For experimental work involving animals, approval from the relevant ethics committee should have been obtained beforehand, assuring that the experiments were conducted according to relevant national or international guidelines on the care and use of laboratory animals. Authors may be requested to provide evidence to this end.

**Authors are strongly encouraged to follow the IJEES policies on copyright/licensing and ethics before submitting their manuscripts.

