Review

Deep Generative Models in Medical Imaging: A Literature Review

Year 2024, EARLY VIEW, 1 - 1

Abstract

Deep learning has been applied extensively in recent years across many disciplines, including medical imaging. Generative adversarial networks (GANs) in particular have become widespread in the medical field because of their ability to produce realistic images. This article reviews three families of deep generative models for medical image data augmentation, namely GANs, variational autoencoders (VAEs), and diffusion models, and, given the dominance of GANs in the field, places particular emphasis on the other generative models. Unlike prior surveys that focus solely on GANs or on traditional data augmentation methods, this review compares the different deep generative models against one another. GANs remain the generative model most frequently employed for augmenting medical image data, ahead of variational autoencoders. In recent years, however, diffusion models have attracted more attention than both VAEs and GANs for medical image data augmentation. This trend may reflect the fact that many GAN-related research directions have already been investigated, making it more challenging to advance the current applications of these architectures.
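As a minimal illustration of the diffusion-model family discussed above, the sketch below implements the standard DDPM forward (noising) process q(x_t | x_0) in NumPy. It is a toy example under assumed settings (a linear beta schedule and a random array standing in for a medical image slice), not a method taken from any of the surveyed works:

```python
import numpy as np

def forward_diffusion(x0, t, betas, rng):
    """Sample x_t ~ q(x_t | x_0) for a DDPM forward (noising) process.

    x0    : clean image array, values roughly in [-1, 1]
    t     : integer timestep, 0 <= t < len(betas)
    betas : per-step noise schedule (e.g. linear from 1e-4 to 0.02)
    """
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]          # cumulative product \bar{alpha}_t
    noise = rng.standard_normal(x0.shape)      # epsilon ~ N(0, I)
    # Closed-form noising: x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return xt, noise

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)          # a commonly used linear schedule
x0 = rng.standard_normal((64, 64))             # stand-in for a medical image slice
xt, eps = forward_diffusion(x0, 999, betas, rng)
```

A denoising network trained to predict `eps` from `xt` can then be sampled in reverse to synthesize new images for augmentation; at the final timestep almost all of the original signal has been replaced by Gaussian noise.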

  • [145] Delgado, J. M. D., & Oyedele, L., “Deep learning with small datasets: using autoencoders to address limited datasets in construction management”, Applied Soft Computing, 112: 107836, (2021).
  • [146] Caterini, A. L., Doucet, A., & Sejdinovic, D., “Hamiltonian variational auto-encoder”, Advances in Neural Information Processing Systems, 31, (2018).
  • [147] He, Y., Wang, L., Yang, F., Clarysse, P., Robini, M., & Zhu, Y., “Effect of different configurations of diffusion gradient directions on accuracy of diffusion tensor estimation in cardiac DTI”, In 2022 16th IEEE International Conference on Signal Processing (ICSP), China, 1: 437-441, (2022).
  • [148] Talo, M., Baloglu, U. B., Yıldırım, Ö., & Acharya, U. R., “Application of deep transfer learning for automated brain abnormality classification using MR images”, Cognitive Systems Research, 54: 176-188, (2019).
  • [149] Ren, P., Xiao, Y., Chang, X., Huang, P. Y., Li, Z., Gupta, B. B., ... & Wang, X., “A survey of deep active learning”, ACM Computing Surveys (CSUR), 54(9): 1-40, (2021).
  • [150] Rahimi, S., Oktay, O., Alvarez-Valle, J., & Bharadwaj, S., “Addressing the exorbitant cost of labeling medical images with active learning”, In International Conference on Machine Learning in Medical Imaging and Analysis, Spain, 1, (2021).


Kaynakça

  • [1] Marki, M., Frydrychowicz, A., Kozerke, S., Hope, M., & Wieben, O., “4D flow MRI”, Journal of Magnetic Resonance Imaging, 36(5): 1015-1036, (2012).
  • [2] Garvey, C. J., & Hanlon, R., “Computed tomography in clinical practice”, BMJ, 324(7345): 1077-1080, (2002).
  • [3] Awaja, F., & Pavel, D., “Recycling of PET”, European Polymer Journal, 41(7): 1453-1477, (2005).
  • [4] Zhang, J., Xie, Y., Wu, Q., & Xia, Y., “Medical image classification using synergic deep learning”, Medical Image Analysis, 54: 10-19, (2019).
  • [5] Haralick, R. M., & Shapiro, L. G., “Image segmentation techniques”, Computer Vision, Graphics, and Image Processing, 29(1): 100-132, (1985).
  • [6] Rekanos, I. T., “Neural-network-based inverse-scattering technique for online microwave medical imaging”, IEEE Transactions on Magnetics, 38(2): 1061-1064, (2002).
  • [7] Shorten, C., & Khoshgoftaar, T. M., “A survey on image data augmentation for deep learning”, Journal of Big Data, 6(1): 1-48, (2019).
  • [8] Li, B., Hou, Y., & Che, W., “Data augmentation approaches in natural language processing: A survey”, AI Open, 3: 71-90, (2022).
  • [9] Nishio, M., Noguchi, S., & Fujimoto, K., “Automatic pancreas segmentation using coarse-scaled 2D model of deep learning: Usefulness of data augmentation and deep U-net”, Applied Sciences, 10(10): 3360, (2020).
  • [10] Tripathi, A. M., & Paul, K., “Data augmentation guided knowledge distillation for environmental sound classification”, Neurocomputing, 489: 59-77, (2022).
  • [11] Creswell, A., White, T., Dumoulin, V., Arulkumaran, K., Sengupta, B., & Bharath, A. A., “Generative adversarial networks: An overview”, IEEE Signal Processing Magazine, 35(1): 53-65, (2018).
  • [12] Sandfort, V., Yan, K., Pickhardt, P. J., & Summers, R. M., “Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks”, Scientific Reports, 9(1): 16884, (2019).
  • [13] Singh, N. K., & Raza, K., “Medical image generation using generative adversarial networks: A review”, Health Informatics: A Computational Perspective in Healthcare, 77-96, (2021).
  • [14] Verma, R., Mehrotra, R., Rane, C., Tiwari, R., & Agariya, A. K., “Synthetic image augmentation with generative adversarial network for enhanced performance in protein classification”, Biomedical Engineering Letters, 10: 443-452, (2020).
  • [15] Su, L., Fu, X., & Hu, Q., “Generative adversarial network based data augmentation and gender-last training strategy with application to bone age assessment”, Computer Methods and Programs in Biomedicine, 212: 106456, (2021).
  • [16] Abdelhalim, I. S. A., Mohamed, M. F., & Mahdy, Y. B., “Data augmentation for skin lesion using self-attention based progressive generative adversarial network”, Expert Systems with Applications, 165: 113922, (2021).
  • [17] Kushwaha, V., & Nandi, G. C., “Study of prevention of mode collapse in generative adversarial network (GAN)”, In 2020 IEEE 4th Conference on Information & Communication Technology (CICT), Chennai, India, 1-6, (2020).
  • [18] Chen, X., Xu, J., Zhou, R., Chen, W., Fang, J., & Liu, C., “TrajVAE: A variational autoencoder model for trajectory generation”, Neurocomputing, 428: 332-339, (2021).
  • [19] Stöckl, A., “Evaluating a synthetic image dataset generated with stable diffusion”, In International Congress on Information and Communication Technology, Singapore: Springer Nature Singapore, 805-818, (2023).
  • [20] Özbey, M., Dalmaz, O., Dar, S. U., Bedel, H. A., Özturk, Ş., Güngör, A., & Çukur, T., “Unsupervised medical image translation with adversarial diffusion models”, IEEE Transactions on Medical Imaging, 1, (2023).
  • [21] De Souza, V. L. T., Marques, B. A. D., Batagelo, H. C., & Gois, J. P., “A review on generative adversarial networks for image generation”, Computers & Graphics, 114: 13-25, (2023).
  • [22] Mumuni, A., & Mumuni, F., “Data augmentation: A comprehensive survey of modern approaches”, Array, 16: 100258, (2022).
  • [23] Wang, Y., Pan, X., Song, S., Zhang, H., Huang, G., & Wu, C., “Implicit semantic data augmentation for deep networks”, Advances in Neural Information Processing Systems, 32, (2019). [24] Engelmann, J., & Lessmann, S., “Conditional Wasserstein GAN-based oversampling of tabular data for imbalanced learning”, Expert Systems with Applications, 174: 114582, (2021).
  • [25] Kim, H. J., & Lee, D., “Image denoising with conditional generative adversarial networks (CGAN) in low dose chest images”, Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 954: 161914, (2020).
  • [26] Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A., “Image-to-image translation with conditional adversarial networks”, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1125-1134, (2017).
  • [27] Yin, X. X., Sun, L., Fu, Y., Lu, R., & Zhang, Y., “U-Net-Based medical image segmentation”, Journal of Healthcare Engineering, (2022).
  • [28] Dewi, C., Chen, R. C., Liu, Y. T., & Tai, S. K., “Synthetic data generation using DCGAN for improved traffic sign recognition”, Neural Computing and Applications, 34(24): 21465-21480, (2022).
  • [29] Han, C., Rundo, L., Araki, R., Furukawa, Y., Mauri, G., Nakayama, H., & Hayashi, H., “Infinite brain MR images: PGGAN-based data augmentation for tumor detection”, In Neural Approaches to Dynamics of Signal Exchanges, Singapore, 291-303, Springer, (2019).
  • [30] Li, W., Zhong, X., Shao, H., Cai, B., & Yang, X., “Multi-mode data augmentation and fault diagnosis of rotating machinery using modified ACGAN designed with new framework”, Advanced Engineering Informatics, 52: 101552, (2022).
  • [31] Niu, Z., Yu, K., & Wu, X., “LSTM-based VAE-GAN for time-series anomaly detection”, Sensors, 20(13): 3738, (2022).
  • [32] Bilgili, A. K., Akpınar, Ö., Öztürk, M. K., Özçelik, S., & Özbay, E., “XRD vs Raman for InGaN/GaN structures”, Politeknik Dergisi, 23(2), 291-296, (2020).
  • [33] Chen, X., Sun, Y., Zhang, M., & Peng, D., “Evolving deep convolutional variational autoencoders for image classification”, IEEE Transactions on Evolutionary Computation, 25(5): 815-829, (2020).
  • [34] Levy, S., Laloy, E., & Linde, N., “Variational Bayesian inference with complex geostatistical priors using inverse autoregressive flows”, Computers & Geosciences, 105263, (2022).
  • [35] Ye, F., & Bors, A. G., “Deep mixture generative autoencoders”, IEEE Transactions on Neural Networks and Learning Systems, 33(10): 5789-5803, (2021).
  • [36] Xia, Y., Chen, C., Shu, M., & Liu, R., “A denoising method of ECG signal based on variational autoencoder and masked convolution”, Journal of Electrocardiology, 80: 81-90, (2023).
  • [37] Yao, W., Shen, Y., Nicolls, F., & Wang, S. Q., “Conditional diffusion model-based data augmentation for Alzheimer’s prediction”, In International Conference on Neural Computing for Advanced Applications, Singapore, 33-46, Springer Nature Singapore, (2023).
  • [38] Sun, W., Chen, D., Wang, C., Ye, D., Feng, Y., & Chen, C., “Accelerating diffusion sampling with classifier-based feature distillation”, In 2023 IEEE International Conference on Multimedia and Expo (ICME), 810-815, (2023).
  • [39] Kong, Z., & Ping, W., “On fast sampling of diffusion probabilistic models”, arXiv preprint arXiv:2106.00132, (2021).
  • [40] Croitoru, F. A., Hondru, V., Ionescu, R. T., & Shah, M., “Diffusion models in vision: A survey”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(9): 10850-10869, (2023).
  • [41] Han, C., Hayashi, H., Rundo, L., Araki, R., Shimoda, W., Muramatsu, S., ... & Nakayama, H., “GAN-based synthetic brain MR image generation”, In 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 734-738, (2018).
  • [42] Frid-Adar, M., Diamant, I., Klang, E., Amitai, M., Goldberger, J., & Greenspan, H., “GAN-based synthetic medical image augmentation for increased CNN performance in liver lesion classification”, Neurocomputing, 321: 321-331, (2018).
  • [43] Guibas, J. T., Virdi, T. S., & Li, P. S., “Synthetic medical images from dual generative adversarial networks”, arXiv preprint arXiv:1709.01872, (2017).
  • [44] Platscher, M., Zopes, J., & Federau, C., “Image translation for medical image generation: Ischemic stroke lesion segmentation”, Biomedical Signal Processing and Control, 72: 103283, (2022).
  • [45] Fawakherji, M., Potena, C., Prevedello, I., Pretto, A., Bloisi, D. D., & Nardi, D., “Data augmentation using GANs for crop/weed segmentation in precision farming”, In 2020 IEEE Conference on Control Technology and Applications (CCTA), Montreal, QC, Canada, 279-284, (2020).
  • [46] Yurt, M., Dar, S. U., Erdem, A., Erdem, E., Oguz, K. K., & Çukur, T., “mustGAN: Multi-stream generative adversarial networks for MR image synthesis”, Medical Image Analysis, 70: 101944, (2021).
  • [47] Dar, S. U., Yurt, M., Karacan, L., Erdem, A., Erdem, E., & Çukur, T., “Image synthesis in multi-contrast MRI with conditional generative adversarial networks”, IEEE Transactions on Medical Imaging, 38(10): 2375-2388, (2019).
  • [48] Sun, Y., Yuan, P., & Sun, Y., “MM-GAN: 3D MRI data augmentation for medical image segmentation via generative adversarial networks”, In 2020 IEEE International Conference on Knowledge Graph (ICKG), Nanjing, China, 227-234, (2020).
  • [49] Huang, P., Liu, X., & Huang, Y., “Data augmentation for medical MR image using generative adversarial networks”, arXiv preprint, arXiv:2111.14297, (2021).
  • [50] Costa, P., Galdran, A., Meyer, M. I., Niemeijer, M., Abràmoff, M., Mendonça, A. M., & Campilho, A., “End-to-end adversarial retinal image synthesis”, IEEE Transactions on Medical Imaging, 37(3): 781-791, (2017).
  • [51] Zhuang, P., Schwing, A. G., & Koyejo, O., “fMRI data augmentation via synthesis”, In 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 1783-1787, (2019).
  • [52] Liang, J., & Chen, J., “Data augmentation of thyroid ultrasound images using generative adversarial network”, In 2021 IEEE International Ultrasonics Symposium (IUS), 1-4, (2021).
  • [53] Beers, A., Brown, J., Chang, K., Campbell, J. P., Ostmo, S., Chiang, M. F., & Kalpathy-Cramer, J., “High-resolution medical image synthesis using progressively grown generative adversarial networks”, arXiv preprint, arXiv:1805.03144, (2018).
  • [54] Sun, L., Wang, J., Huang, Y., Ding, X., Greenspan, H., & Paisley, J., “An adversarial learning approach to medical image synthesis for lesion detection”, IEEE Journal of Biomedical and Health Informatics, 24(8): 2303-2314, (2020).
  • [55] Wang, Q., Zhang, X., Chen, W., Wang, K., & Zhang, X., “Class-aware multi-window adversarial lung nodule synthesis conditioned on semantic features” In Medical Image Computing and Computer Assisted Intervention– MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part VI 23, 589-598, Springer International Publishing, (2020).
  • [56] Geng, X., Yao, Q., Jiang, K., & Zhu, Y., “Deep neural generative adversarial model based on VAE+ GAN for disorder diagnosis”, In 2020 International Conference on Internet of Things and Intelligent Applications (ITIA), Zhenjiang, China, 1-7, (2020).
  • [57] Baur, C., Albarqouni, S., & Navab, N., “Generating highly realistic images of skin lesions with GANs. In OR 2.0 Context-Aware Operating Theaters”, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis: First International Workshop, Granada, Spain, 260-267, Springer International Publishing, (2018).
  • [58] Ben-Cohen, A., Klang, E., Raskin, S. P., Amitai, M. M., & Greenspan, H., “Virtual PET images from CT data using deep convolutional networks: initial results”, In Simulation and Synthesis in Medical Imaging: Second International Workshop, SASHIMI 2017, Held in Conjunction with MICCAI 2017, QC, Canada, Proceedings 2, 49-57, Springer International Publishing, (2017).
  • [59] Phukan, S., Singh, J., Gogoi, R., Dhar, S., & Jana, N. D., “Covid-19 chest x-ray image generation using resnet-dcgan model”, In Advances in Intelligent Computing and Communication: Proceedings of ICAC 2021, Singapore, 227-234, Springer Nature Singapore, (2022).
  • [60] Han, C., Rundo, L., Murao, K., Noguchi, T., Shimahara, Y., Milacski, Z. Á., ... & Satoh, S. I., “MADGAN: Unsupervised medical anomaly detection GAN using multiple adjacent brain MRI slice reconstruction”, BMC bioinformatics, 22(2): 1-20, (2021).
  • [61] Hirte, A. U., Platscher, M., Joyce, T., Heit, J. J., Tranvinh, E., & Federau, C., “Realistic generation of diffusion-weighted magnetic resonance brain images with deep generative models”, Magnetic Resonance Imaging, 81: 60-66, (2021).
  • [62] Zhao, D., Zhu, D., Lu, J., Luo, Y., & Zhang, G., “Synthetic medical images using F&BGAN for improved lung nodules classification by multi-scale VGG16”, Symmetry, 10(10): 519, (2018).
  • [63] Guan, Q., Chen, Y., Wei, Z., Heidari, A. A., Hu, H., Yang, X. H., ... & Chen, F., “Medical image augmentation for lesion detection using a texture-constrained multichannel progressive GAN”, Computers in Biology and Medicine, 145: 105444, (2022).
  • [64] Ahmad, B., Sun, J., You, Q., Palade, V., & Mao, Z., “Brain tumor classification using a combination of variational autoencoders and generative adversarial networks”, Biomedicines, 10(2): 223, (2022).
  • [65] Pombo, G., Gray, R., Cardoso, M. J., Ourselin, S., Rees, G., Ashburner, J., & Nachev, P., “Equitable modelling of brain imaging by counterfactual augmentation with morphologically constrained 3d deep generative models”, Medical Image Analysis, 84: 102723, (2023).
  • [66] Tan, J., Jing, L., Huo, Y., Li, L., Akin, O., & Tian, Y., “LGAN: Lung segmentation in CT scans using generative adversarial network”, Computerized Medical Imaging and Graphics, 87: 101817, (2021).
  • [67] Wang, S., Chen, Z., You, S., Wang, B., Shen, Y., & Lei, B., “Brain stroke lesion segmentation using consistent perception generative adversarial network”, Neural Computing and Applications, 34(11): 8657-8669, (2022).
  • [68] Yu, Z., Xiang, Q., Meng, J., Kou, C., Ren, Q., & Lu, Y., “Retinal image synthesis from multiple-landmarks input with generative adversarial networks”, Biomedical engineering online, 18(1): 1-15, (2019).
  • [69] Zhang, J., Yu, L., Chen, D., Pan, W., Shi, C., Niu, Y., ... & Cheng, Y., “Dense GAN and multi-layer attention based lesion segmentation method for COVID-19 CT images”, Biomedical Signal Processing and Control, 69: 102901, (2021).
  • [70] Mahapatra, D., Bozorgtabar, B., Thiran, J. P., & Reyes, M., “Efficient active learning for image classification and segmentation using a sample selection and conditional generative adversarial network”, In International Conference on Medical Image Computing and Computer-Assisted Intervention, 580-588, Cham: Springer International Publishing, (2018).
  • [71] Qasim, A. B., Ezhov, I., Shit, S., Schoppe, O., Paetzold, J. C., Sekuboyina, A., ... & Menze, B., “Red-GAN: Attacking class imbalance via conditioned generation. Yet another medical imaging perspective”, In Medical imaging with deep learning, 655-668, (2020).
  • [72] Z, Y., Yang, Z., Zhang, H., Eric, I., Chang, C., Fan, Y., & Xu, Y., “3D segmentation guided style-based generative adversarial networks for pet synthesis”, IEEE Transactions on Medical Imaging, 41(8): 2092-2104, (2022).
  • [73] Yang, T., Wu, T., Li, L., & Zhu, C., “SUD-GAN: deep convolution generative adversarial network combined with short connection and dense block for retinal vessel segmentation”, Journal of digital imaging, 33: 946-957, (2020).
  • [74] Gu, X., Knutsson, H., Nilsson, M., & Eklund, A., “Generating diffusion MRI scalar maps from T1 weighted images using generative adversarial networks”, In Image Analysis: 21st Scandinavian Conference, SCIA 2019, Norrköping, Sweden”, June 11–13, 2019, Proceedings 21: 489-498, Springer International Publishing, (2019).
  • [75] Zhang, Z., Yang, L., & Zheng, Y., “Translating and segmenting multimodal medical volumes with cycle-and shape-consistency generative adversarial network”, In Proceedings of the IEEE conference on computer vision and pattern Recognition, 9242-9251, (2018).
  • [76] Toda, R., Teramoto, A., Kondo, M., Imaizumi, K., Saito, K., & Fujita, H., “Lung cancer CT image generation from a free-form sketch using style-based pix2pix for data augmentation”, Scientific reports, 12(1): 12867, (2022).
  • [77] Hu, X., “Multi-texture GAN: exploring the multi-scale texture translation for brain MR images”, arXiv preprint, arXiv:2102.07225, (2021).
  • [78] Yang, H., Lu, X., Wang, S. H., Lu, Z., Yao, J., Jiang, Y., & Qian, P., “Synthesizing multi-contrast MR images via novel 3D conditional Variational auto-encoding GAN”, Mobile Networks and Applications, 26: 415-424, (2021).
  • [79] Sikka, A., Virk, J. S., & Bathula, D. R., “MRI to PET Cross-Modality Translation using Globally and Locally Aware GAN (GLA-GAN) for Multi-Modal Diagnosis of Alzheimer's Disease”, arXiv preprint arXiv:2108.02160, (2021).
  • [80] Amirrajab, S., Lorenz, C., Weese, J., Pluim, J., & Breeuwer, M., ”Pathology Synthesis of 3D Consistent Cardiac MR Images Using 2D VAEs and GANs”, In International Workshop on Simulation and Synthesis in Medical Imaging, 34-42, Cham: Springer International Publishing, (2022).
  • [81] Pesteie, M., Abolmaesumi, P., & Rohling, R. N., “Adaptive augmentation of medical data using independently conditional variational auto-encoders”, IEEE transactions on medical imaging, 38(12): 2807-2820, (2019).
  • [82] Chadebec, C., Thibeau-Sutre, E., Burgos, N., & Allassonnière, S., “Data augmentation in high dimensional low sample size setting using a geometry-based variational autoencoder”, IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(3): 2879-2896, (2022).
  • [83] Huo, J., Vakharia, V., Wu, C., Sharan, A., Ko, A., Ourselin, S., & Sparks, R., “Brain Lesion Synthesis via Progressive Adversarial Variational Auto-Encoder”, In International Workshop on Simulation and Synthesis in Medical Imaging, Singapure, 101-111, Cham: Springer International Publishing, (2022).
  • [84] Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., & Courville, A. C., “Improved training of wasserstein gans”, Advances in neural information processing systems, 30, (2017).
  • [85] Wang, L., Guo, D., Wang, G., & Zhang, S., “Annotation-efficient learning for medical image segmentation based on noisy pseudo labels and adversarial learning”, IEEE Transactions on Medical Imaging, 40(10): 2795-2807, (2020).
  • [86] Naval Marimont, S., & Tarroni, G., “Implicit field learning for unsupervised anomaly detection in medical images”, In Medical Image Computing and Computer Assisted Intervention–MICCAI 2021: 24th International Conference, Strasbourg, Proceedings, Part II 24: 189-198, Springer International Publishing, (2021).
  • [87] Madan, Y., Veetil, I. K., V, S., EA, G., & KP, S., “Synthetic Data Augmentation of MRI using Generative Variational Autoencoder for Parkinson’s Disease Detection”, In Evolution in Computational Intelligence: Proceedings of the 9th International Conference on Frontiers in Intelligent Computing: Theory and Applications (FICTA 2021), Singapore, 171-178, Singapore: Springer Nature Singapore, (2022).
  • [88] Chadebec, C., & Allassonnière, S., “Data augmentation with variational autoencoders and manifold sampling”, In Deep Generative Models, and Data Augmentation, Labelling, and Imperfections: First Workshop, DGM4MICCAI 2021, and First Workshop, DALI 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, 184-192, Springer International Publishing, (2021).
  • [89] Celik, N., Ali, S., Gupta, S., Braden, B., & Rittscher, J., “Endouda: a modality independent segmentation approach for endoscopy imaging”, In International Conference on Medical Image Computing and Computer-Assisted Intervention, 303-312, Cham: Springer International Publishing, (2021).
  • [90] Pinaya, W. H. L., Tudosiu, P. D., Gray, R., Rees, G., Nachev, P., Ourselin, S., & Cardoso, M. J., “Unsupervised brain anomaly detection and segmentation with transformers”, arXiv preprint, arXiv:2102.11650, (2021).
  • [91] Zhu, H., Togo, R., Ogawa, T., & Haseyama, M., “Diversity Learning Based on Multi-Latent Space for Medical Image Visual Question Generation”, Sensors, 23(3): 1057, (2023).
  • [92] Biffi, C., Oktay, O., Tarroni, G., Bai, W., De Marvao, A., Doumou, G., ... & Rueckert, D., “Learning interpretable anatomical features through deep generative models: Application to cardiac remodeling.”, In Medical Image Computing and Computer Assisted Intervention–MICCAI 2018: 21st International Conference, Granada, Spain,16-20, 2018, Proceedings, 464-471, Springer International Publishing, (2018).
  • [93] Volokitin, A., Erdil, E., Karani, N., Tezcan, K. C., Chen, X., Van Gool, L., & Konukoglu, E., “Modelling the distribution of 3D brain MRI using a 2D slice VAE”, In Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, October 4–8, 2020, Proceedings, Part VII 23, 657-666, Springer International Publishing, (2020).
  • [94] Huang, Q., Qiao, C., Jing, K., Zhu, X., & Ren, K., “Biomarkers identification for Schizophrenia via VAE and GSDAE-based data augmentation”, Computers in Biology and Medicine, 146, 105603, (2022).
  • [95] Diamantis, D. E., Gatoula, P., & Iakovidis, D. K., “EndoVAE: Generating Endoscopic Images with a Variational Autoencoder”, In 2022 IEEE 14th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), 1-5, (2022).
  • [96] Sundgaard, J. V., Hannemose, M. R., Laugesen, S., Bray, P., Harte, J., Kamide, Y., ... & Christensen, A. N., “Multi-modal data generation with a deep metric variational autoencoder”, arXiv preprint arXiv:2202.03434, (2022).
  • [97] Pinaya, W. H., Tudosiu, P. D., Dafflon, J., Da Costa, P. F., Fernandez, V., Nachev, P., ... & Cardoso, M. J., “Brain imaging generation with latent diffusion models”, In MICCAI Workshop on Deep Generative Models, Singapure, 117-126, Cham: Springer Nature Switzerland, (2022).
  • [98] Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B., “High-resolution image synthesis with latent diffusion models”, In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, USA, 10684-10695, (2022).
  • [99] Mao, X., Li, Q., Xie, H., Lau, R. Y., Wang, Z., & Paul Smolley, S., “Least squares generative adversarial networks”, In Proceedings of the IEEE international conference on computer vision, 2794-2802, (2017).
  • [100] Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., & Hochreiter, S., “Gans trained by a two time-scale update rule converge to a local nash equilibrium”, Advances in neural information processing systems, 30, (2017).
  • [101] Fernandez, V., Pinaya, W. H. L., Borges, P., Tudosiu, P. D., Graham, M. S., Vercauteren, T., & Cardoso, M. J., “Can segmentation models be trained with fully synthetically generated data?”, In International Workshop on Simulation and Synthesis in Medical Imaging, Singapure, 79-90, Cham: Springer International Publishing, (2022).
  • [102] Isensee, F., Petersen, J., Klein, A., Zimmerer, D., Jaeger, P. F., Kohl, S., ... & Maier-Hein, K. H., “nnu-net: Self-adapting framework for u-net-based medical image segmentation”, arXiv preprint, arXiv:1809.10486, (2018). [103] Lyu, Q., & Wang, G., “Conversion between ct and mri images using diffusion and score-matching models”, arXiv preprint, arXiv:2209.12104, (2022).
  • [104] Song, Y., Sohl-Dickstein, J., Kingma, D. P., Kumar, A., Ermon, S., & Poole, B., “Score-based generative modeling through stochastic differential equations”, arXiv preprint, arXiv:2011.13456, (2020).
  • [105] Nyholm, T., Svensson, S., Andersson, S., Jonsson, J., Sohlin, M., Gustafsson, C., ... & Gunnlaugsson, A., “MR and CT data with multiobserver delineations of organs in the pelvic area—Part of the Gold Atlas Project”, Medical physics, 45(3): 1295-1300, (2018).
  • [106] Darıcı, M. B., “Performance analysis of combination of cnn-based models with adaboost algorithm to diagnose covid-19 disease”, Politeknik Dergisi, 26(1), 179-190, (2023).
  • [107] Dorjsembe, Z., Odonchimed, S., & Xiao, F., “Three-dimensional medical image synthesis with denoising diffusion probabilistic models”, In Medical Imaging with Deep Learning, Switzerland, (2022).
  • [108] Jäger, P. F., Bickelhaupt, S., Laun, F. B., Lederer, W., Heidi, D., Kuder, T. A., ... & Maier-Hein, K. H., “Revealing hidden potentials of the q-space signal in breast cancer”, In Medical Image Computing and Computer Assisted Intervention− MICCAI 2017: 20th International Conference Quebec City, QC, Canada, 664-671, Springer International Publishing, (2017).
  • [109] Mao, W., Chen, C., Gao, H., Xiong, L., & Lin, Y., “A deep learning-based automatic staging method for early endometrial cancer on MRI images”, Frontiers in Physiology, 13: 974245, (2022).
  • [110] Sangeetha, S. K. B., Muthukumaran, V., Deeba, K., Rajadurai, H., Maheshwari, V., & Dalu, G. T., “Multiconvolutional Transfer Learning for 3D Brain Tumor Magnetic Resonance Images”, Computational Intelligence and Neuroscience, (2022).
  • [111] Barın, S., & Güraksın, G. E., “An automatic skin lesion segmentation system with hybrid FCN-ResAlexNet”, Engineering Science and Technology, an International Journal, 34, 101174, (2022).
  • [112] Sagers, L. W., Diao, J. A., Groh, M., Rajpurkar, P., Adamson, A. S., & Manrai, A. K., “Improving dermatology classifiers across populations using images generated by large diffusion models”, arXiv preprint, arXiv:2211.13352, (2022).
  • [113] Peng, W., Adeli, E., Zhao, Q., & Pohl, K. M., “Generating Realistic 3D Brain MRIs Using a Conditional Diffusion Probabilistic Model”, arXiv preprint, arXiv:2212.08034, (2022).
  • [114] Ali, H., Murad, S., & Shah, Z., “Spot the fake lungs: Generating synthetic medical images using neural diffusion models”, In Irish Conference on Artificial Intelligence and Cognitive Science, Ireland, 32-39, Cham: Springer Nature Switzerland, (2022).
  • [115] Saeed, S. U., Syer, T., Yan, W., Yang, Q., Emberton, M., Punwani, S., ... & Hu, Y., “Bi-parametric prostate MR image synthesis using pathology and sequence-conditioned stable diffusion”, arXiv preprint, arXiv:2303.02094, (2023).
  • [116] Weber, T., Ingrisch, M., Bischl, B., & Rügamer, D., “Cascaded Latent Diffusion Models for High-Resolution Chest X-ray Synthesis”, In Pacific-Asia Conference on Knowledge Discovery and Data Mining, 180-191, Cham: Springer Nature Switzerland, (2023).
  • [117] Peng, J., Qiu, R. L., Wynne, J. F., Chang, C. W., Pan, S., Wang, T., ... & Yang, X., “CBCT-Based Synthetic CT Image Generation Using Conditional Denoising Diffusion Probabilistic Model”, arXiv preprint, arXiv:2303.02649, (2023).
  • [118] Meng, X., Gu, Y., Pan, Y., Wang, N., Xue, P., Lu, M., ... & Shen, D., “A novel unified conditional score-based generative framework for multi-modal medical image completion”, arXiv preprint, arXiv:2207.03430, (2022).
  • [119] Kim, B., & Ye, J. C., “Diffusion deformable model for 4D temporal medical image generation”, In International Conference on Medical Image Computing and Computer-Assisted Intervention, Singapore, 539-548, Cham: Springer Nature Switzerland, (2022).
  • [120] Şenalp, F. M., & Ceylan, M., “Termal yüz görüntülerinden oluşan yeni bir veri seti için derin öğrenme tabanlı süper çözünürlük uygulaması”, Politeknik Dergisi, 1-1, (2022).
  • [121] Kazerouni, A., Aghdam, E. K., Heidari, M., Azad, R., Fayyaz, M., Hacihaliloglu, I., & Merhof, D., “Diffusion models for medical image analysis: A comprehensive survey”, arXiv preprint, arXiv:2211.07804, (2022).
  • [122] Abdollahi, B., Tomita, N., & Hassanpour, S., “Data augmentation in training deep learning models for medical image analysis”, Deep Learners and Deep Learner Descriptors for Medical Applications, Germany, 167-180, (2020).
  • [123] Huang, H., He, R., Sun, Z., & Tan, T., “IntroVAE: Introspective variational autoencoders for photographic image synthesis”, Advances in Neural Information Processing Systems, 31, (2018).
  • [124] Amyar, A., Ruan, S., Vera, P., Decazes, P., & Modzelewski, R., “RADIOGAN: Deep convolutional conditional generative adversarial network to generate PET images”, In Proceedings of the 7th International Conference on Bioinformatics Research and Applications, Berlin, Germany, 28-33, (2020).
  • [125] Kynkäänniemi, T., Karras, T., Laine, S., Lehtinen, J., & Aila, T., “Improved precision and recall metric for assessing generative models”, Advances in Neural Information Processing Systems, 32, (2019).
  • [126] Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P., “Image quality assessment: from error visibility to structural similarity”, IEEE Transactions on Image Processing, 13(4): 600-612, (2004).
  • [127] Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., ... & Shi, W., “Photo-realistic single image super-resolution using a generative adversarial network”, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, USA, 4681-4690, (2017).
  • [128] Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O., “The unreasonable effectiveness of deep features as a perceptual metric”, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, USA, 586-595, (2018).
  • [129] Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., & Chen, X., “Improved techniques for training GANs”, Advances in Neural Information Processing Systems, 29, (2016).
  • [130] Sudre, C. H., Li, W., Vercauteren, T., Ourselin, S., & Jorge Cardoso, M., “Generalised dice overlap as a deep learning loss function for highly unbalanced segmentations”, In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support: Third International Workshop, DLMIA 2017, and 7th International Workshop, ML-CDS 2017, Held in Conjunction with MICCAI 2017, Québec City, QC, Canada, 240-248, Springer International Publishing, (2017).
  • [131] Rockafellar, R. T., & Wets, R. J. B., “Variational analysis”, Springer Science & Business Media, 317, (2009).
  • [132] Gretton, A., Borgwardt, K. M., Rasch, M. J., Schölkopf, B., & Smola, A., “A kernel two-sample test”, The Journal of Machine Learning Research, 13(1): 723-773, (2012).
  • [133] Cover, T., & Hart, P., “Nearest neighbor pattern classification”, IEEE Transactions on Information Theory, 13(1): 21-27, (1967).
  • [134] Bounliphone, W., Belilovsky, E., Blaschko, M. B., Antonoglou, I., & Gretton, A., “A test of relative similarity for model selection in generative models”, arXiv preprint, arXiv:1511.04581, (2015).
  • [135] Vaserstein, L. N., “Markov processes over denumerable products of spaces, describing large systems of automata”, Problemy Peredachi Informatsii, 5(3): 64-72, (1969).
  • [136] Fawcett, T., “An introduction to ROC analysis”, Pattern Recognition Letters, 27(8): 861-874, (2006).
  • [137] Nguyen, X., Wainwright, M. J., & Jordan, M. I., “Estimating divergence functionals and the likelihood ratio by convex risk minimization”, IEEE Transactions on Information Theory, 56(11): 5847-5861, (2010).
  • [138] Sheikh, H. R., & Bovik, A. C., “A visual information fidelity approach to video quality assessment”, In The First International Workshop on Video Processing and Quality Metrics for Consumer Electronics, 7(2): 2117-2128, (2005).
  • [139] Wang, Z., & Bovik, A. C., “A universal image quality index”, IEEE Signal Processing Letters, 9(3): 81-84, (2002).
  • [140] Tavse, S., Varadarajan, V., Bachute, M., Gite, S., & Kotecha, K., “A Systematic Literature Review on Applications of GAN-Synthesized Images for Brain MRI”, Future Internet, 14(12): 351, (2022).
  • [141] Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., ... & Sutskever, I., “Zero-shot text-to-image generation”, In International Conference on Machine Learning, 8821-8831, Online, PMLR, (2021).
  • [142] Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E. L., ... & Norouzi, M., “Photorealistic text-to-image diffusion models with deep language understanding”, Advances in Neural Information Processing Systems, 35: 36479-36494, (2022).
  • [143] Kang, M., Zhu, J. Y., Zhang, R., Park, J., Shechtman, E., Paris, S., & Park, T., “Scaling up GANs for text-to-image synthesis”, In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10124-10134, (2023).
  • [144] Sauer, A., Karras, T., Laine, S., Geiger, A., & Aila, T., “StyleGAN-T: Unlocking the power of GANs for fast large-scale text-to-image synthesis”, arXiv preprint, arXiv:2301.09515, (2023).
  • [145] Delgado, J. M. D., & Oyedele, L., “Deep learning with small datasets: using autoencoders to address limited datasets in construction management”, Applied Soft Computing, 112: 07836, (2021).
  • [146] Caterini, A. L., Doucet, A., & Sejdinovic, D., “Hamiltonian variational auto-encoder”, Advances in Neural Information Processing Systems, 31, (2018).
  • [147] He, Y., Wang, L., Yang, F., Clarysse, P., Robini, M., & Zhu, Y., “Effect of different configurations of diffusion gradient directions on accuracy of diffusion tensor estimation in cardiac DTI”, In 2022 16th IEEE International Conference on Signal Processing (ICSP), China, 1: 437-441, (2022).
  • [148] Talo, M., Baloglu, U. B., Yıldırım, Ö., & Acharya, U. R., “Application of deep transfer learning for automated brain abnormality classification using MR images”, Cognitive Systems Research, 54: 176-188, (2019).
  • [149] Ren, P., Xiao, Y., Chang, X., Huang, P. Y., Li, Z., Gupta, B. B., ... & Wang, X., “A survey of deep active learning”, ACM Computing Surveys (CSUR), 54(9): 1-40, (2021).
  • [150] Rahimi, S., Oktay, O., Alvarez-Valle, J., & Bharadwaj, S., “Addressing the exorbitant cost of labeling medical images with active learning”, In International Conference on Machine Learning in Medical Imaging and Analysis, Spain, 1, (2021).
There are 148 references in total.

Details

Primary Language	Turkish
Subjects	Deep Learning, Artificial Intelligence (Other)
Section	Review Article
Authors

Begüm Şener 0000-0002-2170-2162

Early View Date	June 7, 2024
Publication Date
Submission Date	September 8, 2023
Published Issue	Year 2024, EARLY VIEW

How to Cite

APA	Şener, B. (2024). Tıbbi Görüntülemede Derin Üretken Modeller: Bir Literatür Taraması. Politeknik Dergisi, 1-1.
ABSTRACTING / INDEXING

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.