Research Article

DAC: Differentiable Auto-Cropping in Deep Learning

Year 2024, Volume 24, Issue 6, 1382-1394, 02.12.2024
https://doi.org/10.35414/akufemubid.1475807

Abstract

Auto-cropping, the process of automatically adjusting the boundaries of an image to focus on the region of interest, is crucial to improving the diagnostic quality of dental panoramic radiographs. Its importance lies in its ability to standardize the size of different input images with minimal loss of information, thus ensuring consistency and improving the performance of subsequent image-processing tasks. Although CNNs are widely used in many studies, research on auto-cropping for images of different sizes remains limited. This study explores the potential of differentiable auto-cropping for dental panoramic radiographs. A unique dataset of 20,973 dental panoramic radiographs, mostly with a resolution of 2836×1536 or close to it and labeled into five classes by three dentists, was used; it is the same dataset as in the previous study (Top et al. 2023). The ResNet-101 model, which was the most successful network for this dataset (Top et al. 2023), was used for the evaluation. To reduce variance, the model was evaluated with 10-fold cross-validation for both auto-cropped and non-auto-cropped training. Data augmentation was also used to obtain more accurate and robust results; for auto-cropped training, it was configured to be much less aggressive than for non-auto-cropped training. Thanks to the proposed auto-crop optimization, developed to reduce dataset-related issues, accuracy improved by 1.8 percentage points, from 92.7% to 94.5%, and the macro-average AUC rose from 0.989 to 0.993. The proposed auto-crop optimization can be implemented as a trainable layer in an end-to-end CNN and can be applied to other problems as well. Raising accuracy from 92.7% to 94.5% is particularly challenging because of diminishing returns: there is little room left for improvement. The results demonstrate the potential of the proposed differentiable auto-crop algorithm and encourage its use in other fields.
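
A minimal PyTorch sketch of the general idea is given below. It assumes a spatial-transformer-style crop (Jaderberg et al. 2015) rather than the exact DAC optimization described in the paper; the module name DifferentiableCrop, the thumbnail size, the 224×224 output size, and the single-channel input are all illustrative assumptions. A small localization head predicts a scale and offset for the crop window, and bilinear grid sampling keeps the whole operation differentiable, so the crop parameters can be trained jointly with a ResNet-101 classifier.

# Illustrative sketch only (not the authors' exact DAC formulation): a
# differentiable crop layer in the spirit of spatial transformer networks,
# prepended to a ResNet-101 backbone so the crop is learned end to end.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class DifferentiableCrop(nn.Module):
    """Predicts a crop window (scale and offset) from a coarse thumbnail of
    the input and resamples that window to a fixed output size with bilinear
    grid sampling, so gradients flow back into the crop parameters."""

    def __init__(self, out_size=(224, 224)):
        super().__init__()
        self.out_size = out_size
        # Tiny localization head that looks at a downsampled view of the image.
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 4),  # raw (tx, ty, sx, sy) in normalized coordinates
        )

    def forward(self, x):
        thumb = F.interpolate(x, size=(128, 128), mode="bilinear",
                              align_corners=False)
        tx, ty, sx, sy = self.loc(thumb).unbind(dim=1)
        sx, sy = torch.sigmoid(sx), torch.sigmoid(sy)   # scales in (0, 1)
        tx, ty = torch.tanh(tx), torch.tanh(ty)         # offsets in (-1, 1)
        zero = torch.zeros_like(sx)
        # Affine matrix of a pure crop: scale plus translation, no rotation.
        theta = torch.stack([
            torch.stack([sx, zero, tx], dim=1),
            torch.stack([zero, sy, ty], dim=1),
        ], dim=1)                                       # shape (B, 2, 3)
        grid = F.affine_grid(theta, (x.size(0), x.size(1), *self.out_size),
                             align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)


# Prepend the crop layer to a ResNet-101 classifier (five classes, grayscale
# radiographs) and train the whole pipeline end to end.
backbone = torchvision.models.resnet101(num_classes=5)
backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model = nn.Sequential(DifferentiableCrop(out_size=(224, 224)), backbone)

Because grid sampling is differentiable with respect to both the image and the sampling grid, the classification loss alone can drive the crop window toward the diagnostically relevant region; this is the general mechanism the sketch illustrates, not the specific optimization proposed in the paper.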

References

  • Çelik, B., Çelik, M.E., 2022. Automated detection of dental restorations using deep learning on panoramic radiographs. Dentomaxillofacial Radiology 51, 20220244. https://doi.org/10.1259/dmfr.20220244
  • Chen, J., Bai, G., Liang, S., Li, Z., 2016. Automatic image cropping: A computational complexity study, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 507–515. https://doi.org/10.1109/CVPR.2016.61
  • Choi, J.-W., 2011. Assessment of panoramic radiography as a national oral examination tool: review of the literature. Imaging science in dentistry 41, 1–6. https://doi.org/10.5624%2Fisd.2011.41.1.1
  • Corbet, E., Ho, D., Lai, S., 2009. Radiographs in periodontal disease diagnosis and management. Australian dental journal 54, S27–S43.
  • Dai, J., He, K., Sun, J., 2016a. Instance-aware semantic segmentation via multi-task network cascades, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3150–3158. https://doi.org/10.1109/CVPR.2016.343
  • Dai, J., Li, Y., He, K., Sun, J., 2016b. R-fcn: Object detection via region-based fully convolutional networks. Advances in neural information processing systems 29. https://doi.org/10.48550/arXiv.1605.06409
  • Demir, K., Aksakalli, I.K., Bayğin, N., Sökmen, Ö.Ç., 2023. Deep Learning Based Lesion Detection on Dental Panoramic Radiographs, in: 2023 Innovations in Intelligent Systems and Applications Conference (ASYU). IEEE, pp. 1–6.
  • Fidan, U., Uzunhisarcıklı, E., Çalıkuşu, İ., 2019. Classification of dermatological data with self organizing maps and support vector machine. Afyon Kocatepe University Journal of Science and Engineering 19, 894–901. https://doi.org/10.35414/akufemubid.591816
  • Fitzgerald, R., 2000. Phase-sensitive x-ray imaging. Physics today 53, 23–26. https://doi.org/10.1063/1.1292471
  • Girshick, R., 2015. Fast r-cnn, in: Proceedings of the IEEE International Conference on Computer Vision. pp. 1440–1448. https://doi.org/10.1109/ICCV.2015.169
  • Han, C., Ye, J., Zhong, Y., Tan, X., Zhang, C., Gao, C., Sang, N., 2019. Re-id driven localization refinement for person search, in: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 9814–9823. https://doi.org/10.1109/ICCV.2019.00991
  • He, K., Gkioxari, G., Dollár, P., Girshick, R., 2017. Mask r-cnn, in: Proceedings of the IEEE International Conference on Computer Vision. pp. 2961–2969. https://doi.org/10.1109/ICCV.2017.322
  • He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770–778.
  • Jader, G., Fontineli, J., Ruiz, M., Abdalla, K., Pithon, M., Oliveira, L., 2018. Deep instance segmentation of teeth in panoramic X-ray images, in: 2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI). IEEE, pp. 400–407.
  • Jaderberg, M., Simonyan, K., Zisserman, A., Kavukcuoglu, K., 2015. Spatial transformer networks. Advances in neural information processing systems 28. https://doi.org/10.48550/arXiv.1506.02025
  • Jiang, W., Sun, W., Tagliasacchi, A., Trulls, E., Yi, K.M., 2019. Linearized multi-sampling for differentiable image transformation, in: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 2988–2997. https://doi.org/10.1109/ICCV.2019.00308
  • Katsumata, A., 2023. Deep learning and artificial intelligence in dental diagnostic imaging. Japanese Dental Science Review 59, 329–333.
  • Kemal, A., Kılıçarslan, S., 2021. COVID-19 diagnosis prediction in emergency care patients using convolutional neural network. Afyon Kocatepe University Journal of Science and Engineering 21, 300–309. https://doi.org/10.35414/akufemubid.788898
  • Kohinata, K., Kitano, T., Nishiyama, W., Mori, M., Iida, Y., Fujita, H., Katsumata, A., 2023. Deep learning for preliminary profiling of panoramic images. Oral Radiology 39, 275–281.
  • Kooi, T., Litjens, G., Van Ginneken, B., Gubern-Mérida, A., Sánchez, C.I., Mann, R., den Heeten, A., Karssemeijer, N., 2017. Large scale deep learning for computer aided detection of mammographic lesions. Medical image analysis 35, 303–312. https://doi.org/10.1016/j.media.2016.07.007
  • Krizhevsky, A., Sutskever, I., Hinton, G.E., 2012. Imagenet classification with deep convolutional neural networks, in: Advances in Neural Information Processing Systems. pp. 1097–1105.
  • LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86, 2278–2324. https://doi.org/10.1109/5.726791
  • Liang, Y., Lv, J., Li, D., Yang, X., Wang, Z., Li, Q., 2022. Accurate Cobb Angle Estimation on Scoliosis X-Ray Images via Deeply-Coupled Two-Stage Network With Differentiable Cropping and Random Perturbation. IEEE Journal of Biomedical and Health Informatics 27, 1488–1499. https://doi.org/10.1109/JBHI.2022.3229847
  • Liedke, G.S., Spin-Neto, R., Vizzotto, M.B., Da Silveira, P.F., Silveira, H.E.D., Wenzel, A., 2015. Diagnostic accuracy of conventional and digital radiography for detecting misfit between the tooth and restoration in metal-restored teeth. The Journal of prosthetic dentistry 113, 39–47.
  • Liu, S., Lu, Y., Wang, J., Hu, S., Zhao, J., Zhu, Z., 2020. A new focus evaluation operator based on max–min filter and its application in high quality multi-focus image fusion. Multidimensional Systems and Signal Processing 31, 569–590. https://doi.org/10.1007/s11045-019-00675-2
  • Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., 2019. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems 32. https://doi.org/10.48550/arXiv.1912.01703
  • Pineda, L., Fan, T., Monge, M., Venkataraman, S., Sodhi, P., Chen, R.T., Ortiz, J., DeTone, D., Wang, A., Anderson, S., 2022. Theseus: A library for differentiable nonlinear optimization. Advances in Neural Information Processing Systems 35, 3801–3818. https://doi.org/10.48550/arXiv.2207.09442
  • Pröbster, L., Diehl, J., 1992. Slip-casting alumina ceramics for crown and bridge restorations. Quintessence International 23.
  • Recasens, A., Kellnhofer, P., Stent, S., Matusik, W., Torralba, A., 2018. Learning to zoom: a saliency-based sampling layer for neural networks, in: Proceedings of the European Conference on Computer Vision (ECCV). pp. 51–66. https://doi.org/10.1007/978-3-030-01240-3_4
  • Ren, S., He, K., Girshick, R., Sun, J., 2015. Faster r-cnn: Towards real-time object detection with region proposal networks. Advances in neural information processing systems 28. https://doi.org/10.48550/arXiv.1506.01497
  • Riad, R., Teboul, O., Grangier, D., Zeghidour, N., 2022. Learning strides in convolutional neural networks. arXiv preprint arXiv:2202.01653. https://doi.org/10.48550/arXiv.2202.01653
  • Rippel, O., Snoek, J., Adams, R.P., 2015. Spectral representations for convolutional neural networks. Advances in neural information processing systems 28. https://doi.org/10.48550/arXiv.1506.03767
  • Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., 2015. Imagenet large scale visual recognition challenge. International journal of computer vision 115, 211–252.
  • Sakaguchi, R.L., Powers, J.M., 2012. Craig’s restorative dental materials-e-book. Elsevier Health Sciences.
  • Scarfe, W.C., Farman, A.G., 2008. What is cone-beam CT and how does it work? Dental Clinics of North America 52, 707–730. https://doi.org/10.1016/j.cden.2008.05.005
  • Shin, H.-C., Roth, H.R., Gao, M., Lu, L., Xu, Z., Nogues, I., Yao, J., Mollura, D., Summers, R.M., 2016. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE transactions on medical imaging 35, 1285–1298. https://doi.org/10.1109/TMI.2016.2528162
  • Simonyan, K., Zisserman, A., 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. https://doi.org/10.48550/arXiv.1409.1556
  • Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A., 2017. Inception-v4, inception-resnet and the impact of residual connections on learning, in: Thirty-First AAAI Conference on Artificial Intelligence.
  • Top, A.E., 2023. Evaluation of Fixed Restorations on Panoramic Radiographs using Deep Learning and Auto-Crop (PhD Thesis). Ankara Yıldırım Beyazıt Üniversitesi Fen Bilimleri Enstitüsü.
  • Top, A.E., Özdoğan, M.S., Yeniad, M., 2023. Quantitative level determination of fixed restorations on panoramic radiographs using deep learning. International Journal of Computerized Dentistry 26, 285–299. https://doi.org/10.3290/j.ijcd.b3840521
  • White, S.C., Heslop, E.W., Hollender, L.G., Mosier, K.M., Ruprecht, A., Shrout, M.K., 2001. Parameters of radiologic care: An official report of the American Academy of Oral and Maxillofacial Radiology. Oral Surgery, Oral Medicine, Oral Pathology, Oral Radiology, and Endodontology 91, 498–511.
  • Yurttakal, A.H., Baş, H., 2021. Possibility Prediction Of Diabetes Mellitus At Early Stage Via Stacked Ensemble Deep Neural Network. Afyon Kocatepe University Journal of Science and Engineering 21, 812–819. https://doi.org/10.35414/akufemubid.946264
  • Zeiler, M.D., Fergus, R., 2014. Visualizing and understanding convolutional networks, in: European Conference on Computer Vision. Springer, pp. 818–833.
There are 43 citations in total.

Details

Primary Language: English
Subjects: Computer Software
Journal Section: Articles
Authors

Ahmet Esad Top (ORCID: 0000-0001-5017-1594)

Mustafa Yeniad (ORCID: 0000-0002-9422-4974)

Mahmut Sertaç Özdoğan (ORCID: 0000-0003-1312-8794)

Fatih Nar (ORCID: 0000-0002-3003-8136)

Early Pub Date: November 11, 2024
Publication Date: December 2, 2024
Submission Date: April 30, 2024
Acceptance Date: September 1, 2024
Published in Issue: Year 2024, Volume 24, Issue 6

Cite

APA Top, A. E., Yeniad, M., Özdoğan, M. S., Nar, F. (2024). DAC: Differentiable Auto-Cropping in Deep Learning. Afyon Kocatepe Üniversitesi Fen Ve Mühendislik Bilimleri Dergisi, 24(6), 1382-1394. https://doi.org/10.35414/akufemubid.1475807
AMA Top AE, Yeniad M, Özdoğan MS, Nar F. DAC: Differentiable Auto-Cropping in Deep Learning. Afyon Kocatepe Üniversitesi Fen Ve Mühendislik Bilimleri Dergisi. December 2024;24(6):1382-1394. doi:10.35414/akufemubid.1475807
Chicago Top, Ahmet Esad, Mustafa Yeniad, Mahmut Sertaç Özdoğan, and Fatih Nar. “DAC: Differentiable Auto-Cropping in Deep Learning”. Afyon Kocatepe Üniversitesi Fen Ve Mühendislik Bilimleri Dergisi 24, no. 6 (December 2024): 1382-94. https://doi.org/10.35414/akufemubid.1475807.
EndNote Top AE, Yeniad M, Özdoğan MS, Nar F (December 1, 2024) DAC: Differentiable Auto-Cropping in Deep Learning. Afyon Kocatepe Üniversitesi Fen Ve Mühendislik Bilimleri Dergisi 24 6 1382–1394.
IEEE A. E. Top, M. Yeniad, M. S. Özdoğan, and F. Nar, “DAC: Differentiable Auto-Cropping in Deep Learning”, Afyon Kocatepe Üniversitesi Fen Ve Mühendislik Bilimleri Dergisi, vol. 24, no. 6, pp. 1382–1394, 2024, doi: 10.35414/akufemubid.1475807.
ISNAD Top, Ahmet Esad et al. “DAC: Differentiable Auto-Cropping in Deep Learning”. Afyon Kocatepe Üniversitesi Fen Ve Mühendislik Bilimleri Dergisi 24/6 (December 2024), 1382-1394. https://doi.org/10.35414/akufemubid.1475807.
JAMA Top AE, Yeniad M, Özdoğan MS, Nar F. DAC: Differentiable Auto-Cropping in Deep Learning. Afyon Kocatepe Üniversitesi Fen Ve Mühendislik Bilimleri Dergisi. 2024;24:1382–1394.
MLA Top, Ahmet Esad et al. “DAC: Differentiable Auto-Cropping in Deep Learning”. Afyon Kocatepe Üniversitesi Fen Ve Mühendislik Bilimleri Dergisi, vol. 24, no. 6, 2024, pp. 1382-94, doi:10.35414/akufemubid.1475807.
Vancouver Top AE, Yeniad M, Özdoğan MS, Nar F. DAC: Differentiable Auto-Cropping in Deep Learning. Afyon Kocatepe Üniversitesi Fen Ve Mühendislik Bilimleri Dergisi. 2024;24(6):1382-94.


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.