Image-processing-based distance measurement with the Image-Meter
Year 2023, pp. 1129-1140, 07.10.2022
Haydar Yanık, Bülent Turan
Abstract
Today, image sensors (cameras) are widely used for image analysis (classification, segmentation, etc.) and synthesis (object detection, tracking, distance estimation, etc.). This study aims to lay the theoretical foundations for an image-processing-based measurement device (Image-meter) that could serve the industrial purposes currently addressed by laser meters, lidar meters, radar, and similar instruments. To this end, the image-processing-based distance estimation methods in the literature were reviewed; the main factors that degrade the performance of these methods were identified, and a new method unaffected by these factors was developed. The planned measurement device was placed on theoretical foundations whose operation rests on hardware and software components. The study presents these theoretical foundations and the designs of the hardware and software components. Calculations for the 1-1000 m range indicate that an error rate below 0.2% is achievable. The hardware and software components will evidently increase this error rate; these errors will consist of systematic and random errors. They were anticipated in this study and incorporated into the distance measurement equation. The magnitudes of the anticipated errors will be determined in future work, through the development of the hardware prototype and the software components.
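The abstract refers to a distance measurement equation without stating it. As an illustration only, and not the authors' actual method, the classical pinhole-camera relation used by many image-based rangefinders in the cited literature can be sketched as follows: the distance D to a target of known physical size S that appears s pixels wide follows from D = f_px · S / s, where f_px is the focal length expressed in pixels. All names and numeric values below are assumptions for this sketch.

```python
def estimate_distance_m(focal_length_px: float, real_size_m: float, size_px: float) -> float:
    """Pinhole-camera distance estimate: D = f_px * S / s_px.

    focal_length_px: camera focal length expressed in pixels (from calibration)
    real_size_m:     known physical size of the target, in metres
    size_px:         apparent size of the target in the image, in pixels
    """
    if size_px <= 0:
        raise ValueError("apparent size in pixels must be positive")
    return focal_length_px * real_size_m / size_px


# Example: a 2 m target imaged at 40 px with a 2000 px focal length
# sits at roughly 2000 * 2.0 / 40 = 100 m.
print(estimate_distance_m(2000.0, 2.0, 40.0))  # 100.0
```

Note how the estimate degrades with range: at long distances the target spans only a few pixels, so a one-pixel segmentation error shifts the result substantially, which is consistent with the abstract's concern about error sources in the hardware and software components.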
Supporting Institution
Tokat Gaziosmanpaşa University Rectorate, Scientific Research Projects Coordination Unit
Acknowledgments
The authors thank the Tokat Gaziosmanpaşa University Rectorate Scientific Research Projects Coordination Unit and the Tokat Gaziosmanpaşa University Technology Transfer Office.