Research Article

Year 2023,
Volume: 11 Issue: 2, 41-46, 18.05.2023
### Supporting Institution

OPTİMAK SÜREÇ TASARIMI VE UYGULAMALARI A.Ş

### Acknowledgments

We thank OPTİMAK STU A.Ş for the working environment and the product support it provided.

### References

- Ohio State University, “How the brain sees the world in 3-D: Scientists record visual cortex combining 2-D and depth info,” ScienceDaily, 2017. <www.sciencedaily.com/releases/2017/03/170321110312.htm>
- M. S. Ahn, H. Chae, D. Noh, H. Nam, and D. Hong, “Analysis and Noise Modeling of the Intel RealSense D435 for Mobile Robots,” 2019. doi: 10.1109/URAI.2019.8768489.
- S. Lee, “Depth camera image processing and applications,” in 2012 19th IEEE International Conference on Image Processing, 2012, pp. 545–548.
- J.-H. Cho, I.-Y. Chang, S. Kim, and K. H. Lee, “Depth image processing technique for representing human actors in 3DTV using single depth camera,” in 2007 3DTV Conference, 2007, pp. 1–4.
- J.-S. Jeong, K.-C. Kwon, M.-U. Erdenebat, Y. Piao, N. Kim, and K.-H. Yoo, “Development of a real-time integral imaging display system based on graphics processing unit parallel processing using a depth camera,” Opt. Eng., vol. 53, no. 1, p. 15103, 2014.
- K. Adi and C. E. Widodo, “Distance Measurement With a Stereo Camera,” Int. J. Innov. Res. Adv. Eng., 2017.
- J. B. Kim, “Efficient vehicle detection and distance estimation based on aggregated channel features and inverse perspective mapping from a single camera,” Symmetry (Basel)., 2019, doi: 10.3390/sym11101205.
- G. Lin, Y. Tang, X. Zou, J. Xiong, and J. Li, “Guava detection and pose estimation using a low-cost RGB-D sensor in the field,” Sensors (Switzerland), 2019, doi: 10.3390/s19020428.
- B. Parr, M. Legg, and F. Alam, “Analysis of Depth Cameras for Proximal Sensing of Grapes,” Sensors, vol. 22, no. 11, 2022, doi: 10.3390/s22114179.
- P. Kurtser, O. Ringdahl, N. Rotstein, R. Berenstein, and Y. Edan, “In-field grape cluster size assessment for vine yield estimation using a mobile robot and a consumer level RGB-D Camera,” IEEE Robot. Autom. Lett., 2020, doi: 10.1109/LRA.2020.2970654.
- Z. Wang, K. B. Walsh, and B. Verma, “On-tree mango fruit size estimation using RGB-D images,” Sensors (Switzerland), 2017, doi: 10.3390/s17122738.
- N. Maeda, H. Suzuki, T. Kitajima, A. Kuwahara, and T. Yasuno, “Measurement of Tomato Leaf Area Using Depth Camera,” J. Signal Process., vol. 26, no. 4, pp. 123–126, 2022.
- B. Zheng, G. Sun, Z. Meng, and R. Nan, “Vegetable Size Measurement Based on Stereo Camera and Keypoints Detection,” Sensors, vol. 22, no. 4, 2022, doi: 10.3390/s22041617.
- C. Vo-Le, P. Van Muoi, N. H. Son, N. Van San, V. K. Duong, and N. T. Huyen, “Automatic Method for Measuring Object Size Using 3D Camera,” ICCE 2020 - 2020 IEEE 8th Int. Conf. Commun. Electron., pp. 365–369, 2021, doi: 10.1109/ICCE48956.2021.9352115.
- H. Xu, J. Xu, and W. Xu, “Survey of 3D modeling using depth cameras,” Virtual Reality and Intelligent Hardware. 2019. doi: 10.1016/j.vrih.2019.09.003.
- L. Wu, Y. Long, H. Sun, N. Liu, W. Wang, and Q. Dong, “Length Measurement of Potato Leaf using Depth Camera,” 2018. doi: 10.1016/j.ifacol.2018.08.197.
- A. Ruchay, V. Kober, K. Dorofeev, V. Kolpakov, and S. Miroshnikov, “Accurate body measurement of live cattle using three depth cameras and non-rigid 3-D shape recovery,” Comput. Electron. Agric., 2020, doi: 10.1016/j.compag.2020.105821.
- B. A. Griffin and J. J. Corso, “Depth from Camera Motion and Object Detection,” Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., pp. 1397–1406, 2021, doi: 10.1109/CVPR46437.2021.00145.
- L. Keselman et al., “Intel RealSense Stereoscopic Depth Cameras,” Comput. Vis. Pattern Recognit., 2017.

### Abstract

In traditional methods, a fixed object of known size is taken as a reference, and the dimensions of other objects are calculated by comparison with it. However, when objects at different heights are to be measured, the reference object's measurements must also be changed. In this study, the size and angle of products flowing along a line were determined using a depth camera. A data set was created by measuring an object of known dimensions from 34 different distances; it consists of the ratios of the pixel counts of the object's edges to the lengths of those edges. By examining the correlation between this data set and the distance values, a mathematical equation was derived with the help of MATLAB. Using this equation, all three dimensions of an object can be calculated from its distance to the camera alone, regardless of the object's height and color. In addition, taking the long side of the object as a reference, its angle with a software-generated reference line was calculated. In this way, the size and angle of products flowing along the line were determined with a single camera, without a reference object and without any restriction on color or size.

### Keywords

Depth camera, dimension measurement, angle measurement, mathematical model

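The calibration idea described in the abstract — relating camera distance to a pixels-per-length ratio, then recovering edge lengths and the long-side angle — can be sketched as follows. This is a minimal illustration, not the authors' actual equation or data: the inverse-distance model, the synthetic calibration values, and all function names are assumptions.

```python
# Hypothetical sketch: fit a curve mapping camera distance to pixels-per-mm,
# then recover an edge length and the long side's angle against a horizontal
# reference line. Synthetic data stands in for the paper's 34 measurements.
import math
import numpy as np

# Under a pinhole-camera assumption, pixels_per_mm ~ f / distance.
f_true = 600.0
distances = np.linspace(400, 1400, 34)          # 34 calibration distances (mm)
px_per_mm = f_true / distances                  # measured ratio at each distance

# Fit px_per_mm = a / d + b by linear least squares in 1/d.
A = np.column_stack([1.0 / distances, np.ones_like(distances)])
(a, b), *_ = np.linalg.lstsq(A, px_per_mm, rcond=None)

def edge_length_mm(pixels: float, distance_mm: float) -> float:
    """Convert an edge's pixel count to millimetres at a given distance."""
    return pixels / (a / distance_mm + b)

def edge_angle_deg(p0, p1) -> float:
    """Angle of the object's long side against a horizontal reference line."""
    return math.degrees(math.atan2(p1[1] - p0[1], p1[0] - p0[0]))

# Example: a 120 mm edge seen from 800 mm spans about 120 * (600/800) = 90 px.
length = edge_length_mm(90.0, 800.0)            # ~120 mm
angle = edge_angle_deg((100, 100), (190, 190))  # 45-degree long side
```

The least-squares fit in `1/d` plays the role of the MATLAB-derived equation in the study; with a real depth camera, `distance_mm` would come from the depth frame and the pixel counts from edge detection.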

There are 19 references in total.

| Primary Language | English |
| --- | --- |
| Subjects | Artificial Intelligence |
| Section | Research Articles |
| Authors | |
| Early View Date | May 18, 2023 |
| Publication Date | May 18, 2023 |
| Submission Date | May 15, 2023 |
| Published in Issue | Year 2023, Volume: 11 Issue: 2 |

Academic Platform Journal of Engineering and Smart Systems