Research Article
Year 2020, Volume 5, Issue 2, 66-72, 01.06.2020
https://doi.org/10.26833/ijeg.589489


Integration of custom street view and low-cost motion sensors


Abstract

Virtual reality is an artificial, computer-generated environment that a user can navigate and interact with. Street View, released by Google in 2007, is an ideal tool for discovering places and locations. This service provides not only spatial information but also a virtual reality environment for the user. Since the service is available only in certain locations, Google enables users to create a street view from custom panoramic images with the help of the Google Maps Application Programming Interface (API) for JavaScript. This study aims to integrate body motions with a custom street view service created for the Yildiz Technical University Davutpasa Campus, which offers a historical setting and vast areas to explore. A Microsoft Kinect for Xbox 360 motion sensor, together with the Flexible Action and Articulated Skeleton Toolkit (FAAST) interface, has been employed for this purpose. This integration provides a low-cost alternative for a virtual reality experience. The proposed system can be implemented for virtual museums, heritage sites, or planetariums consisting of panoramic images.
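The system couples two parts: a custom Street View panorama built with the Google Maps JavaScript API, and Kinect body gestures that FAAST translates into input events driving the panorama. A minimal sketch of that gesture-to-camera mapping is shown below; the gesture names, step sizes, key bindings, and the `applyGesture`/`bindGestures` helpers are illustrative assumptions, not the authors' implementation (only `getPov`/`setPov` are standard `StreetViewPanorama` methods).

```javascript
// Sketch: map FAAST-style body gestures to Street View camera updates.
// FAAST typically emulates key presses for recognized gestures, so the
// browser side can listen for keydown events and update the panorama's
// point of view (POV). Gesture names and step sizes are assumptions.

// Pure mapping from a gesture to a new POV, kept side-effect free so it
// is easy to test in isolation.
function applyGesture(pov, gesture) {
  const TURN_STEP = 15;   // degrees of heading per "lean" gesture
  const PITCH_STEP = 10;  // degrees of pitch per "look" gesture
  switch (gesture) {
    case "lean_left":
      return { ...pov, heading: (pov.heading - TURN_STEP + 360) % 360 };
    case "lean_right":
      return { ...pov, heading: (pov.heading + TURN_STEP) % 360 };
    case "look_up":
      return { ...pov, pitch: Math.min(90, pov.pitch + PITCH_STEP) };
    case "look_down":
      return { ...pov, pitch: Math.max(-90, pov.pitch - PITCH_STEP) };
    default:
      return pov; // unrecognized gestures leave the camera unchanged
  }
}

// Browser-side wiring (requires the Google Maps JavaScript API loaded
// in the page). keyToGesture maps the keys FAAST emits to gestures,
// e.g. { a: "lean_left", d: "lean_right" }.
function bindGestures(panorama, keyToGesture) {
  window.addEventListener("keydown", (e) => {
    const gesture = keyToGesture[e.key];
    if (!gesture) return;
    panorama.setPov(applyGesture(panorama.getPov(), gesture));
  });
}
```

In a page hosting a `google.maps.StreetViewPanorama` of the campus imagery, a call such as `bindGestures(panorama, { a: "lean_left", d: "lean_right", w: "look_up", s: "look_down" })` would let the FAAST-emulated keystrokes steer the view.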

References

  • Akcay, O., Erenoglu, R. C., & Avsar, E. O. (2017). The Effect of JPEG Compression in Close Range Photogrammetry. International Journal of Engineering and Geosciences, 2(1), 35-40. doi: 10.26833/ijeg.287308
  • Amiri Parian, J., & Gruen, A. (2010). Sensor modeling, self-calibration and accuracy testing of panoramic cameras and laser scanners. ISPRS Journal of Photogrammetry and Remote Sensing, 65(1), 60-76. doi: 10.1016/j.isprsjprs.2009.08.005
  • Anguelov, D., Dulong, C., Filip, D., Frueh, C., Lafon, S., Lyon, R., . . . Weaver, J. (2010). Google Street View: Capturing the World at Street Level. Computer, 43(6), 32-38. doi: 10.1109/MC.2010.170
  • Bakirman, T., Gumusay, M. U., Reis, H. C., Selbesoglu, M. O., Yosmaoglu, S., Yaras, M. C., . . . Bayram, B. (2017). Comparison of low cost 3D structured light scanners for face modeling. Applied Optics, 56(4), 985-992. doi: 10.1364/AO.56.000985
  • Chapman, D., & Deacon, A. (1998). Panoramic imaging and virtual reality — filling the gaps between the lines. ISPRS Journal of Photogrammetry and Remote Sensing, 53(6), 311-319. doi: 10.1016/S0924-2716(98)00016-1
  • Curtis, J. W., Curtis, A., Mapes, J., Szell, A. B., & Cinderich, A. (2013). Using google street view for systematic observation of the built environment: analysis of spatio-temporal instability of imagery dates. International Journal of Health Geographics, 12(1), 53. doi: 10.1186/1476-072X-12-53
  • Fangi, G., & Nardinocchi, C. (2013). Photogrammetric Processing of Spherical Panoramas. The Photogrammetric Record, 28(143), 293-311. doi: 10.1111/phor.12031
  • Freedman, B., Shpunt, A., Machline, M., & Arieli, Y. (2013). Depth mapping using projected patterns: Google Patents.
  • Google. (2019). Where we’ve been. Retrieved 11.06.2019, from https://www.google.com/streetview/explore/
  • Gutierrez, M., Vexo, F., & Thalmann, D. (2008). Stepping into virtual reality: Springer Science & Business Media.
  • Hanson, C. S., Noland, R. B., & Brown, C. (2013). The severity of pedestrian crashes: an analysis using Google Street View imagery. Journal of Transport Geography, 33, 42-53. doi: 10.1016/j.jtrangeo.2013.09.002
  • Isikdag, U. (2020). An IoT Architecture for Facilitating Integration of GeoInformation. International Journal of Engineering and Geosciences, 5(1), 15-25. doi: 10.26833/ijeg.587023
  • Izadi, S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe, R., Kohli, P., . . . Fitzgibbon, A. (2011). KinectFusion: real-time 3D reconstruction and interaction using a moving depth camera. Paper presented at the Proceedings of the 24th annual ACM symposium on User interface software and technology, Santa Barbara, California, USA.
  • Kelly, C. M., Wilson, J. S., Baker, E. A., Miller, D. K., & Schootman, M. (2012). Using Google Street View to Audit the Built Environment: Inter-rater Reliability Results. Annals of Behavioral Medicine, 45(suppl_1), S108-S112. doi: 10.1007/s12160-012-9419-9
  • Le Yaouanc, J.-M., Saux, É., & Claramunt, C. (2010). A semantic and language-based representation of an environmental scene. GeoInformatica, 14(3), 333-352. doi: 10.1007/s10707-010-0103-6
  • Point-Grey. (2014). Overview of the Ladybug Image Stitching Process. Retrieved 06.02.2019, from https://www.flir.eu/globalassets/support/iis/applicationnotes/tan2008010_overview_ladybug_image_stitching.pdf
  • Raheja, J. L., Chaudhary, A., & Singal, K. (2011). Tracking of Fingertips and Centers of Palm Using KINECT. Paper presented at the Proceedings of the 2011 Third International Conference on Computational Intelligence, Modelling & Simulation.
  • Roborealm. (2016). Kinect Targeting. Retrieved 01.11.2018, from http://www.roborealm.com/tutorial/FIRST/slide010.php
  • Roupé, M., Bosch-Sijtsema, P., & Johansson, M. (2014). Interactive navigation interface for Virtual Reality using the human body. Computers, Environment and Urban Systems, 43, 42-50. doi: 10.1016/j.compenvurbsys.2013.10.003
  • Rundle, A. G., Bader, M. D. M., Richards, C. A., Neckerman, K. M., & Teitler, J. O. (2011). Using Google Street View to audit neighborhood environments. American journal of preventive medicine, 40(1), 94-100. doi: 10.1016/j.amepre.2010.09.034
  • Schwarz, L. A., Mkhitaryan, A., Mateus, D., & Navab, N. (2012). Human skeleton tracking from depth data using geodesic distances and optical flow. Image and Vision Computing, 30(3), 217-226. doi: 10.1016/j.imavis.2011.12.001
  • Shotton, J., Fitzgibbon, A., Cook, M., Sharp, T., Finocchio, M., Moore, R., . . . Blake, A. (2011, 20-25 June 2011). Real-time human pose recognition in parts from single depth images. Paper presented at the CVPR 2011.
  • Suma, E. A., Krum, D. M., Lange, B., Koenig, S., Rizzo, A., & Bolas, M. (2013). Adapting user interfaces for gestural interaction with the flexible action and articulated skeleton toolkit. Computers & Graphics, 37(3), 193-201. doi: 10.1016/j.cag.2012.11.004
  • Tran, K. N., Kakadiaris, I. A., & Shah, S. K. (2012). Part-based motion descriptor image for human action recognition. Pattern Recognition, 45(7), 2562-2572. doi: 10.1016/j.patcog.2011.12.028
  • Xia, L., Chen, C., & Aggarwal, J. K. (2011, 20-25 June 2011). Human detection using depth information by Kinect. Paper presented at the CVPR 2011 WORKSHOPS.
  • Yemenicioglu, C., Kaya, S., & Seker, D. Z. (2016). Accuracy of 3D (Three-Dimensional) Terrain Models in Simulations. International Journal of Engineering and Geosciences, 1(1), 34-38. doi: 10.26833/ijeg.285223
  • Yue, H., Chen, W., Wu, X., & Liu, J. (2014). Fast 3D modeling in complex environments using a single Kinect sensor. Optics and Lasers in Engineering, 53, 104-111. doi: 10.1016/j.optlaseng.2013.08.009
  • Zhou, H., Wang, L., & Suter, D. (2009). Human action recognition by feature-reduced Gaussian process classification. Pattern Recognition Letters, 30(12), 1059-1066. doi: 10.1016/j.patrec.2009.03.013
There are 28 references in total.

Details

Primary Language English
Section Articles
Authors

Tolga Bakirman 0000-0001-7828-9666

Mustafa Umit Gumusay 0000-0001-6464-919X

Publication Date June 1, 2020
Published in Issue Year 2020, Volume 5, Issue 2

Cite

APA Bakirman, T., & Gumusay, M. U. (2020). Integration of custom street view and low-cost motion sensors. International Journal of Engineering and Geosciences, 5(2), 66-72. https://doi.org/10.26833/ijeg.589489
AMA Bakirman T, Gumusay MU. Integration of custom street view and low-cost motion sensors. IJEG. June 2020;5(2):66-72. doi:10.26833/ijeg.589489
Chicago Bakirman, Tolga, and Mustafa Umit Gumusay. "Integration of Custom Street View and Low-Cost Motion Sensors". International Journal of Engineering and Geosciences 5, no. 2 (June 2020): 66-72. https://doi.org/10.26833/ijeg.589489.
EndNote Bakirman T, Gumusay MU (01 June 2020) Integration of custom street view and low-cost motion sensors. International Journal of Engineering and Geosciences 5 2 66–72.
IEEE T. Bakirman and M. U. Gumusay, "Integration of custom street view and low-cost motion sensors", IJEG, vol. 5, no. 2, pp. 66–72, 2020, doi: 10.26833/ijeg.589489.
ISNAD Bakirman, Tolga - Gumusay, Mustafa Umit. "Integration of Custom Street View and Low-Cost Motion Sensors". International Journal of Engineering and Geosciences 5/2 (June 2020), 66-72. https://doi.org/10.26833/ijeg.589489.
JAMA Bakirman T, Gumusay MU. Integration of custom street view and low-cost motion sensors. IJEG. 2020;5:66–72.
MLA Bakirman, Tolga, and Mustafa Umit Gumusay. "Integration of Custom Street View and Low-Cost Motion Sensors". International Journal of Engineering and Geosciences, vol. 5, no. 2, 2020, pp. 66-72, doi:10.26833/ijeg.589489.
Vancouver Bakirman T, Gumusay MU. Integration of custom street view and low-cost motion sensors. IJEG. 2020;5(2):66-72.