Research Article
Image reconstruction of static scenes from a static event camera using an LCD projector

Year 2025, Volume: 5 Issue: 2, 783 - 795, 31.07.2025
https://doi.org/10.61112/jiens.1626247

Abstract

Event cameras are promising sensors that offer many advantages over frame-based cameras. Unlike conventional cameras, whose pixels share a common exposure time, event-based cameras represent a novel bio-inspired technology capable of capturing scenes with a high dynamic range and without motion blur. Due to their working principle, an event is generated only when a pixel's brightness changes; therefore, no event data is generated when there is no relative motion between the event camera and the scene. In this study, we present a new method that enables event generation with a static event camera observing a scene with static objects, aiming to eliminate the requirement of relative motion between the event camera and the scene. By projecting custom-designed grayscale pattern sequences onto static scenes, we successfully triggered controlled event generation without requiring camera or object motion. Instead of a direct black-to-white transition, we used a sequence of contrast-compatible grayscale projection patterns to regulate event rates and prevent bandwidth overload. To prevent event loss over time, we equalized all timestamps to the first timestamp. Since events in event cameras gradually lose their impact and are reset over time, this adjustment prevents the decay of event information and ensures continuous and stable event generation. Despite the absence of motion, we achieved reasonable results on image quality metrics such as MSE, LPIPS, and SSIM. With this method, we aim to expand the application areas of event cameras and make significant progress in data collection processes, especially for static cameras and scenes.
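As an illustration only (this is not the authors' implementation), the short Python sketch below shows the two ideas summarized in the abstract: stepping the projected pattern through intermediate gray levels instead of a direct black-to-white jump, and equalizing all event timestamps to the first timestamp before accumulation. The projector resolution, the number of gray steps, and the (x, y, timestamp, polarity) event layout are assumptions made for the example.

# Hedged sketch (not from the paper): NumPy-only illustration of the
# projection-pattern and timestamp-equalization steps described above.
import numpy as np

def grayscale_pattern_sequence(width=1280, height=720, steps=8):
    """Uniform gray frames ramping from black to white in small contrast
    steps, so the projector raises the event rate gradually instead of
    with a single black-to-white jump. Resolution and step count are
    illustrative assumptions."""
    levels = np.linspace(0, 255, steps, dtype=np.uint8)
    return [np.full((height, width), level, dtype=np.uint8) for level in levels]

def equalize_timestamps(events):
    """Set every event's timestamp to the first timestamp so that earlier
    events do not decay relative to later ones during accumulation.
    `events` is assumed to be a structured array with a 'timestamp' field."""
    events = events.copy()
    events['timestamp'] = events['timestamp'][0]
    return events

# Toy usage with synthetic events (microsecond timestamps).
patterns = grayscale_pattern_sequence(steps=8)
print(len(patterns), patterns[0].shape)

event_dtype = [('x', np.uint16), ('y', np.uint16),
               ('timestamp', np.int64), ('polarity', np.int8)]
events = np.array([(10, 20, 1_000_000, 1),
                   (11, 20, 1_000_500, 1),
                   (12, 21, 1_001_000, -1)], dtype=event_dtype)
print(equalize_timestamps(events)['timestamp'])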

Ethical Statement

All event data used in this study were collected by the authors using their own experimental setup involving a static scene and an LCD projector. No human subjects, personal data, or third-party datasets were involved. Therefore, ethical approval was not required for this study.

References

  • Scheerlinck C, Rebecq H, Gehrig D, Barnes N, Mahony RE, Scaramuzza D (2020) Fast image reconstruction with an event camera. IEEE Winter Conf Appl Comput Vis (WACV) 2020:156–163. https://doi.org/10.1109/WACV45572.2020.9093366
  • Alonso I, Murillo AC (2019) EV-SegNet: Semantic segmentation for event-based cameras. IEEE/CVF Conf Comput Vis Pattern Recognit Work (CVPRW) 2019:1624–1633. https://doi.org/10.1109/CVPRW.2019.00205
  • Salah M, Ayyad A, Humais M, Gehrig D et al (2024) E-Calib: A fast, robust, and accurate calibration toolbox for event cameras. IEEE Trans Image Process 33:3977–3990. https://doi.org/10.1109/TIP.2024.3410673
  • Sun R, Shi D, Zhang Y, Li R, Li R (2021) Data-driven technology in event-based vision. Complexity 2021:1–19. https://doi.org/10.1155/2021/6689337
  • Zhu AZ, Thakur D, Ozaslan T, Pfrommer B, Kumar V, Daniilidis K (2018) The multivehicle stereo event camera dataset: An event camera dataset for 3D perception. IEEE Robot Autom Lett 3(3):2032–2039. https://doi.org/10.1109/LRA.2018.2800793
  • Stoffregen T, Gallego G, Drummond T, Kleeman L, Scaramuzza D (2019) Event-based motion segmentation by motion compensation. IEEE/CVF Int Conf Comput Vis (ICCV) 2019:7243–7252. https://doi.org/10.1109/ICCV.2019.00734
  • Rebecq H, Gehrig D, Scaramuzza D (2018) ESIM: An open event camera simulator. Conf Robot Learning (CoRL) 87:969–982
  • Pan L, Hartley R, Scheerlinck C, Liu M, Yu X, Dai Y (2022) High frame rate video reconstruction based on an event camera. IEEE Trans Pattern Anal Mach Intell 44(5):2519–2533. https://doi.org/10.1109/TPAMI.2020.3036667
  • Gallego G, Delbruck T, Orchard G, Bartolozzi C et al (2022) Event-based vision: A survey. IEEE Trans Pattern Anal Mach Intell 44(1):154–180. https://doi.org/10.1109/TPAMI.2020.3008413
  • Scheerlinck C, Rebecq H, Stoffregen T, Barnes N et al (2019) CED: Color event camera dataset. Conf Comput Vis Pattern Recognit Work (CVPRW) 2019:1684–1693. https://doi.org/10.1109/CVPRW.2019.00215
  • Adra M, Dugelay J-L (2024) TIME-E2V: Overcoming limitations of E2VID. IEEE Int Conf Adv Video Signal Based Surveillance (AVSS) 2024:1–7. https://doi.org/10.1109/AVSS61716.2024.10672575
  • Ronneberger O, Fischer P, Brox T (2015) U-Net: Convolutional networks for biomedical image segmentation. CoRR abs/1505.04597. http://arxiv.org/abs/1505.04597
  • Zhu L, Wang X, Chang Y, Li J, Huang T, Tian Y (2022) Event-based video reconstruction via potential-assisted spiking neural network. Proc IEEE/CVF Conf Comput Vis Pattern Recognit (CVPR) 2022:3594–3604.
  • Mei H, Wang Z, Yang X, Wei X, Delbruck T (2023) Deep polarization reconstruction with PDAVIS events. Proc IEEE/CVF Conf Comput Vis Pattern Recognit (CVPR) 2023:22149–22158.
  • Inivation (2025) Enhanced noise filtering from Inivation’s DV platform. https://inivation.com/enhanced-noise-filtering-from-inivations-dv-platform/. Accessed 30 January 2025
  • Pakdelazar O, Rezai Rad G (2011) Improvement of BM3D algorithm and employment to satellite and CFA images denoising. Int J Inf Sci Tech 1(3):23–33. https://doi.org/10.5121/ijist.2011.1303
  • Buades A, Coll B, Morel J (2011) Non-local means denoising pixelwise implementation. Image Process On Line 1:208–212.
  • Shalini K, Prasad LVN (2014) Wavelet based soft thresholding approach for color image denoising. Int J Adv Eng Technol 7(4):1233–1237.
  • Cao R, Galor D, Kohli A, Yates JL, Waller L (2024) Noise2Image: Noise-enabled static scene recovery for event cameras. Optica. https://doi.org/10.1364/OPTICA.538916
  • Horé A, Ziou D (2010) Image quality metrics: PSNR vs. SSIM. 20th Int Conf Pattern Recognit (ICPR) 2010:2366–2369. https://doi.org/10.1109/ICPR.2010.579
  • Wang Z, Bovik AC, Sheikh HR, Simoncelli EP (2004) Image quality assessment: From error visibility to structural similarity. IEEE Trans Image Process 13(4):600–612. https://doi.org/10.1109/TIP.2003.819861
  • Sara U, Akter M, Uddin M (2019) Image quality assessment through FSIM, SSIM, MSE and PSNR: a comparative study. J Comput Commun 7(3):8–18. https://doi.org/10.4236/jcc.2019.73002
  • Wang Z, Bovik AC (2009) Mean squared error: Love it or leave it? A new look at signal fidelity measures. IEEE Signal Process Mag 26(1):98–117. https://doi.org/10.1109/MSP.2008.930649
  • Zhang R, Isola P, Efros AA, Shechtman E, Wang O (2018) The unreasonable effectiveness of deep features as a perceptual metric. IEEE/CVF Conf Comput Vis Pattern Recognit (CVPR) 2018:586–595. https://doi.org/10.1109/CVPR.2018.00068
  • Event Accumulation (2025) https://dv-processing.inivation.com/rel_1_7/accumulators.html. Accessed 29 April 2025.

Details

Primary Language English
Subjects Computer Vision, Image Processing
Journal Section Research Article
Authors

Beyza Eraslan 0000-0003-3190-5198

Gökhan Koray Gültekin 0000-0003-2895-7042

Submission Date February 13, 2025
Acceptance Date July 5, 2025
Publication Date July 31, 2025
Published in Issue Year 2025 Volume: 5 Issue: 2

Cite

APA Eraslan, B., & Gültekin, G. K. (2025). Image reconstruction of static scenes from a static event camera using an LCD projector. Journal of Innovative Engineering and Natural Science, 5(2), 783-795. https://doi.org/10.61112/jiens.1626247


Journal of Innovative Engineering and Natural Science by İdris Karagöz is licensed under CC BY 4.0