Research Article

Person Re-Identification in Surveillance Videos using Deep Learning based Body Part Partition and Gaussian Filtering

November 30, 2020
Abstract

In this paper, we focus on Person Re-Identification (Re-ID), the task of searching for a person who has previously been observed over a camera network. Person Re-ID is important for locating suspicious or missing persons when sample images of the person of interest are available. Despite extensive research on vision-based person re-identification, it remains a challenging problem. We propose a person re-identification system that combines deep learning based human body part segmentation with Gaussian filtering based smooth mask generation. A semantic partition technique segments human body parts and generates local binary masks. These masks are deterministic binary images with strict boundaries, and such hard masks discard useful features near part borders. We therefore apply a Gaussian filter to smooth the masks so that features near the boundaries are also taken into account, albeit with reduced weight. In contrast to other methods, which apply masks at the beginning or in the middle of the deep network, our smooth masks are applied to the final feature maps generated at the end of the network. Our work thus differs from previous approaches in two ways: semantic partition masks are applied at the end of the network, and the masks are smoothed with a Gaussian filter to compensate for errors made during the partitioning stage. We use the well-known pre-trained ResNet-50 network to extract global features, and a method called Cross-Domain Complementary Learning for human body partitioning. Applying Gaussian-filtered smooth local masks to the global features extracted at the end of the ResNet-50 network improves the performance of the person re-identification system. Evaluation is conducted on the widely used Market-1501 dataset, and the results are promising.
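The core operation described in the abstract, softening a hard binary body-part mask with a Gaussian filter and applying it to a final feature map, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the sigma value, the pooling step, and all function names are hypothetical assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_mask(binary_mask, sigma=2.0):
    """Soften a hard 0/1 body-part mask so features near part
    boundaries still contribute slightly (sigma is a hypothetical choice)."""
    return gaussian_filter(binary_mask.astype(np.float32), sigma=sigma)

def masked_part_features(feature_map, binary_mask, sigma=2.0):
    """Apply a Gaussian-smoothed local mask to a final feature map
    (C x H x W) and pool it into a single part descriptor.
    Assumes the mask has already been resized to H x W."""
    soft = smooth_mask(binary_mask, sigma)       # H x W, values in [0, 1]
    weighted = feature_map * soft[None, :, :]    # broadcast over channels
    # Weighted average pooling -> C-dimensional part feature vector
    return weighted.sum(axis=(1, 2)) / (soft.sum() + 1e-6)

# Toy example: 4-channel 8x8 feature map with a rectangular "torso" mask
fmap = np.random.rand(4, 8, 8).astype(np.float32)
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:6] = 1
feat = masked_part_features(fmap, mask)
print(feat.shape)  # (4,)
```

In a full Re-ID pipeline one such descriptor would be computed per body part and concatenated with the global ResNet-50 feature before matching; that step is omitted here.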

Keywords

Supporting Institution

Middle East Technical University – Northern Cyprus Campus Scientific Research Project Fund

Project Number

Grant no: FEN-19-D-3

References

  1. Bai, X., Yang, M., Huang, T., Dou, Z., Yu, R., & Xu, Y. (2017). Deep-Person: Learning Discriminative Deep Features for Person Re-Identification. arXiv preprint arXiv:1711.10658.
  2. Cheng, D., Gong, Y., Zhou, S., Wang, J., & Zheng, N. (2016). Person Re-identification by Multi-Channel Parts-Based CNN with Improved Triplet Loss Function. IEEE Conference on Computer Vision and Pattern Recognition, 1335-1344.
  3. Cong, D. N. T., Achard, C., Khoudour, L., & Douadi, L. (2009). Video Sequences Association for People Re-Identification Across Multiple Non-Overlapping Cameras. International Conference on Image Analysis and Processing, 179-189.
  4. Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. IEEE Conference on Computer Vision and Pattern Recognition, 248-255.
  5. Ding, S., Lin, L., Wang, G., & Chao, H. (2015). Deep Feature Learning with Relative Distance Comparison for Person Re-Identification. Pattern Recognition, 48(10), 2993-3003.
  6. Farenzena, M., Bazzani, L., Perina, A., Murino, V., & Cristani, M. (2010). Person Re-Identification by Symmetry-Driven Accumulation of Local Features. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2360-2367.
  7. Gray, D., Brennan, S., & Tao, H. (2007). Evaluating appearance models for recognition, reacquisition, and tracking. IEEE International Workshop on Performance Evaluation for Tracking and Surveillance (PETS), 1-7.

Details

Primary Language

English

Subjects

Engineering

Journal Section

Research Article

Publication Date

November 30, 2020

Submission Date

November 9, 2020

Acceptance Date

November 9, 2020

Published in Issue

Year 2020

APA
Aksu, F., & Direkoğlu, C. (2020). Person Re-Identification in Surveillance Videos using Deep Learning based Body Part Partition and Gaussian Filtering. Avrupa Bilim Ve Teknoloji Dergisi, 291-296. https://doi.org/10.31590/ejosat.823257
AMA
1. Aksu F, Direkoğlu C. Person Re-Identification in Surveillance Videos using Deep Learning based Body Part Partition and Gaussian Filtering. EJOSAT. Published online November 1, 2020:291-296. doi:10.31590/ejosat.823257
Chicago
Aksu, Fatih, and Cem Direkoğlu. 2020. “Person Re-Identification in Surveillance Videos Using Deep Learning Based Body Part Partition and Gaussian Filtering”. Avrupa Bilim Ve Teknoloji Dergisi, November 1, 291-96. https://doi.org/10.31590/ejosat.823257.
EndNote
Aksu F, Direkoğlu C (November 1, 2020) Person Re-Identification in Surveillance Videos using Deep Learning based Body Part Partition and Gaussian Filtering. Avrupa Bilim ve Teknoloji Dergisi 291–296.
IEEE
[1]F. Aksu and C. Direkoğlu, “Person Re-Identification in Surveillance Videos using Deep Learning based Body Part Partition and Gaussian Filtering”, EJOSAT, pp. 291–296, Nov. 2020, doi: 10.31590/ejosat.823257.
ISNAD
Aksu, Fatih - Direkoğlu, Cem. “Person Re-Identification in Surveillance Videos Using Deep Learning Based Body Part Partition and Gaussian Filtering”. Avrupa Bilim ve Teknoloji Dergisi. November 1, 2020. 291-296. https://doi.org/10.31590/ejosat.823257.
JAMA
1. Aksu F, Direkoğlu C. Person Re-Identification in Surveillance Videos using Deep Learning based Body Part Partition and Gaussian Filtering. EJOSAT. 2020:291-296.
MLA
Aksu, Fatih, and Cem Direkoğlu. “Person Re-Identification in Surveillance Videos Using Deep Learning Based Body Part Partition and Gaussian Filtering”. Avrupa Bilim Ve Teknoloji Dergisi, Nov. 2020, pp. 291-6, doi:10.31590/ejosat.823257.
Vancouver
1. Fatih Aksu, Cem Direkoğlu. Person Re-Identification in Surveillance Videos using Deep Learning based Body Part Partition and Gaussian Filtering. EJOSAT. 2020 Nov. 1;291-6. doi:10.31590/ejosat.823257