Research Article

Driver Pattern Recognition Using ML Quantization and Tabular Cloning Approach

Year 2025, Volume: 9 Issue: 4, 461 - 474, 31.12.2025
https://doi.org/10.30939/ijastech..1662297

Abstract

Driver behavior detection is crucial in the transportation and automotive sectors for identifying speeding, aggressive, and distracted driving; improving safety; predicting hazards; and providing real-time feedback, thereby enhancing the efficiency and security of transportation networks. Machine learning, a subset of artificial intelligence, uses algorithms and statistical models to analyze large datasets and identify patterns that may be difficult for humans to detect. Driver behavior detection employs machine-learning techniques such as supervised, unsupervised, and reinforcement learning. Relatedly, quantization in machine learning is known to improve the efficiency, speed, and performance of a trained model, making it beneficial for edge computing and IoT devices and enabling compact deployments and practical applications. This research focuses on precomputing a machine-learning model's response for all combinations of input attributes and storing the results in a designated memory, so that inference becomes a table lookup rather than a call to the trained model. The efficacy of this strategy depends on the quantization of the input features, which requires selecting a bit width that fits within the available memory capacity. The author tested the strategy on two machine-learning models, K-Nearest Neighbors (KNN) and Random Forest (RF), using quantization bit widths from 2 to 8, yielding accuracies that vary relative to the original models' results. The RF model achieved 67% and 93% accuracy with 5 and 7 quantization bits, respectively, while the KNN model achieved 62% and 95% accuracy with 3 and 5 quantization bits. By quantizing the input features and enumerating all possible quantized input values along with their corresponding outputs, this method shortens inference time and makes it practical to deploy the machine-learning model on edge or resource-constrained computers.
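The sketch below is not the article's code; it is a minimal illustration of the general idea described in the abstract: quantize each input feature to a small number of bits, run the trained model once for every quantized code word, and store the results so that later inference is a table lookup. The feature count, bit width, synthetic data, and the scikit-learn RandomForestClassifier used here are illustrative assumptions.

```python
# Illustrative sketch (assumed setup, not the paper's implementation):
# precompute a trained model's predictions over all quantized inputs.
import itertools
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_FEATURES, N_BITS = 3, 3          # table size = 2**(N_FEATURES * N_BITS) = 512
rng = np.random.default_rng(0)

# Toy data standing in for driver-behavior features and labels.
X = rng.uniform(0.0, 1.0, size=(1000, N_FEATURES))
y = (X.sum(axis=1) > 1.5).astype(int)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Uniform min-max quantization of each feature to N_BITS.
lo, hi = X.min(axis=0), X.max(axis=0)
levels = 2 ** N_BITS

def quantize(x):
    q = np.floor((x - lo) / (hi - lo) * (levels - 1) + 0.5)
    return np.clip(q, 0, levels - 1).astype(int)

def dequantize(q):
    return lo + q / (levels - 1) * (hi - lo)

# Precompute: evaluate the model once per quantized code word.
codes = np.array(list(itertools.product(range(levels), repeat=N_FEATURES)))
preds = model.predict(dequantize(codes))
table = {tuple(int(v) for v in c): int(p) for c, p in zip(codes, preds)}

# Inference is now a dictionary lookup instead of a model call.
x_new = rng.uniform(0.0, 1.0, size=N_FEATURES)
pred = table[tuple(int(v) for v in quantize(x_new))]
print(pred)
```

In a deployment along these lines, the dictionary would typically be flattened into a dense array indexed directly by the concatenated feature bits, which is why the memory budget constrains how many quantization bits per feature are feasible.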

Supporting Institution

None

Project Number

None

Thanks

Thanks


Details

Primary Language English
Subjects Automotive Engineering (Other)
Journal Section Research Article
Authors

Mohamed Hashim Al-meer 0000-0001-7595-8852

Project Number None
Submission Date March 20, 2025
Acceptance Date October 13, 2025
Early Pub Date December 16, 2025
Publication Date December 31, 2025
Published in Issue Year 2025 Volume: 9 Issue: 4

Cite

Vancouver Al-meer MH. Driver Pattern Recognition Using ML Quantization and Tabular Cloning Approach. IJASTECH. 2025;9(4):461-74.


International Journal of Automotive Science and Technology (IJASTECH) is published by the Society of Automotive Engineers Turkey.
