Privacy-preserving collaborative filtering (PPCF) refers to methods that recommend items to users according to their interests or behaviors while safeguarding user privacy and data confidentiality. PPCF is especially important when sensitive user data, such as purchase history, browsing activity, or personal preferences, is involved. Privacy-preserving model-based recommendation methods are preferred over privacy-preserving memory-based schemes because of their online efficiency. Model-based CF and PPCF schemes have been studied extensively in the literature, and model-based prediction algorithms without privacy concerns have been examined thoroughly with respect to shilling attacks. However, only a few studies measure the robustness of model-based PPCF schemes against shilling attacks, even though PPCF schemes can be affected by such attacks just as non-private model-based algorithms can. In this study, we investigate the robustness of genetic algorithm-based PPCF schemes against shilling attacks. We first apply masked data-based profile injection attacks to genetic algorithm-based PPCF prediction algorithms, then perform extensive experiments on real data to evaluate their robustness against these attacks, and finally compare the results with other model-based methods studied in the literature. Our empirical analyses show that the model-based scheme with privacy is highly robust against shilling attacks.
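The abstract does not detail the masking scheme, the attack model, or the genetic algorithm-based predictor, so the following is only a minimal sketch of the general setup it describes: shill profiles are injected into a masked (perturbed) rating matrix and the resulting prediction shift on a pushed item is measured. The functions `mask_ratings`, `random_attack_profiles`, and `predict_item_mean`, and all parameter values, are illustrative assumptions; a simple item-mean predictor stands in for the paper's genetic algorithm-based PPCF model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy rating matrix (users x items), ratings on a 1-5 scale; 0 = unrated.
n_users, n_items = 200, 50
ratings = rng.integers(1, 6, size=(n_users, n_items)).astype(float)
ratings[rng.random((n_users, n_items)) < 0.7] = 0.0  # roughly 70% sparsity

def mask_ratings(r, sigma=1.0):
    """Randomized-perturbation masking: add zero-mean Gaussian noise to rated cells
    so the server never sees true ratings (one common PPCF masking approach;
    the paper's exact masking may differ)."""
    noise = rng.normal(0.0, sigma, size=r.shape)
    return np.where(r > 0, r + noise, 0.0)

def random_attack_profiles(n_attackers, n_items, target_item, r_max=5.0):
    """Random (push) attack: filler items get ratings drawn around the global mean,
    while the target item receives the maximum rating."""
    profiles = np.clip(rng.normal(3.0, 1.0, size=(n_attackers, n_items)), 1.0, r_max)
    profiles[:, target_item] = r_max
    return profiles

def predict_item_mean(r, item):
    """Stand-in predictor: mean of the (masked) ratings observed for the item."""
    rated = r[:, item][r[:, item] > 0]
    return rated.mean() if rated.size else 0.0

target = 7
masked = mask_ratings(ratings)
before = predict_item_mean(masked, target)

# Inject masked shill profiles (attack size = 5% of the user base).
shills = mask_ratings(random_attack_profiles(10, n_items, target))
attacked = np.vstack([masked, shills])
after = predict_item_mean(attacked, target)

print(f"prediction shift on target item: {after - before:+.3f}")
```

Under this reading, robustness corresponds to a small prediction shift after injection; the paper's experiments evaluate this on real data with genetic algorithm-based PPCF predictors rather than the toy predictor used here.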
Primary Language | English
---|---
Subjects | Machine Learning Algorithms
Journal Section | Articles
Publication Date | September 25, 2025
Submission Date | April 9, 2025
Acceptance Date | July 10, 2025
Published in Issue | Year 2025, Volume: 26, Issue: 3