Research Article

Derin Öğrenmede Diferansiyel Mahremiyet (Differential Privacy in Deep Learning)

Year 2020, Volume: 6 Issue: 1, 1 - 16, 30.06.2020
https://doi.org/10.18640/ubgmd.750310

Abstract

Today, as data grows in volume and variety, personal data is shared with ease, and the number of privacy breaches rises rapidly, data privacy has become one of the most intensively studied topics for which protective measures are being developed. Every application, product, or system that uses, stores, or processes personal data must ensure and protect data privacy and demonstrate that it is applied correctly. Although many new data-privacy solutions have been developed in recent years, technological developments, advances in artificial intelligence, the practical success of deep learning approaches, their adoption in many domains, and the inherently black-box nature of their solutions have raised new privacy concerns. In this study, various privacy-preserving measures used when analyzing data containing personal information with deep learning, one of today's key artificial intelligence technologies, are examined; differential privacy, the most widely used of these measures, is explained; and its applications in deep learning, together with the associated threats, are compared. It is expected that this study will contribute to preventing potential breaches, correctly identifying the risks that may be encountered, and taking the necessary precautions more soundly in deep-learning-based applications that process personal data.
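To make the abstract's central concept concrete, the following is a minimal NumPy sketch of the Laplace mechanism that underlies ε-differential privacy as formulated by Dwork [14], [41]: a numeric query answer is released only after adding noise drawn from Laplace(0, Δf/ε), where Δf is the query's sensitivity and ε is the privacy budget. This sketch is written for this summary only; it is not code from the study or from any cited system, and the function name and parameter values are illustrative assumptions.

    import numpy as np

    def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
        # Epsilon-differentially-private release of a numeric query answer:
        # noise is drawn from Laplace(0, sensitivity / epsilon), so a smaller
        # epsilon (a stricter privacy budget) means more noise and less accuracy.
        rng = rng or np.random.default_rng()
        scale = sensitivity / epsilon
        return true_answer + rng.laplace(loc=0.0, scale=scale)

    # Example: a counting query ("how many records satisfy X?") has sensitivity 1,
    # since adding or removing one individual changes the count by at most 1.
    noisy_count = laplace_mechanism(true_answer=1000, sensitivity=1.0, epsilon=0.5)
    print(noisy_count)  # e.g. 1002.7; the exact count itself is never released

In deep learning the same principle is typically applied to the training process rather than to query answers: DP-SGD [45] clips each example's gradient and adds noise before every parameter update, so that the trained model itself carries a differential privacy guarantee.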

References

  • [1] S. De Capitani Di Vimercati, S. Foresti, G. Livraga, and P. Samarati, "Data Privacy: Definitions and Techniques," International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 20, pp. 793-817, 2012.
  • [2] P. Jain, M. Gyanchandani, and N. Khare, "Big Data Privacy: A Technological Perspective and Review," Journal of Big Data, vol. 3, p. 25, 2016.
  • [3] Y. Canbay, "Aykırı Veri Yönelimli Fayda Temelli Büyük Veri Anonimleştirme Modeli," Ph.D. Thesis, Fen Bilimleri Enstitüsü, Gazi Üniversitesi, Ankara, 2019.
  • [4] B. Fung, K. Wang, R. Chen, and P. Yu, "Privacy-Preserving Data Publishing: A Survey of Recent Developments," Computing Surveys, vol. 42, p. 14, 2010.
  • [5] B. C. Fung, K. Wang, A. W. Fu, and S. Y. Philip, Introduction to Privacy-Preserving Data Publishing: Concepts and Techniques. USA: CRC Press, 2010.
  • [6] C. C. Aggarwal and S. Y. Philip, Privacy-Preserving Data Mining: Models and Algorithms. USA: Springer Science & Business Media, 2008.
  • [7] (11.03.2020). Kişisel Verilerin Korunması Kanunu. Internet: http://www.resmigazete.gov.tr/eskiler/2016/04/20160407-8.pdf
  • [8] (12.03.2020). General Data Protection Regulation. Internet: https://eur-lex.europa.eu/eli/reg/2016/679/oj
  • [9] J. Ker, L. Wang, J. Rao, and T. Lim, "Deep learning applications in medical image analysis," IEEE Access, vol. 6, pp. 9375-9389, 2017.
  • [10] Z. Sun, Y. Wang, M. Shu, R. Liu, and H. Zhao, "Differential Privacy for Data and Model Publishing of Medical Data," IEEE Access, vol. 7, pp. 152103-152114, 2019.
  • [11] L. Sweeney, "k-Anonymity: A Model for Protecting Privacy," International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 10, pp. 557-570, 2002.
  • [12] A. Machanavajjhala, J. Gehrke, D. Kifer, and M. Venkitasubramaniam, "l-Diversity: Privacy Beyond k-Anonymity," International Conference on Data Engineering, Atlanta, USA, 2006.
  • [13] N. Li, T. Li, and S. Venkatasubramanian, "t-Closeness: Privacy Beyond k-Anonymity and l-Diversity," IEEE International Conference on Data Engineering, Istanbul, Turkey, 2007, pp. 106-115.
  • [14] C. Dwork, "Differential Privacy," International Colloquium on Automata, Languages and Programming, Berlin, Heidelberg, 2006, pp. 1-12.
  • [15] L. Sweeney. (19.02.2019). Simple Demographics Often Identify People Uniquely. Internet: https://dataprivacylab.org
  • [16] R. Motwani and S. Nabar, "Anonymizing Unstructured Data," arXiv:0810.5582, 2008.
  • [17] A. Meyerson and R. Williams, "On the Complexity of Optimal k-Anonymity," ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, Paris, France, 2004, pp. 223-228.
  • [18] C. Aggarwal, "On k-Anonymity and the Curse of Dimensionality," International Conference on Very Large Data Bases, Trondheim, Norway, 2005, pp. 901-909.
  • [19] F. Mirshghallah, M. Taram, P. Vepakomma, A. Singh, R. Raskar, and H. Esmaeilzadeh, "Privacy in Deep Learning: A Survey," arXiv:2004.12254, 2020.
  • [20] B. Chen, K. LeFevre, and R. Ramakrishnan, "Privacy Skyline: Privacy with Multidimensional Adversarial Knowledge," International Conference on Very Large Data Bases, Vienna, Austria, 2007, pp. 770-781.
  • [21] Y. Canbay, Y. Vural, and Ş. Sağıroğlu, "OAN: aykırı kayıt yönelimli fayda temelli mahremiyet koruma modeli," Journal of the Faculty of Engineering & Architecture of Gazi University, vol. 35, 2020.
  • [22] R. C. Wong, A. W. Fu, K. Wang, and J. Pei, "Minimality Attack In Privacy Preserving Data Publishing," International Conference on Very Large Data Bases, Vienna, Austria, 2007, pp. 543-554.
  • [23] G. Duncan and D. Lambert, "The risk of disclosure for microdata," Journal of Business & Economic Statistics, vol. 7, pp. 207-217, 1989.
  • [24] C. Skinner and D. J. Holmes, "Estimating the Re-Identification Risk per Record in Microdata," Journal of Official Statistics, vol. 14, pp. 361-372, 1998.
  • [25] F. K. Dankar, K. El Emam, A. Neisa, and T. Roffey, "Estimating the Re-Identification Risk of Clinical Data Sets," BMC Medical Informatics and Decision Making, vol. 12, p. 66, 2012.
  • [26] W. Winkler, "Masking and Re-Identification Methods for Public-Use Microdata: Overview and Research Problems," in International Workshop on Privacy in Statistical Databases, Barcelona, Spain, 2004, pp. 231-246.
  • [27] J. Domingo-Ferrer and V. Torra, "A Critique of k-Anonymity and Some of Its Enhancements," International Conference on Availability, Reliability and Security, Barcelona, Spain, 2008, pp. 990-993.
  • [28] X. Sun, L. Sun, and H. Wang, "Extended k-Anonymity Models Against Sensitive Attribute Disclosure," Computer Communications, vol. 34, pp. 526-535, 2011.
  • [29] M. E. Nergiz, M. Atzori, and C. Clifton, "Hiding the Presence of Individuals from Shared Databases," ACM SIGMOD International Conference on Management of Data, Beijing, China, 2007, pp. 665-676.
  • [30] L. Sweeney, "Computational Disclosure Control: A Primer on Data Privacy Protection," Ph.D. Thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, USA, 2001.
  • [31] X. Zhang, L. T. Yang, C. Liu, and J. Chen, "A Scalable Two-Phase Top-Down Specialization Approach for Data Anonymization Using MapReduce on Cloud," IEEE Transactions on Parallel and Distributed Systems, vol. 25, pp. 363-373, 2014.
  • [32] B. Kenig and T. Tassa, "A practical approximation algorithm for optimal k-anonymity," Data Mining and Knowledge Discovery, vol. 25, pp. 134-168, 2012.
  • [33] G. Aggarwal, T. Feder, K. Kenthapadi, R. Motwani, R. Panigrahy, D. Thomas, et al., "Approximation Algorithms for k-Anonymity," Journal of Privacy Technology, pp. 1-18, 2005.
  • [34] G. Aggarwal, T. Feder, K. Kenthapadi, R. Motwani, R. Panigrahy, D. Thomas, et al., "Anonymizing Tables," in International Conference on Database Theory, Edinburgh, UK, 2005, pp. 246-258.
  • [35] N. Li, T. Li, and S. Venkatasubramanian, "Closeness: A New Privacy Measure for Data Publishing," IEEE Transactions on Knowledge and Data Engineering, vol. 22, pp. 943-956, 2010.
  • [36] A. Gkoulalas-Divanis and G. Loukides, Medical Data Privacy Handbook. Switzerland: Springer, 2015.
  • [37] T. Zhu, G. Li, W. Zhou, and S. Y. Philip, "Differentially private data publishing and analysis: A survey," IEEE Transactions on Knowledge and Data Engineering, vol. 29, pp. 1619-1638, 2017.
  • [38] A. Beimel, K. Nissim, and U. Stemmer, "Private learning and sanitization: Pure vs. approximate differential privacy," Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, Springer, 2013, pp. 363-378.
  • [39] A. Alnemari, C. J. Romanowski, and R. K. Raj, "An adaptive differential privacy algorithm for range queries over healthcare data," 2017 IEEE International Conference on Healthcare Informatics, 2017, pp. 397-402.
  • [40] C. Dwork, K. Kenthapadi, F. McSherry, I. Mironov, and M. Naor, "Our data, ourselves: Privacy via distributed noise generation," Annual International Conference on the Theory and Applications of Cryptographic Techniques, 2006, pp. 486-503.
  • [41] C. Dwork and A. Roth, "The algorithmic foundations of differential privacy," Foundations and Trends in Theoretical Computer Science, vol. 9, pp. 211-407, 2014.
  • [42] P. C. M. Arachchige, P. Bertok, I. Khalil, D. Liu, S. Camtepe, and M. Atiquzzaman, "Local Differential Privacy for Deep Learning," IEEE Internet of Things Journal, 2019.
  • [43] Y. Sei, H. Okumura, and A. Ohsuga, "Privacy-Preserving Publication of Deep Neural Networks," IEEE 18th International Conference on High Performance Computing and Communications; IEEE 14th International Conference on Smart City; IEEE 2nd International Conference on Data Science and Systems, 2016, pp. 1418-1425.
  • [44] N. Papernot, M. Abadi, U. Erlingsson, I. Goodfellow, and K. Talwar, "Semi-supervised knowledge transfer for deep learning from private training data," arXiv:1610.05755, 2016.
  • [45] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, et al., "Deep learning with differential privacy," ACM SIGSAC Conference on Computer and Communications Security, 2016, pp. 308-318.
  • [46] M. Gong, J. Feng, and Y. Xie, "Privacy-enhanced multi-party deep learning," Neural Networks, vol. 121, pp. 484-496, 2020.
  • [47] J. Zhao, Y. Chen, and W. Zhang, "Differential privacy preservation in deep learning: Challenges, opportunities and solutions," IEEE Access, vol. 7, pp. 48901-48911, 2019.
  • [48] Y. Yan, Q. Pei, and H. Li, "Privacy-Preserving Compressive Model for Enhanced Deep-Learning-Based Service Provision System in Edge Computing," IEEE Access, vol. 7, pp. 92921-92937, 2019.
  • [49] M. Hao, H. Li, G. Xu, S. Liu, and H. Yang, "Towards Efficient and Privacy-Preserving Federated Deep Learning," IEEE International Conference on Communications, 2019, pp. 1-6.
  • [50] L. Zhao, Q. Wang, Q. Zou, Y. Zhang, and Y. Chen, "Privacy-preserving collaborative deep learning with unreliable participants," IEEE Transactions on Information Forensics and Security, vol. 15, pp. 1486-1500, 2019.
  • [51] M. A. Rahman, T. Rahman, R. Laganière, N. Mohammed, and Y. Wang, "Membership Inference Attack against Differentially Private Deep Learning Model," Transactions on Data Privacy, vol. 11, pp. 61-79, 2018.
  • [52] C. Xu, J. Ren, D. Zhang, Y. Zhang, Z. Qin, and K. Ren, "GANobfuscator: Mitigating information leakage under GAN via differential privacy," IEEE Transactions on Information Forensics and Security, vol. 14, pp. 2358-2371, 2019.
  • [53] N. C. Abay, Y. Zhou, M. Kantarcioglu, B. Thuraisingham, and L. Sweeney, "Privacy preserving synthetic data release using deep learning," Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2018, pp. 510-526.
  • [54] N. Phan, X. Wu, H. Hu, and D. Dou, "Adaptive Laplace mechanism: Differential privacy preservation in deep learning," IEEE International Conference on Data Mining (ICDM), 2017, pp. 385-394.
  • [55] L. Yu, L. Liu, C. Pu, M. E. Gursoy, and S. Truex, "Differentially private model publishing for deep learning," IEEE Symposium on Security and Privacy, 2019, pp. 332-349.
  • [56] Y. Liu, J. Peng, J. J. Yu, and Y. Wu, "PPGAN: Privacy-preserving generative adversarial network," arXiv:1910.02007, 2019.
  • [57] X. Huang, J. Guan, B. Zhang, S. Qi, X. Wang, and Q. Liao, "Differentially Private Convolutional Neural Networks with Adaptive Gradient Descent," IEEE Fourth International Conference on Data Science in Cyberspace, 2019, pp. 642-648.
  • [58] E. U. Soykan, Z. Bilgin, M. A. Ersoy, and E. Tomur, "Differentially Private Deep Learning for Load Forecasting on Smart Grid," IEEE Globecom Workshops, 2019, pp. 1-6.
  • [59] M. Kaya and H. Ş. Bilge, "Deep metric learning: a survey," Symmetry, vol. 11, p. 1066, 2019.

Details

Primary Language Turkish
Subjects Artificial Intelligence
Journal Section Makaleler (Articles)
Authors

Yavuz Canbay

Şeref Sağıroğlu

Publication Date June 30, 2020
Submission Date June 10, 2020
Published in Issue Year 2020 Volume: 6 Issue: 1

Cite

IEEE Y. Canbay and Ş. Sağıroğlu, “Derin Öğrenmede Diferansiyel Mahremiyet”, UBGMD, vol. 6, no. 1, pp. 1–16, 2020, doi: 10.18640/ubgmd.750310.