Research Article

İnsan Haklarının Korunması ve Geliştirilmesinde Yeni Bir Sorun Alanı: Algoritmik Profilleme

2022, Volume 5, Issue 9, 83-115, 23.08.2022

Abstract

Although algorithm-based systems mostly give rise to ethical concerns, they also produce violations of the prohibition of discrimination, owing to bias patterns inherited from humans, and violations of the right to privacy, owing to monitoring in the form of data surveillance. Understanding the paradigm shift in human rights, whose symptoms are being felt in the present period, will help us resolve the problems caused by these constantly developing technologies. Built on a holistic approach, this study will argue that, while accepting that algorithm-based systems make the duty of protecting and promoting human rights more difficult, it is possible to address the ethical concerns arising from these systems and to prevent rights violations. Compared with human bias, algorithmic bias will be easier to detect and to weed out of these systems, which are initially mathematical but sociological in their results. For this reason, unqualified pessimism should be avoided, and pluralistic and comprehensive deliberation processes should be followed so that legal regulations compatible with the dynamic structures of these systems can be put into effect.

References

  • Allhutter, D., Cech, F., Fischer, F., Grill, G., & Mager, A. (2020). Algorithmic Profiling of Job Seekers in Austria: How Austerity Politics Are Made Effective. Frontiers in Big Data, 3(5), 1-17.
  • Andrew, J., & Baker, M. (2021). The General Data Protection Regulation in the Age of Surveillance Capitalism. Journal of Business Ethics, 168(3), 576.
  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine Bias. In Ethics of Data and Analytics, 254-264.
  • Arendt, H. (1959). Reflections on Little Rock. Dissent, 6(1), 51.
  • United Nations General Assembly (UNGA) (2013). The Right to Privacy in the Digital Age.
  • Büchi, M., Fosch-Villaronga, E., Lutz, C., Tamò-Larrieux, A., & Velidi, S. (2021). Making Sense of Algorithmic Profiling: User Perceptions on Facebook. Information, Communication & Society, 1-17.
  • Büchi, M., Fosch-Villaronga, E., Lutz, C., Tamò-Larrieux, A., Velidi, S., & Viljoen, S. (2020). The Chilling Effects of Algorithmic Profiling: Mapping the Issues. Computer Law & Security Review, 36, 2.
  • Baciu, C., Opre, D., & Riley, S. (2016). A New Way of Thinking in the Era of Virtual Reality and Artificial Intelligence. Educatia Journal, 21(14), 43-48.
  • Bagaric, M., Svilar, J., Bull, M., Hunter, D., & Stobbs, N. (2022). The Solution to the Pervasive Bias and Discrimination in the Criminal Justice System: Transparent and Fair Artificial Intelligence. American Criminal Law Review, 59(1), 146-147.
  • Bauman, Z. (1993). Modernity and Ambivalence (The Self-construction of Ambivalence). Polity Press.
  • Bauman, Z. (1999). Liquid Modernity. John Wiley & Sons.
  • Bauman, Z. (2000). Modernity and the Holocaust. Cornell University Press.
  • Brennan, T., Dieterich, W., & Ehret, B. (2009). Evaluating the Predictive Validity of the COMPAS Risk and Needs Assessment System. Criminal Justice and Behavior, 36(1), 21-40.
  • Burrell, J. (2016). How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms. Big Data & Society, 3(1), 1-2.
  • Carbado, D. W., & Harris, C. (2018). Intersectionality at 30: Mapping the Margins of Anti-essentialism, Intersectionality, and Dominance Theory. Harvard Law Review, 132(1), 2193.
  • Cinnamon, J. (2017). Social Injustice in Surveillance Capitalism. Surveillance and Society, 15(5), 610.
  • Citron, D. K., & Solove, D. (2021). Privacy Harms. Available at SSRN.
  • Corbett-Davies, S., & Goel, S. (2018). The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning. arXiv preprint arXiv: 1808.00023, 2.
  • Kehl, D., Guo, P., & Kessler, S. (2017). Algorithms in the Criminal Justice System: Assessing the Use of Risk Assessments in Sentencing. Berkman Klein Center for Internet & Society, Harvard Law School, 29.
  • De Vries, K. (2010). Identity, Profiling Algorithms and a World of Ambient Intelligence. Ethics and Information Technology, 12, 71-85.
  • Dieterich, W., Mendoza, C., & Brennan, T. (2016). COMPAS Risk Scales: Demonstrating Accuracy Equity and Predictive Parity. Northpointe Inc, 7(4), 1-36.
  • Diggelmann, O., & Cleis, M. N. (2014). How the Right to Privacy Became a Human Right. Human Rights Law Review, 14(3), 441.
  • Donnelly, J. (1984). Cultural Relativism and Universal Human Rights. Human Rights Quarterly, 6(4), 414-415.
  • Dressel, J., & Farid, H. (2018). The Accuracy, Fairness, and Limits of Predicting Recidivism. Science Advances, 4(1), 1-5.
  • Dupré, C. (2013). Human Dignity in Europe: a Foundational Constitutional Principle. European Public Law, 19(2), 337.
  • Ebers, M., Hoch, V. R. S., Rosenkranz, F., Ruschemeier, H., & Steinrötter, B. (2021). The European Commission’s Proposal for an Artificial Intelligence Act—A Critical Assessment by Members of the Robotics and AI Law Society (RAILS). Multidisciplinary Scientific Journal, 4(4), 600.
  • Ebert, I. L., & Beduschi, A. (2021). The Relevance of the Smart Mix of Measures for Artificial Intelligence–Assessing the Role of Regulation and the Need for Stronger Policy Coherence. The Geneva Academy of International Humanitarian Law and Human Rights.
  • European Commission. (2018). Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems. European Group on Ethics in Science and New Technologies.
  • Council of Europe, Committee of Ministers. (2010). The Protection of Individuals With Regard to Automatic Processing of Personal Data in the Context of Profiling. Recommendation CM/Rec(2010)13 and explanatory memorandum.
  • Ferguson, A. G. (2017). Rise of Big Data Policing. The New York University Press.
  • Floridi, L. C. (2018). AI4People - An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689-707.
  • FRA. (2018). Preventing Unlawful Profiling Today and in the Future: A Guide. European Union Agency for Fundamental Rights.
  • Fredman, S. (2011). Discrimination Law. Oxford University Press.
  • Gearty, C. (2014). Human Rights: The Necessary Quest for Foundations. In C. D. (Eds.), The Meanings of Rights: The Philosophy and Social Theory of Human Rights. Cambridge University Press.
  • Gilman, M. E. (2020). Five Privacy Principles (from the GDPR) the United States Should Adopt to Advance Economic Justice. Arizona State Law Journal, 52(2), 375-376.
  • Goodman, B., & Flaxman, S. (2017). European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”. AI Magazine, 38(3), 50-57.
  • Hayot, E. (2020). Bridge Essay: The Emergence of Modernity. A Companion to World Literature, 1-6.
  • Heinrichs, B. (2022). Discrimination in the Age of Artificial Intelligence. AI & Soc, 37(1), 143-154.
  • Holsinger, A. M., Lowenkamp, C. T., Latessa, E., Serin, R., Cohen, T. H., Robinson, C. R., VanBenschoten, S.W. (2018). A Rejoinder to Dressel and Farid: New Study Finds Computer Algorithm is More Accurate Than Humans at Predicting Arrest and as Good as a Group of 20 Lay Experts. Federal Probation, 50.
  • House of Lords. (2018). AI in the UK: Ready, Willing and Able? Select Committee on Artificial Intelligence.
  • Hunter, D., Bagaric, M., & Stobbs, N. (2019). A Framework for the Efficient and Ethical Use of Artificial Intelligence in the Criminal Justice System. Fla. St. UL Rev., 47(1), 753.
  • Ishay, M. (2008). The History of Human Rights: From Ancient Times to the Globalization Era. University of California Press.
  • James, G. W. (2013). An Introduction to Statistical Learning. New York: Springer.
  • Kuner, C. (2012). The European Commission’s Proposed Data Protection Regulation: A Copernican Revolution in European Data Protection Law. Bloomberg BNA Privacy and Security Law Report, 6(2012), 1-15.
  • Lafont, C. (2010). Accountability and Global Governance: Challenging the State-Centric Conception of Human Rights. Ethics & Global Politics, 3(3), 198-199.
  • Lafont, C. (2013). Human Rights and the Legitimacy of Global Governance Institutions. Revista Latinoamericana de Filosofía Política, 2(1), 7-8.
  • Leese, M. (2014). The New Profiling: Algorithms, Black Boxes, and the Failure of Anti-discriminatory Safeguards in the European Union. Security Dialogue, 45(5), 500.
  • Lu, S. (2022). Data Privacy, Human Rights, and Algorithmic Opacity. California Law Review, 110.
  • Mann, M., & Matzner, T. (2019). Challenging Algorithmic Profiling: the Limits of Data Protection and Anti-Discrimination in Responding to Emergent Discrimination. Big Data & Society, 6(2), 1-11.
  • Mishra, A. (2019). State-centric Approach to Human Rights: Exploring Human Obligations. Quebec Journal of International Law, 49-66.
  • Mittelstadt, B., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The Ethics of Algorithms: Mapping The Debate. Big Data & Society, 3(2), 4-5.
  • Moeckli, D. (2010). Equality and Non-discrimination. International Human Rights Law.
  • Northpointe. (2019). Practitioner’s Guide to COMPAS Core.
  • Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations. Science, 366(6464), 447-453.
  • OECD. (2019). OECD ArtificiaI Intelligence Principles.
  • OHCHR. (2021). The Right to Privacy in the Digital Age. United Nations.
  • Parsons, C. (2015). Beyond Privacy: Articulating the Broader Harms of Pervasive Mass Surveillance. Media and Communication, 3(3), 6.
  • Article 29 Data Protection Working Party. (2018). Guidelines on Automated Individual Decision-Making and Profiling for the Purposes of Regulation 2016/679.
  • Pasquale, F. (2015). The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press.
  • Polykalas, S. E., & Prezerakos, G. (2019). When the Mobile App is Free, the Product is Your Personal Data. Digital Policy, Regulation and Governance, 1-5.
  • Smuha, N. A., Ahmed-Rengers, E., Harkens, A., Li, W., MacLaren, J., Piselli, R., & Yeung, K. (2021). How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission’s Proposal for an Artificial Intelligence Act. Available at SSRN, 23.
  • Soh, C., & Connolly, D. (2021). New Frontiers of Profit and Risk: The Fourth Industrial Revolution’s Impact on Business and Human Rights. New Political Economy, 1, 170.
  • Tsamados, A. A. (2022). The Ethics of Algorithms: Key Problems and Solutions. AI & SOCIETY, 37(1), 215-230.
  • Tufekci, Z. (2014). Engineering the Public: Big Data, Surveillance and Computational Politics. First Monday, 19(7), 5.
  • Valiant, L. (1984). A Theory of The Learnable. Communications of the ACM, 27(11), 1134-1142.
  • Van Bekkum, M., & Borgesius, F. (2021). Digital Welfare Fraud Detection and the Dutch SyRI Judgment. European Journal of Social Security, 23(4), 327.
  • Wachter, S. (2018). Normative Challenges of Identification in the Internet of Things: Privacy, Profiling, Discrimination, and the GDPR. Computer Law & Security Review, 34(3), 443.
  • Wachter, S. (2020). Affinity Profiling and Discrimination by Association in Online Behavioral Advertising. Berkeley Tech. LJ., 35, 367.
  • Warren, S. D., & Brandeis, L. (1890). The Right to Privacy. Harvard Law Review, 4(5), 216.
  • Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books.
  • European Union. (2000, October 2). Charter of Fundamental Rights of the European Union. Retrieved from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:12012P/TXT
  • European Union. (2016, April 27). General Data Protection Regulation. Retrieved from https://eur-lex.europa.eu/eli/reg/2016/679/oj
  • European Union. (2022, April 19). Retrieved from https://edps.europa.eu/data-protection/data-protection/legislation/history-general-data-protection-regulation_en
  • European Commission. (2021, April 21). Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. Retrieved from https://eur-lex.europa.eu/resource.html?uri=cellar:e0649735-a372-11eb-9585-01aa75ed71a1.0001.02/DOC_1&format=PDF
  • United Nations. (1966, December 16). International Covenant on Civil and Political Rights. Retrieved from https://treaties.un.org/doc/publication/unts/volume%20999/volume-999-i-14668-english.pdf
  • United Nations. (1948, December 10). Universal Declaration of Human Rights. Retrieved from https://www.un.org/en/about-us/universal-declaration-of-human-rights
  • OHCHR. (2011, June 16). The UN Guiding Principles on Business and Human Rights. Retrieved from https://www.ohchr.org/sites/default/files/documents/publications/guidingprinciplesbusinesshr_en.pdf
  • ECtHR, Bărbulescu v. Romania, Application No. 61496/08, Judgment of 05/09/2017.
  • ECtHR, Biao v. Denmark, Application No. 38590/10, Judgment of 24/05/2016.
  • ECtHR, Big Brother Watch and Others v. the United Kingdom, Application Nos. 58170/13, 62322/14, 24960/15, Judgment of 25/05/2021.
  • ECtHR, D.H. and Others v. the Czech Republic, Application No. 57325/00, Judgment of 13/11/2007.
  • District Court of The Hague, NJCM v. the Netherlands (SyRI), Case No. C/09/550982, Judgment of 05/02/2020.

New Challenge in the Protection and Promotion of Human Rights: Algorithmic Profiling


Abstract

Algorithm-based systems often raise ethical concerns; they can also violate the prohibition of discrimination, through bias patterns inherited from humans, and the right to privacy, through monitoring in the form of data surveillance. Understanding the paradigm shift in human rights, whose symptoms are already being felt, will help us resolve the problems caused by these ever-evolving technologies. Taking a holistic approach, this study argues that, while algorithm-based systems complicate the duty of protecting and promoting human rights, the ethical concerns they raise can be addressed and rights violations prevented. Compared with human bias, algorithmic bias is easier to detect and to remove from these systems, which are mathematical at the outset but sociological in their results. For that reason, unqualified pessimism is unwarranted; instead, pluralistic and comprehensive deliberation processes should be pursued in order to enact regulations compatible with the dynamic nature of these systems.
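The abstract's claim that algorithmic bias is easier to detect than human bias rests on the fact that a system's outputs can be measured directly. As a minimal illustration (not part of the article; the function name and all data below are hypothetical), the following sketch computes a demographic-parity gap, one common statistical fairness check comparing positive-outcome rates between two groups:

```python
def demographic_parity_gap(decisions, groups, group_a, group_b):
    """Absolute gap in positive-decision rates between two groups.

    decisions: list of 0/1 outcomes produced by an algorithm.
    groups: list of group labels, aligned index-by-index with decisions.
    """
    def rate(g):
        # Positive-decision rate within group g.
        outcomes = [d for d, gr in zip(decisions, groups) if gr == g]
        return sum(outcomes) / len(outcomes)

    return abs(rate(group_a) - rate(group_b))


# Fabricated example: group A receives favourable decisions at 0.75,
# group B at 0.25, so the parity gap is 0.5.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_gap(decisions, groups, "A", "B"))  # 0.5
```

A check of this kind, run over a model's decision log, makes disparate outcomes quantifiable in a way that diffuse human bias is not, which is the sense in which the article argues algorithmic bias can be "weeded out" more easily.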


Details

Primary Language: Turkish
Subjects: Law
Section: Research Articles
Authors

Nurullah Özgül (ORCID: 0000-0003-3790-2449)

Publication Date: 23 August 2022
Submission Date: 3 May 2022
Published Issue: 2022, Volume 5, Issue 9

Cite

APA Özgül, N. (2022). İnsan Haklarının Korunması ve Geliştirilmesinde Yeni Bir Sorun Alanı: Algoritmik Profilleme. Türkiye İnsan Hakları Ve Eşitlik Kurumu Akademik Dergisi, 5(9), 83-115.