Research Article

Yapay Zeka Çağında İnsan Hakları: Fırsatlar ve Tuzaklar [Human Rights in the Artificial Intelligence Era: Opportunities and Pitfalls]

Year 2026, Volume: 16, Issue: 1, 142–171, 29.03.2026
https://doi.org/10.18074/ckuiibfd.1633000
https://izlik.org/JA74ML99GY

Abstract

Artificial intelligence (AI) has become a major part of daily life and influences decision-making processes, particularly in the fields of law, politics, and public services. Although AI can be a useful tool for protecting human rights, it also carries risks such as violations of privacy, unfair practices, and the reinforcement of societal biases. While some governments use AI to monitor and surveil people, others try to balance security with personal freedoms. This article examines the effects of AI on human rights, focusing in particular on the right to privacy, the right to equal treatment, and freedom of expression. Addressing both the positive and negative aspects of AI, the article offers recommendations for ensuring that AI is used in accordance with ethical rules and with respect for human rights. It is expected that policymakers, by understanding these issues, will create the laws needed to ensure that AI benefits everyone while reducing its harms.

References

  • Altman, A. (2020). Discrimination. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2020 ed.). Stanford University. Retrieved from https://plato.stanford.edu/archives/win2020/entries/discrimination/
  • Amnesty International. (2020, September 29). Netherlands: We Sense Trouble: Automated Discrimination and Mass Surveillance in Predictive Policing in the Netherlands (Index No. EUR 35/2971/2020). Retrieved from https://www.amnesty.org/en/documents/eur35/2971/2020/en/
  • Amnesty International. (2023, December 11). Trapped by Automation: Poverty and Discrimination in Serbia’s Welfare State. Retrieved from https://www.amnesty.org/en/latest/research/2023/12/trapped-by-automation-poverty-and-discrimination-in-serbias-welfare-state/
  • Amnesty International. (2024a, April 30). Use of Entity Resolution in India: Shining a Light on How New Forms of Automation can Deny People Access to Welfare. Retrieved from https://www.amnesty.org/en/latest/research/2024/04/entity-resolution-in-indias-welfare-digitalization/
  • Amnesty International. (2024b, November 12). Denmark: Coded Injustice: Surveillance and Discrimination in Denmark’s Automated Welfare State (Index No. EUR 18/8709/2024). Retrieved from https://www.amnesty.org/en/documents/eur18/8709/2024/en/
  • Amnesty International. (2024c, November 27). Sweden: Authorities Must Discontinue Discriminatory AI Systems Used by Welfare Agency. Retrieved from https://www.amnesty.org/en/latest/news/2024/11/sweden-authorities-must-discontinue-discriminatory-ai-systems-used-by-welfare-agency/
  • Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2016). Machine bias: There’s software used across the country to predict future criminals. And it’s biased against Blacks. ProPublica. Retrieved from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
  • Association of Southeast Asian Nations (ASEAN). (2024). ASEAN Guide on AI Governance and Ethics. Retrieved from https://asean.org/wp-content/uploads/2024/02/ASEAN-Guide-on-AI-Governance-and-Ethics_beautified_201223_v2.pdf
  • Ashraf, C. (2020). Artificial intelligence and the rights to assembly and association. Journal of Cyber Policy, 5(2), 163–179. https://doi.org/10.1080/23738871.2020.1778760
  • Barocas, S., & Selbst, A. D. (2016). Big data’s disparate impact. California Law Review, 104(3), 671–732. http://dx.doi.org/10.15779/Z38BG31
  • Bensaid, A. (n.d.). Workplace and algorithm bias kill Palestine content on Facebook and Twitter. Retrieved January 21, 2025, from TRT website: https://www.trtworld.com/magazine/workplace-and-algorithm-bias-kill-palestine-content-on-facebook-and-twitter-46842
  • Bhuiyan, J. (2021, November 8). LAPD ended predictive policing programs amid public outcry. A new effort shares many of their flaws. The Guardian. Retrieved from https://www.theguardian.com/us-news/2021/nov/07/lapd-predictive-policing-surveillance-reform
  • Bischoff, P. (2025, June 25). Surveillance camera statistics: Which cities have the most CCTV cameras? Comparitech. Retrieved from https://www.comparitech.com/vpn-privacy/the-worlds-most-surveilled-cities/
  • Boden, M. A. (Ed.). (1996). Artificial intelligence. Oxford: Elsevier.
  • Bowman, S. (2024, November 14). The role of artificial intelligence in predicting human rights violations. Open Global Rights. Retrieved from https://www.openglobalrights.org/the-role-of-ai-in-predicting-human-rights-violations/
  • Brandusescu, A., & Sieber, R. E. (2025). Missed opportunities in AI regulation: Lessons from Canada's AI and data act. Data & Policy, 7, e40. https://doi.org/10.1017/dap.2025.17
  • Brkan, M. (2020). EU fundamental rights and democracy implications of data-driven political campaigns. Maastricht Journal of European and Comparative Law, 27(6), 774–790. https://doi.org/10.1177/1023263X20982960
  • Brkan, M., Claes, M., & Rauchegger, C. (2020). European fundamental rights and digitalization. Maastricht Journal of European and Comparative Law, 27(6), 697–704. https://doi.org/10.1177/1023263X20983778
  • Bu, Q. (2021). The global governance on automated facial recognition (AFR): Ethical and legal opportunities and privacy challenges. International Cybersecurity Law Review, 2, 113–145. https://doi.org/10.1365/s43439-021-00022-x
  • Cao, L. (2023). AI and data science for smart emergency, crisis and disaster resilience. International Journal of Data Science and Analytics, 15(3), 231–246. https://doi.org/10.1007/s41060-023-00393-w
  • Chen, Z. (2023). Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities and Social Sciences Communications, 10(1), 1–12. https://doi.org/10.1057/s41599-023-02079-x
  • Chen, L., Chen, P., & Lin, Z. (2020). Artificial intelligence in education: A review. IEEE Access, 8, 75264–75278. https://doi.org/10.1109/ACCESS.2020.2988510
  • Cobbe, J. (2021). Algorithmic censorship by social platforms: Power and resistance. Philosophy & Technology, 34(4), 739–766. https://doi.org/10.1007/s13347-020-00429-0
  • Cossette-Lefebvre, H., & Maclure, J. (2023). AI’s fairness problem: Understanding wrongful discrimination in the context of automated decision-making. AI and Ethics, 3(4), 1255–1269. https://doi.org/10.1007/s43681-022-00233-w
  • Council of Europe. (n.d.-a). Convention 108 and its Protocol for the Protection of Individuals with regard to Automatic Processing of Personal Data. Retrieved January 10, 2025 from https://www.coe.int/en/web/data-protection/convention108-and-protocol
  • Council of Europe. (n.d.-b). The Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law. Retrieved January 10, 2025 from https://www.coe.int/en/web/artificial-intelligence/the-framework-convention-on-artificial-intelligence
  • Dastin, J. (2018, October 11). Insight—Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. Retrieved from https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/
  • De Gregorio, G., & Dunn, P. (2023). Artificial intelligence and freedom of expression. In A. Quintavalla & J. Temperman (Eds.), Artificial intelligence and human rights (pp. 76–90). Oxford: Oxford University Press.
  • Donahoe, E., & Metzger, M. M. (2019). Artificial intelligence and human rights. Journal of Democracy, 30(2), 115–126. https://doi.org/10.1353/jod.2019.0029
  • Drage, E., & Mackereth, K. (2022). Does AI debias recruitment? Race, gender, and AI’s “eradication of difference”. Philosophy & Technology, 35(4), 89. Retrieved from https://link.springer.com/article/10.1007/s13347-022-00543-1
  • Enqvist, L. (2024). Rule-based versus AI-driven benefits allocation: GDPR and AIA legal implications and challenges for automation in public social security administration. Information & Communications Technology Law, 33(2), 222–246. https://doi.org/10.1080/13600834.2024.2349835
  • Ergül, E. (2024). Yapay zeka ve hukuk [Artificial Intelligence and Law]. Ankara: Adalet Yayınevi.
  • European Commission. (n.d.). Recital 29 – Artificial Intelligence Act (EU AI Act). Retrieved October 15, 2025 from https://artificialintelligenceact.eu/recital/29/
  • European Convention for the Protection of Human Rights and Fundamental Freedoms (ETS No. 5), Nov. 4, 1950, Council of Europe. Retrieved from https://www.echr.coe.int/documents/convention_eng.pdf
  • European Crime Prevention Network (EUCPN). (2022). Artificial Intelligence and Predictive Policing: Risks and Challenges. Brussels: EUCPN. Retrieved from https://eucpn.org/sites/default/files/document/files/PP%20%282%29.pdf
  • European Network and Information Security Agency (ENISA). (2019). Pseudonymisation techniques and best practices: Recommendations on shaping technology according to data protection and privacy provisions. Heraklion: ENISA. Retrieved from https://www.enisa.europa.eu/publications/pseudonymisation-techniques-and-best-practices
  • Fair Trials. (2023, December 11). Partial ban on ‘predictive’ policing and crime prediction systems included in final EU AI Act. Retrieved from https://www.fairtrials.org/articles/news/partial-ban-on-predictive-policing-included-in-final-eu-ai-act/
  • Farayola, M. M., Tal, I., Connolly, R., Saber, T., & Bendechache, M. (2023). Ethics and trustworthiness of AI for predicting the risk of recidivism: A systematic literature review. Information, 14(8), 426. https://doi.org/10.3390/info14080426
  • Ferguson, A. G. (2020). The rise of big data policing: Surveillance, race, and the future of law enforcement (First published in paperback). New York: New York University Press.
  • Gates, K. (2011). Our biometric future: Facial recognition technology and the culture of surveillance. New York: New York University Press.
  • Gaumond, E., & Régis, C. (2023, January 27). Assessing impacts of AI on human rights: It’s not solely about privacy and nondiscrimination. Lawfare. Retrieved from https://www.lawfaremedia.org/article/assessing-impacts-of-ai-on-human-rights-it-s-not-solely-about-privacy-and-nondiscrimination
  • Gerards, J. (2019). The fundamental rights challenges of algorithms. Netherlands Quarterly of Human Rights, 37(3), 205–209. https://doi.org/10.1177/0924051919861773
  • Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720943234
  • Gonzales, N. M. (2023). The rights to privacy and data protection and facial recognition technology in the global north. In A. Quintavalla & J. Temperman (Eds.), Artificial intelligence and human rights (pp. 136–149). Oxford: Oxford University Press.
  • Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1). https://doi.org/10.1177/2053951719897945
  • Hamilton, M. (2019). The biased algorithm: Evidence of disparate impact on Hispanics. American Criminal Law Review, 56(4), 1553–1577. Retrieved from https://www.law.georgetown.edu/american-criminal-law-review/in-print/volume-56-number-4-fall-2019/the-biased-algorithm-evidence-of-disparate-impact-on-hispanics/
  • Heaven, W. D. (2020, July 17). Predictive policing algorithms are racist. They need to be dismantled. MIT Technology Review. Retrieved from https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/
  • Illinois Biometric Information Privacy Act, 740 Ill. Comp. Stat. 14 (2008).
  • International Covenant on Civil and Political Rights, Dec. 16, 1966, 999 U.N.T.S. 171. Retrieved from https://www.ohchr.org/en/professionalinterest/pages/ccpr.aspx
  • Johnson, T. L., Johnson, N. N., McCurdy, D., & Olajide, M. S. (2022). Facial recognition systems in policing and racial disparities in arrests. Government Information Quarterly, 39(4), 101753. https://doi.org/10.1016/j.giq.2022.101753
  • Katrak, M., & Chakrabarty, I. (2023). Privacy, political participation, and dissent: Facial recognition technologies and the risk of digital authoritarianism in the global South. In A. Quintavalla & J. Temperman (Eds.), Artificial intelligence and human rights (pp. 150–161). Oxford: Oxford University Press.
  • Khaitan, T. (2017). Indirect discrimination. In K. Lippert-Rasmussen (Ed.), Routledge Handbook of the Ethics of Discrimination (pp. 30–41). London: Routledge. Retrieved from https://ssrn.com/abstract=3097020
  • Krupiy, T., & McLeod Rogers, J. (2022). Mapping artificial intelligence and human intersections: Why we need new perspectives on harm and governance in human rights. In A. O'Donoghue, R. Houghton, & S. Wheatle (Eds.), Research Handbook on Global Governance. Cheltenham, UK: Edward Elgar. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4113725
  • Königs, P. (2022). Government surveillance, privacy, and legitimacy. Philosophy & Technology, 35(1), 8. https://doi.org/10.1007/s13347-022-00503-9
  • La Quadrature du Net. (2024, January 18). Predictive policing in France: Against opacity and discrimination, the need for a ban. Retrieved December 29, 2024, from https://www.laquadrature.net/en/2024/01/18/predictive-policing-in-france-against-opacity-and-discrimination-the-need-for-a-ban/
  • Langenbucher, K. (2020). Responsible AI-based credit scoring – A legal framework. European Business Law Review, 31(4). Retrieved from https://ir.lawnet.fordham.edu/faculty_scholarship/1081
  • Manning, C. (2020). AI definitions: A compilation of artificial intelligence definitions for policymakers and researchers. Stanford Institute for Human-Centered Artificial Intelligence (HAI). Retrieved from https://hai-production.s3.amazonaws.com/files/2020-09/AI-Definitions-HAI.pdf
  • Marcetic, B. (2022, June 17). YouTube’s censorship is a threat to the Left. Jacobin. Retrieved from https://jacobin.com/2022/06/youtube-google-big-tech-censorship-misinformation-left-wing-media
  • Marwala, T., & Mpedi, L. G. (2024). Criminal justice system and AI. In Artificial intelligence and the law (pp. 47–64). Singapore: Palgrave Macmillan. https://doi.org/10.1007/978-981-97-2827-5_3
  • McDaniel, J. L. M., & Pease, K. (2021). Introduction. In J. L. M. McDaniel & K. Pease (Eds.), Predictive policing and artificial intelligence (pp. 1–39). Abingdon, Oxon: Routledge.
  • Meijer, A., & Wessels, M. (2019). Predictive policing: Review of benefits and drawbacks. International Journal of Public Administration, 42(12), 1031–1039. https://doi.org/10.1080/01900692.2019.1575664
  • Milosevic, T., Van Royen, K., & Davis, B. (2022). Artificial intelligence to address cyberbullying, harassment and abuse: New directions in the midst of complexity. International Journal of Bullying Prevention, 4(1), 1–5. https://doi.org/10.1007/s42380-022-00117-x
  • Miller, L. (2020, April 21). LAPD will end controversial program that aimed to predict where crimes would occur. Los Angeles Times. Retrieved from https://www.latimes.com/california/story/2020-04-21/lapd-ends-predictive-policing-program
  • Mozur, P. (2019, April 14). One month, 500,000 face scans: How China is using A.I. to profile a minority. The New York Times. Retrieved from https://www.nytimes.com/2019/04/14/technology/china-surveillance-artificial-intelligence-racial-profiling.html
  • Muggah, R., & Augusto Francisco, P. (2020, February 10). Brazil’s risky bet on tech to fight crime. Americas Quarterly. Retrieved from https://americasquarterly.org/article/brazils-risky-bet-on-tech-to-fight-crime/
  • Narayan, S. (2023). Predictive policing and the construction of the “criminal”: An ethnographic study of Delhi Police. London, UK: Palgrave Macmillan.
  • National Institute of Standards and Technology (NIST). (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). Retrieved from https://www.nist.gov/itl/ai-risk-management-framework
  • O’Donnell, R. M. (2019). Challenging racist predictive policing algorithms under the Equal Protection Clause. New York University Law Review, 94(3), 544–580. Retrieved from https://nyulawreview.org/wp-content/uploads/2019/06/NYULawReview-94-3-ODonnell.pdf
  • Office of the United Nations High Commissioner for Human Rights (OHCHR). (n.d.-a). B-Tech Project: Applying the UN Guiding Principles on Business and Human Rights to Digital Technologies. Retrieved January 22, 2025, from https://www.ohchr.org/en/b-tech
  • Office of the United Nations High Commissioner for Human Rights (OHCHR). (n.d.-b). Human Rights Indicators Database. Retrieved January 22, 2025, from https://indicators.ohchr.org/
  • Office of the High Commissioner for Human Rights (OHCHR), & International Bar Association. (2003). The right to equality and non-discrimination in the administration of justice. In Human rights in the administration of justice: A manual on human rights for judges, prosecutors and lawyers (pp. 631–679). New York and Geneva: United Nations. Retrieved from https://www.ohchr.org/sites/default/files/Documents/Publications/training9chapter13en.pdf
  • Ohlheiser, A. W. (2021, July 13). Welcome to TikTok’s endless cycle of censorship and mistakes. MIT Technology Review. Retrieved from https://www.technologyreview.com/2021/07/13/1028401/tiktok-censorship-mistakes-glitches-apologies-endless-cycle/
  • Okolo, C. T., & Tano, M. (2024). Moving toward truly responsible AI development in the global AI market. Brookings Institution. Retrieved from https://www.brookings.edu/articles/moving-toward-truly-responsible-ai-development-in-the-global-ai-market/
  • Oosterloo, S., & van Schie, G. (2018). The politics and biases of the “Crime Anticipation System” of the Dutch Police. CEUR Workshop Proceedings of the International Workshop on Bias in Information, Algorithms, and Systems, 13, 30–41. Retrieved from https://ceur-ws.org/Vol-2103/paper_6.pdf
  • Organisation for Economic Co-operation and Development (OECD). (2024). Assessing potential future artificial intelligence risks, benefits and policy imperatives (OECD Artificial Intelligence Papers No. 27). Paris: OECD Publishing. https://doi.org/10.1787/3f4e3dfb-en
  • Ortiz Freuler, J., & Iglesias, C. (2018). Algorithms and Artificial Intelligence in Latin America: A Study of Implementation by Governments in Argentina and Uruguay. Web Foundation. Retrieved from https://webfoundation.org/docs/2018/09/WF_AI-inLA_Report_Screen_AW.pdf
  • Qandeel, M. (2024). Facial recognition technology: Regulations, rights and the rule of law. Frontiers in Big Data, 7(1354659). https://doi.org/10.3389/fdata.2024.1354659
  • Protocol amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (CETS No. 223), Oct. 10, 2018, Council of Europe. Retrieved from https://www.coe.int/en/web/data-protection/convention108-and-protocol
  • Raab, C. D. (2023). Beyond data: Human rights, ethical and social impact assessment in AI, by Alessandro Mantelero, The Hague, The Netherlands: T.M.C. Asser Press, 2022, 200 pp., ISBN 978-94-6265-530-0 [Book review]. International Review of Law, Computers & Technology, 38(1), 111–114. https://doi.org/10.1080/13600869.2023.2213104
  • Raji, I., & Sholademi, D. B. (2024). Predictive policing: The role of AI in crime prevention. International Journal of Computer Applications Technology and Research, 13(10), 66–78. https://doi.org/10.7753/IJCATR1310.1006
  • Raso, F., Hilligoss, H., Krishnamurthy, V., Bavitz, C., & Kim, L. Y. (2018). Artificial intelligence & human rights: Opportunities & risks. Berkman Klein Center for Internet & Society at Harvard University. Retrieved from https://www.ssrn.com/abstract=3259344
  • Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data (General Data Protection Regulation), 2016 O.J. (L 119) 1. Retrieved from https://eur-lex.europa.eu/eli/reg/2016/679/oj
  • Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act), 2024 O.J. (L 168) 1. Retrieved from https://eur-lex.europa.eu/eli/reg/2024/1689/oj
  • Richardson, R., Schultz, J. M., & Crawford, K. (2019). Dirty data, bad predictions: How civil rights violations impact police data, predictive policing systems, and justice. New York University Law Review, 94(15), 192–233. Retrieved from https://nyulawreview.org/wp-content/uploads/2019/04/NYULawReview-94-Richardson_etal-FIN.pdf
  • Risse, M. (2018). Human rights and artificial intelligence: An urgently needed agenda. Revista Publicum, 4(1), 1–16. https://doi.org/10.12957/publicum.2018.35098
  • Russell, S., Perset, K., & Grobelnik, M. (2023, November 29). Updates to the OECD’s definition of an AI system explained. OECD.AI Policy Observatory. Retrieved from https://oecd.ai/en/wonk/ai-system-definition-update
  • Sander, B. (2020). Freedom of expression in the age of online platforms: The promise and pitfalls of a human rights-based approach to content moderation. Fordham International Law Journal, 43(4), 939–1006. Retrieved from https://ir.lawnet.fordham.edu/ilj/vol43/iss4/3
  • Saslow, K., & Lorenz, P. (2019, September 19). Artificial intelligence needs human rights: How the focus on ethical AI fails to address privacy, discrimination and other concerns. Stiftung Neue Verantwortung. Retrieved from https://www.ssrn.com/abstract=3589473
  • Seidensticker, K., Bode, F., & Stoffel, F. (2018). Predictive policing in Germany. Konstanzer Online-Publikationssystem (KOPS). Retrieved from http://nbn-resolving.de/urn:nbn:de:bsz:352-2-14sbvox1ik0z06
  • Sheikh, H., Prins, C., & Schrijvers, E. (2023). Artificial intelligence: definition and background. In Mission AI: The new system technology (pp. 15–41). Cham: Springer International Publishing.
  • Singil, N. (2022). Yapay zekâ ve insan hakları [Artificial Intelligence and Human Rights]. Public and Private International Law Bulletin, 42(1), 121–158. https://doi.org/10.26650/ppil.2022.42.1.970856
  • Šmuclerová, M., Král, L., & Drchal, J. (2023). AI life cycle and human rights risks and remedies. In A. Quintavalla & J. Temperman (Eds.), Artificial Intelligence and Human Rights (pp. 16–41). Oxford: Oxford University Press.
  • Sprick, D. (2020). Predictive policing in China: An authoritarian dream of public security. NAVEIÑ REET: Nordic Journal of Law and Social Research, 1(9), 299–324. https://doi.org/10.7146/nnjlsr.v1i9.122164
  • Taylor, R. D. (2024). Saving global human rights: A “Global South + AI” strategy. The Information Society, 40(1), 1–16. https://doi.org/10.1080/01972243.2024.2442916
  • United Nations AI Advisory Body. (2024). Governing AI for humanity: Final report. New York, NY: United Nations. Retrieved from https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf
  • United Nations Human Rights Council. (2011). Guiding principles on business and human rights: Implementing the United Nations “Protect, Respect and Remedy” framework (Resolution A/HRC/RES/17/4). Retrieved from https://www.ohchr.org/sites/default/files/documents/publications/guidingprincip
  • van Bekkum, M., & Borgesius, F. Z. (2021). Digital welfare fraud detection and the Dutch SyRI judgment. European Journal of Social Security, 23(4), 323–340. https://doi.org/10.1177/13882627211031257
  • van Dijck, G. (2022). Predicting recidivism risk meets AI Act. European Journal on Criminal Policy and Research, 28(3), 407–423. https://doi.org/10.1007/s10610-022-09516-8
  • Varona, D., Lizama-Mue, Y., & Suárez, J. L. (2021). Machine learning’s limitations in avoiding automation of bias. AI & SOCIETY, 36, 197–203. https://doi.org/10.1007/s00146-020-00996-y
  • Veale, M., Binns, R., & Edwards, L. (2018). Algorithms that remember: Model inversion attacks and data protection law. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180083. https://doi.org/10.1098/rsta.2018.0083
  • Wen, J., & Wang, W. (2023). The future of ChatGPT in academic research and publishing: A commentary for clinical and translational medicine. Clinical and Translational Medicine, 13(3), e1207. Retrieved from https://pubmed.ncbi.nlm.nih.gov/36941774/
  • Zornetta, A., & Cofone, I. (2023). Artificial intelligence and the right to privacy. In A. Quintavalla & J. Temperman (Eds.), Artificial Intelligence and Human Rights (pp. 121–135). Oxford: Oxford University Press.
  • Zuboff, S. (2019). The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: Public Affairs.

Human Rights in the Artificial Intelligence Era: Opportunities and Pitfalls

Year 2026, Volume: 16, Issue: 1, 142–171, 29.03.2026
https://doi.org/10.18074/ckuiibfd.1633000
https://izlik.org/JA74ML99GY

Abstract

Artificial intelligence (AI) technologies are increasingly integrated into our daily lives and influence how decisions are made, especially in the fields of law, politics, and public services. AI has the potential to serve as a beneficial instrument for protecting human rights. However, it also carries risks such as violating privacy, causing unfair treatment, and deepening inequalities that already exist within societies. Some governments employ AI systems for surveillance purposes, while others try to balance security with personal freedoms. This article examines how AI affects human rights by focusing on the rights to privacy and equality and the freedom of expression. It discusses both the opportunities and pitfalls of AI for human rights and proposes ways to ensure that AI follows ethical rules and respects human rights. By understanding these issues, policymakers can create the laws needed to ensure that AI benefits everyone while reducing harm.

Details

Primary Language English
Subjects Policy and Administration (Other)
Section Research Article
Authors

Güliz Dinç 0000-0003-0350-0556

Özge Öz Döm 0000-0001-6569-2261

Submission Date 4 February 2025
Acceptance Date 21 October 2025
Publication Date 29 March 2026
DOI https://doi.org/10.18074/ckuiibfd.1633000
IZ https://izlik.org/JA74ML99GY
Published in Issue Year 2026, Volume: 16, Issue: 1

Cite

APA Dinç, G., & Öz Döm, Ö. (2026). Human Rights in the Artificial Intelligence Era: Opportunities and Pitfalls. Çankırı Karatekin Üniversitesi İktisadi ve İdari Bilimler Fakültesi Dergisi, 16(1), 142-171. https://doi.org/10.18074/ckuiibfd.1633000
AMA 1.Dinç G, Öz Döm Ö. Human Rights in the Artificial Intelligence Era: Opportunities and Pitfalls. Çankırı Karatekin Üniversitesi İktisadi ve İdari Bilimler Fakültesi Dergisi. 2026;16(1):142-171. doi:10.18074/ckuiibfd.1633000
Chicago Dinç, Güliz, and Özge Öz Döm. 2026. “Human Rights in the Artificial Intelligence Era: Opportunities and Pitfalls”. Çankırı Karatekin Üniversitesi İktisadi ve İdari Bilimler Fakültesi Dergisi 16 (1): 142-71. https://doi.org/10.18074/ckuiibfd.1633000.
EndNote Dinç G, Öz Döm Ö (01 March 2026) Human Rights in the Artificial Intelligence Era: Opportunities and Pitfalls. Çankırı Karatekin Üniversitesi İktisadi ve İdari Bilimler Fakültesi Dergisi 16 1 142–171.
IEEE [1] G. Dinç and Ö. Öz Döm, “Human Rights in the Artificial Intelligence Era: Opportunities and Pitfalls”, Çankırı Karatekin Üniversitesi İktisadi ve İdari Bilimler Fakültesi Dergisi, vol. 16, no. 1, pp. 142–171, Mar. 2026, doi: 10.18074/ckuiibfd.1633000.
ISNAD Dinç, Güliz - Öz Döm, Özge. “Human Rights in the Artificial Intelligence Era: Opportunities and Pitfalls”. Çankırı Karatekin Üniversitesi İktisadi ve İdari Bilimler Fakültesi Dergisi 16/1 (01 March 2026): 142-171. https://doi.org/10.18074/ckuiibfd.1633000.
JAMA 1.Dinç G, Öz Döm Ö. Human Rights in the Artificial Intelligence Era: Opportunities and Pitfalls. Çankırı Karatekin Üniversitesi İktisadi ve İdari Bilimler Fakültesi Dergisi. 2026;16:142–171.
MLA Dinç, Güliz, and Özge Öz Döm. “Human Rights in the Artificial Intelligence Era: Opportunities and Pitfalls”. Çankırı Karatekin Üniversitesi İktisadi ve İdari Bilimler Fakültesi Dergisi, vol. 16, no. 1, March 2026, pp. 142-71, doi:10.18074/ckuiibfd.1633000.
Vancouver 1.Güliz Dinç, Özge Öz Döm. Human Rights in the Artificial Intelligence Era: Opportunities and Pitfalls. Çankırı Karatekin Üniversitesi İktisadi ve İdari Bilimler Fakültesi Dergisi. 01 March 2026;16(1):142-71. doi:10.18074/ckuiibfd.1633000