Gender Biases in Artificial Intelligence Models: A Quantitative Study on ChatGPT and Copilot
Year 2025, Issue 49, 310-331, 31.08.2025
Şule Yenigün Altın, Hacer Taşdelen
Abstract
This study aims to reveal whether the generative artificial intelligence (AI) tools now used in fields ranging from health to employment exhibit gender bias when producing information. Using purposive sampling, the sample was set as ChatGPT and Copilot, AI tools that also stand out for their image-generation features. These tools were prompted to generate images of nineteen professions and twenty positive personality traits, and the gender most strongly associated with each generated profession and trait was analyzed. The data were examined with the content analysis technique, and the distribution of the images by gender was assessed. The findings show that the male gender was associated with problem-solving, technically skilled, and high-status professions, while the female gender was identified with a small number of predominantly artistic and aesthetic professions. Similarly, among the positive personality traits, many were represented by the male gender. In conclusion, the AI tools in the sample were found to operate with a male-biased approach and to contribute, through the outputs they produce, to the reproduction of male-biased assumptions.
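The counting step of such a content analysis can be sketched as follows. This is a minimal illustration only: the coded records, profession names, and gender labels are hypothetical stand-ins for human-coded AI outputs, not the study's actual data.

```python
from collections import Counter

# Hypothetical coded data: each record pairs a prompted profession with the
# gender a human coder assigned to the generated image. Illustrative only.
coded_images = [
    ("engineer", "male"), ("engineer", "male"), ("engineer", "female"),
    ("nurse", "female"), ("nurse", "female"), ("nurse", "male"),
    ("dancer", "female"), ("pilot", "male"),
]

def gender_distribution(records):
    """Tally how often each profession is depicted as each gender."""
    tally = {}
    for profession, gender in records:
        tally.setdefault(profession, Counter())[gender] += 1
    return tally

dist = gender_distribution(coded_images)
# e.g. dist["engineer"] == Counter({"male": 2, "female": 1})
```

From such a tally, the distribution of images by gender can be compared across professions (and, analogously, across personality traits).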
References
- Abid, A., Farooqi, M., & Zou, J. (2021). Persistent anti-Muslim bias in large language models. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 298-306. https://doi.org/10.1145/3461702.3462624
- Angwin, J., Larson, J., Mattu, S., & Kirchner, L. (2022). Machine bias. In Ethics of data and analytics. Auerbach Publications.
- Aziz, A. (2015). Sosyal bilimlerde araştırma yöntemleri ve teknikleri [Research methods and techniques in the social sciences]. Nobel Yayıncılık.
- Balestri, R. (2024). Examining multimodal gender and content bias in ChatGPT-4o (arXiv:2411.19140). arXiv. https://doi.org/10.48550/arXiv.2411.19140
- Bliuc, A.-M., Faulkner, N., Jakubowicz, A., & McGarty, C. (2018). Online networks of racial hate: A systematic review of 10 years of research on cyber-racism. Computers in Human Behavior, 87, 75-86. https://doi.org/10.1016/j.chb.2018.05.026
- Bowman, S. R. (2023). Eight things to know about large language models (arXiv:2304.00612). arXiv. https://doi.org/10.48550/arXiv.2304.00612
- Bryman, A., & Bell, E. (2011). Business research methods (3rd ed.). Oxford University Press.
- Carter, C., Kurkinen, V., König, L., Ruohonen, A., & van Vloten, E. C. (2024). Exploring gender bias in ChatGPT [Graduate]. University of Helsinki.
- Chan, A., Salganik, R., Markelius, A., Pang, C., Rajkumar, N., Krasheninnikov, D., Langosco, L., He, Z., Duan, Y., Carroll, M., Lin, M., Mayhew, A., Collins, K., Molamohammadi, M., Burden, J., Zhao, W., Rismani, S., Voudouris, K., Bhatt, U., & Maharaj, T. (2023). Harms from increasingly agentic algorithmic systems. 2023 ACM Conference on Fairness, Accountability, and Transparency, 651-666. https://doi.org/10.1145/3593013.3594033
- Çifci, B. S., & Basfirinci, C. (2020). Yapay zekâ konusunun toplumsal cinsiyet kapsamında incelenmesi: Mesleklere yönelik bir araştırma [An examination of artificial intelligence in the context of gender: A study on professions]. Çukurova Üniversitesi Sosyal Bilimler Enstitüsü Dergisi, 29(4), Article 4. https://doi.org/10.35379/cusosbil.819510
- Cirillo, D., Catuara-Solarz, S., Morey, C., Guney, E., Subirats, L., Mellino, S., Gigante, A., Valencia, A., Rementeria, M. J., Chadha, A. S., & Mavridis, N. (2020). Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare. npj Digital Medicine, 3(1), 1-11. https://doi.org/10.1038/s41746-020-0288-5
- Crawford, J., Cowling, M., & Allen, K.-A. (2023). Leadership is needed for ethical ChatGPT: Character, assessment, and learning using artificial intelligence (AI). Journal of University Teaching and Learning Practice, 20(3), Article 3. https://doi.org/10.53761/1.20.3.02
- Crawford, K., & Paglen, T. (2021). Excavating AI: The politics of images in machine learning training sets. AI & SOCIETY, 36(4), 1105-1116. https://doi.org/10.1007/s00146-021-01162-8
- CYFIRMA. (2023, January 21). ChatGPT AI in security testing: Opportunities and challenges. CYFIRMA. https://www.cyfirma.com/outofband/chatgpt-ai-in-security-testing-opportunities-and-challenges/
- Du, H., Teng, S., Chen, H., Ma, J., Wang, X., Gou, C., Li, B., Ma, S., Miao, Q., Na, X., Ye, P., Zhang, H., Luo, G., & Wang, F.-Y. (2023). Chat with ChatGPT on intelligent vehicles: An IEEE TIV perspective. IEEE Transactions on Intelligent Vehicles, 8(3), 2020-2026. https://doi.org/10.1109/TIV.2023.3253281
- Favaretto, M., De Clercq, E., & Elger, B. S. (2019). Big data and discrimination: Perils, promises and solutions. A systematic review. Journal of Big Data, 6(1), 12. https://doi.org/10.1186/s40537-019-0177-4
- Ferrara, E. (2024). Fairness and bias in artificial intelligence: A brief survey of sources, impacts and mitigation strategies. Sci, 6(1), Article 1. https://doi.org/10.3390/sci6010003
- Forbes. (2024). Billionaires list: The richest people in the world ranked. Forbes. https://www.forbes.com/billionaires/
- Friedman, B., & Nissenbaum, H. (1996). Bias in computer systems. ACM Transactions on Information Systems, 14(3), 330-347. https://doi.org/10.1145/230538.230561
- García-Ull, F.-J., & Melero-Lázaro, M. (2023). Gender stereotypes in AI-generated images. Profesional de la Información, 32(5), Article 5. https://doi.org/10.3145/epi.2023.sep.05
- Ghosh, S., & Caliskan, A. (2023). ChatGPT perpetuates gender bias in machine translation and ignores non-gendered pronouns: Findings across Bengali and five other low-resource languages. Proceedings of the 2023 ACM Conference on International Computing Education Research V.1, 397-415. https://doi.org/10.1145/3568813.3600120
- Glickman, M., & Sharot, T. (2024). How human–AI feedback loops alter human perceptual, emotional and social judgements. Nature Human Behaviour, 1-15. https://doi.org/10.1038/s41562-024-02077-2
- Gross, N. (2023). What ChatGPT tells us about gender: A cautionary tale about performativity and gender biases in AI. Social Sciences, 12(8), Article 8. https://doi.org/10.3390/socsci12080435
- Guijarro, S. T. (2023). Sesgos de género de la asistencia digital en español [Gender biases of digital assistance in Spanish]. Gender on Digital. Journal of Digital Feminism, 1, 35-58. https://doi.org/10.35869/god.v1i.5061
- Guynn, J. (2019, April 16). AI, facial recognition too white, too male: More minorities needed. USA Today. https://www.usatoday.com/story/tech/2019/04/17/ai-too-white-male-more-women-minorities-needed-facial-recognition/3451932002/
- Güngör, N. (2018). İletişim: Kuramlar ve yaklaşımlar [Communication: Theories and approaches] (4th ed.). Siyasal Kitabevi.
- Hagerty, A., & Rubinov, I. (2019). Global AI ethics: A review of the social impacts and ethical implications of artificial intelligence (arXiv:1907.07892). arXiv. https://doi.org/10.48550/arXiv.1907.07892
- Hall, P., & Ellis, D. (2023). A systematic review of socio-technical gender bias in AI algorithms. Online Information Review, 47(7), 1264-1279. https://doi.org/10.1108/OIR-08-2021-0452
- Johnson, C. Y., Boodman, S. G., Nirappil, F., Diamond, D., Sun, L. H., Johnson, M., & Bisset, V. (2019, October 24). Racial bias in a medical algorithm favors white patients over sicker black patients. Washington Post. https://www.washingtonpost.com/health/2019/10/24/racial-bias-medical-algorithm-favors-white-patients-over-sicker-black-patients/
- Kaplan, D. M., Palitsky, R., Arconada Alvarez, S. J., Pozzo, N. S., Greenleaf, M. N., Atkinson, C. A., & Lam, W. A. (2024). What’s in a name? Experimental evidence of gender bias in recommendation letters generated by ChatGPT. Journal of Medical Internet Research, 26, e51837. https://doi.org/10.2196/51837
- King, M. R., & ChatGPT. (2023). A conversation on artificial intelligence, chatbots, and plagiarism in higher education. Cellular and Molecular Bioengineering, 16(1), 1-2. https://doi.org/10.1007/s12195-022-00754-8
- Leavy, S., Meaney, G., Wade, K., & Greene, D. (2020). Mitigating gender bias in machine learning data sets. In L. Boratto, S. Faralli, M. Marras, & G. Stilo (Eds.), Bias and social aspects in search and recommendation (Vol. 1245, pp. 12-26). Springer International Publishing. https://doi.org/10.1007/978-3-030-52485-2_2
- Liang, P. P., Wu, C., Morency, L.-P., & Salakhutdinov, R. (2021). Towards understanding and mitigating social biases in language models. Proceedings of the 38th International Conference on Machine Learning, 6565-6576. https://proceedings.mlr.press/v139/liang21a.html
- Liebrenz, M., Schleifer, R., Buadze, A., Bhugra, D., & Smith, A. (2023). Generating scholarly content with ChatGPT: Ethical challenges for medical publishing. The Lancet Digital Health, 5(3), e105-e106. https://doi.org/10.1016/S2589-7500(23)00019-5
- Lippens, L. (2024). Computer says ‘no’: Exploring systemic bias in ChatGPT using an audit approach. Computers in Human Behavior: Artificial Humans, 2(1), 100054. https://doi.org/10.1016/j.chbah.2024.100054
- Liu, Z. (2021). Sociological perspectives on artificial intelligence: A typological reading. Sociology Compass, 15(3), e12851. https://doi.org/10.1111/soc4.12851
- Lynch, S. (2017, March 10). Andrew Ng: Why AI is the new electricity. Stanford Graduate School of Business. https://www.gsb.stanford.edu/insights/andrew-ng-why-ai-new-electricity
- Mbakwe, A. B., Lourentzou, I., Celi, L. A., Mechanic, O. J., & Dagan, A. (2023). ChatGPT passing USMLE shines a spotlight on the flaws of medical education. PLOS Digital Health, 2(2), e0000205. https://doi.org/10.1371/journal.pdig.0000205
- McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth Summer Research Project on Artificial Intelligence, August 31, 1955. AI Magazine, 27(4), Article 4. https://doi.org/10.1609/aimag.v27i4.1904
- McKenna, M. (2019, October 5). Three notable examples of AI bias. AI Business. https://aibusiness.com/responsible-ai/three-notable-examples-of-ai-bias
- Menegatti, M., & Rubini, M. (2017). Gender bias and sexism in language. In Oxford Research Encyclopedia of Communication. https://oxfordre.com/communication/display/10.1093/acrefore/9780190228613.001.0001/acrefore-9780190228613-e-470
- Muchmore, M. (2024, June 18). What is Copilot? Microsoft’s AI assistant explained. PCMag. https://www.pcmag.com/explainers/what-is-microsoft-copilot
- Naumova, E. N. (2023). A mistake-find exercise: A teacher’s tool to engage with information innovations, ChatGPT, and their analogs. Journal of Public Health Policy. https://doi.org/10.1057/s41271-023-00400-1
- Nikolic, K., & Jovicic, J. (2023, April 3). Reproducing inequality: How AI image generators show biases against women in STEM. UNDP. https://www.undp.org/serbia/blog/reproducing-inequality-how-ai-image-generators-show-biases-against-women-stem
- Olsen, K. (1999). Daily life in 18th-century England. The Greenwood Press.
- Orlikowski, W. J. (1991). The duality of technology: Rethinking the concept of technology in organizations. Massachusetts Institute of Technology.
- Parsheera, S. (2018). A gendered perspective on artificial intelligence. Machine Learning for a 5G Future, 1-7. https://doi.org/10.23919/ITU-WT.2018.8597618
- Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press. https://www.jstor.org/stable/j.ctt13x0hch
- Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). SAGE Publications Ltd. https://us.sagepub.com/en-us/nam/qualitative-research-evaluation-methods/book232962
- Prates, M. O. R., Avelar, P. H., & Lamb, L. C. (2020). Assessing gender bias in machine translation: A case study with Google Translate. Neural Computing and Applications, 32(10), 6363-6381. https://doi.org/10.1007/s00521-019-04144-6
- Ray, P. P. (2023). ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems, 3, 121-154. https://doi.org/10.1016/j.iotcps.2023.04.003
- Remitly. (2022). Dream jobs around the world study. Remitly. https://www.remitly.com/gb/en/landing/dream-jobs-around-the-world
- Rozado, D. (2023). The political biases of ChatGPT. Social Sciences, 12(3), Article 3. https://doi.org/10.3390/socsci12030148
- Russell, S. J., & Norvig, P. (2010). Artificial intelligence: A modern approach (3rd ed.). Prentice Hall.
- Sharma, M. (2024, December 27). What is AI bias? Almost everything you should know about bias in AI results. TechRadar. https://www.techradar.com/computing/artificial-intelligence/what-is-ai-bias-almost-everything-you-should-know-about-bias-in-ai-results
- Shrestha, S., & Das, S. (2022). Exploring gender biases in ML and AI academic research through systematic literature review. Frontiers in Artificial Intelligence, 5, 976838. https://doi.org/10.3389/frai.2022.976838
- Silberg, J., & Manyika, J. (2019). Notes from the AI frontier: Tackling bias in AI (and in humans). McKinsey Global Institute, 18.
- UE. (2024, September 25). DALL-E and Leonardo AI: When artificial intelligence generates outdated gender roles. UE. https://www.ue-germany.com/news-centre/press/when-artificial-intelligence-generates-outdated-gender-roles
- Urchs, S., Thurner, V., Aßenmacher, M., Heumann, C., & Thiemichen, S. (2024). How prevalent is gender bias in ChatGPT? Exploring German and English ChatGPT responses (arXiv:2310.03031). arXiv. https://doi.org/10.48550/arXiv.2310.03031
- Wakefield, J. (2016, March 24). Microsoft chatbot is taught to swear on Twitter. BBC News. https://www.bbc.com/news/technology-35890188
- WEF. (2024, June 11). Global gender gap report 2024. World Economic Forum. https://www.weforum.org/publications/global-gender-gap-report-2024/
- Wellner, G. P. (2020). When AI is gender-biased. Humana Mente, 13(37). https://philpapers.org/rec/WELWAI
- West, S. M., Whittaker, M., & Crawford, K. (2019). Discriminating systems: Gender, race, and power in AI. AI Now Institute. https://ainowinstitute.org/publication/discriminating-systems-gender-race-and-power-in-ai-2
- Winner, L. (1989). The whale and the reactor: A search for limits in an age of high technology. University of Chicago Press.
- Wolf, M. J., Miller, K. W., & Grodzinsky, F. S. (2017). Why we should have seen that coming: Comments on Microsoft’s Tay “experiment,” and wider implications. The ORBIT Journal, 1(2), 1-12. https://doi.org/10.29297/orbit.v1i2.49
- Wu, G. (2022, December 22). 8 big problems with OpenAI’s ChatGPT. MUO. https://www.makeuseof.com/openai-chatgpt-biggest-probelms/
- Xavier, B. (2024). Biases within AI: Challenging the illusion of neutrality. AI & SOCIETY. https://doi.org/10.1007/s00146-024-01985-1
- Zajko, M. (2022). Artificial intelligence, algorithms, and social inequality: Sociological contributions to contemporary debates. Sociology Compass, 16(3), e12962. https://doi.org/10.1111/soc4.12962
- Zhou, K. Z., & Sanfilippo, M. R. (2023). Public perceptions of gender bias in large language models: Cases of ChatGPT and Ernie (arXiv:2309.09120). arXiv. https://doi.org/10.48550/arXiv.2309.09120
- Zhuo, T. Y., Huang, Y., Chen, C., & Xing, Z. (2023). Red teaming ChatGPT via jailbreaking: Bias, robustness, reliability and toxicity (arXiv:2301.12867). arXiv. https://doi.org/10.48550/arXiv.2301.12867
Ethics Statement
The study does not require an ethics committee report.