<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Publishing DTD v1.4 20241031//EN"
        "https://jats.nlm.nih.gov/publishing/1.4/JATS-journalpublishing1-4.dtd">
<article article-type="research-article" dtd-version="1.4">
            <front>

                <journal-meta>
                                                                <journal-id>kritik</journal-id>
            <journal-title-group>
                                                                                    <journal-title>Kritik İletişim Çalışmaları Dergisi</journal-title>
            </journal-title-group>
                                        <issn pub-type="epub">2667-6850</issn>
                                                                                            <publisher>
                    <publisher-name>Nuri Paşa ÖZER</publisher-name>
                </publisher>
                    </journal-meta>
                <article-meta>
                                        <article-id pub-id-type="doi">10.53281/kritik.1798961</article-id>
                                                                <article-categories>
                                            <subj-group  xml:lang="en">
                                                            <subject>Communication and Media Studies (Other)</subject>
                                                            <subject>Cultural Studies (Other)</subject>
                                                    </subj-group>
                                            <subj-group  xml:lang="tr">
                                                            <subject>İletişim ve Medya Çalışmaları (Diğer)</subject>
                                                            <subject>Kültürel Çalışmalar (Diğer)</subject>
                                                    </subj-group>
                                    </article-categories>
                                                                                                                                                        <title-group>
                                                                                                                        <article-title>Yapay Zekâ Üretiminde Kültürel Temsiller ve Algoritmik Yanlılık: DALL-E Örneğinde Göstergebilimsel Analiz</article-title>
                                                                                                                                                                                                <trans-title-group xml:lang="en">
                                    <trans-title>Cultural Representations and Algorithmic Bias in AI Generation: A Semiotic Analysis of the Case of DALL-E</trans-title>
                                </trans-title-group>
                                                                                                    </title-group>
            
                                                    <contrib-group content-type="authors">
                                                                        <contrib contrib-type="author">
                                                                    <contrib-id contrib-id-type="orcid">https://orcid.org/0000-0001-9743-2878</contrib-id>
                                                                <name>
                                    <surname>Özer</surname>
                                    <given-names>Nuri Paşa</given-names>
                                </name>
                                                                    <aff>NECMETTİN ERBAKAN ÜNİVERSİTESİ</aff>
                                                            </contrib>
                                                                                </contrib-group>
                        
                                        <pub-date pub-type="pub" iso-8601-date="20251229">
                    <day>29</day>
                    <month>12</month>
                    <year>2025</year>
                </pub-date>
                                        <volume>7</volume>
                                        <issue>2</issue>
                                        <fpage>254</fpage>
                                        <lpage>281</lpage>
                        
                        <history>
                                    <date date-type="received" iso-8601-date="20251007">
                        <day>07</day>
                        <month>10</month>
                        <year>2025</year>
                    </date>
                                                    <date date-type="accepted" iso-8601-date="20251120">
                        <day>20</day>
                        <month>11</month>
                        <year>2025</year>
                    </date>
                            </history>
                                        <permissions>
                    <copyright-statement>Copyright © 2018, Kritik İletişim Çalışmaları Dergisi</copyright-statement>
                    <copyright-year>2018</copyright-year>
                    <copyright-holder>Kritik İletişim Çalışmaları Dergisi</copyright-holder>
                </permissions>
            
                                                                                                <abstract><p>Bu çalışma, yapay zekâ tabanlı görsel üretim araçlarının kültürel temsilleri nasıl kurguladığını ve bu süreçte hangi algoritmik yanlılıkların yeniden üretildiğini incelemektedir. Özellikle DALL-E aracılığıyla üretilen görseller, Roland Barthes’ın göstergebilimsel analiz modeli (düzanlam, yananlam, mit) çerçevesinde değerlendirilmiştir. Araştırma kapsamında on farklı senaryo (aile yemeği, çocuk oyunu, düğün, festival, kafe, ofis toplantısı, okul sınıfı, pazar yeri, spor etkinliği, ulusal kutlama) için dört ayrı kültürel bağlamda (nötr, Batı, İslam, Uzakdoğu) toplam kırk görsel üretilmiştir. Bulgular, “nötr” olarak tanımlanan görsellerin çoğunlukla Batı kültürünü evrensel bir norm olarak sunduğunu; İslam kültürünün dinî ögelerle, Uzakdoğu kültürünün ise egzotik ve folklorik motiflerle temsil edildiğini göstermektedir. Bu durum, yapay zekâ modellerinin eğitim verilerindeki dengesizlikler nedeniyle Batı merkezli normları yeniden ürettiğini, diğer kültürleri ise indirgemeci ve stereotipleştirici biçimde sunduğunu ortaya koymaktadır. Çalışma, algoritmik yanlılığın yalnızca teknik bir sorun değil, aynı zamanda toplumsal ve kültürel eşitsizliklerin yeniden üretimine aracılık eden ideolojik bir süreç olduğunu vurgulamaktadır.</p></abstract>
                                                                                                                                    <trans-abstract xml:lang="en">
                            <p>This study investigates how AI-based image generation tools construct cultural representations and reproduce algorithmic biases within these processes. Visuals produced by DALL-E are analyzed through Roland Barthes’ semiotic framework of denotation, connotation, and myth. The research design includes ten scenarios (family dinner, children’s play, wedding, festival, café, office meeting, classroom, marketplace, sports event, and national celebration), each generated in four cultural contexts (neutral, Western, Islamic, and East Asian), resulting in a total of forty images. The findings reveal that so-called “neutral” visuals predominantly normalize Western culture as the universal standard, while Islamic contexts are reduced to religious markers and East Asian contexts are depicted through exotic or folkloric motifs. These results demonstrate that generative AI systems, due to imbalances in their training datasets, reproduce Western-centric norms and portray non-Western cultures in reductive and stereotypical ways. The study emphasizes that algorithmic bias is not merely a technical limitation but also an ideological process that reproduces social and cultural inequalities.</p></trans-abstract>
                                                            
            
                                                            <kwd-group xml:lang="tr">
                                                    <kwd>Algoritmik Yanlılık</kwd>
                                                    <kwd>Üretken Yapay Zekâ</kwd>
                                                    <kwd>Kültürel Temsil</kwd>
                                                    <kwd>Göstergebilim</kwd>
                                                    <kwd>Batı-merkezcilik</kwd>
                                            </kwd-group>
                                                        
                                                                            <kwd-group xml:lang="en">
                                                    <kwd>Algorithmic Bias</kwd>
                                                    <kwd>Generative Artificial Intelligence</kwd>
                                                    <kwd>Cultural Representation</kwd>
                                                    <kwd>Semiotics</kwd>
                                                    <kwd>Western-centrism</kwd>
                                            </kwd-group>
                                                                                                            </article-meta>
    </front>
    <back>
                            <ref-list>
                                    <ref id="ref1">
                        <label>1</label>
                        <mixed-citation publication-type="web">Baum, J., &amp; Villasenor, J. (2024, April 17). Rendering misrepresentation: Diversity failures in AI image generation. Brookings Institution. https://www.brookings.edu/articles/rendering-misrepresentation-diversity-failures-in-ai-image-generation/</mixed-citation>
                    </ref>
                                    <ref id="ref2">
                        <label>2</label>
                        <mixed-citation publication-type="book">Barthes, R. (1972). Mythologies. Hill and Wang.</mixed-citation>
                    </ref>
                                    <ref id="ref3">
                        <label>3</label>
                        <mixed-citation publication-type="book">Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim code. Polity.</mixed-citation>
                    </ref>
                                    <ref id="ref4">
                        <label>4</label>
                        <mixed-citation publication-type="journal">Cecere, G., Jean, C., Le Guel, F., &amp; Manant, M. (2024). Artificial intelligence and algorithmic bias? Field tests on social network with teens. Technological Forecasting and Social Change, 201, 123204. https://doi.org/10.1016/j.techfore.2023.123204</mixed-citation>
                    </ref>
                                    <ref id="ref5">
                        <label>5</label>
                        <mixed-citation publication-type="book">Eubanks, V. (2018). Automating inequality: How high-tech tools profile, police, and punish the poor. Picador.</mixed-citation>
                    </ref>
                                    <ref id="ref6">
                        <label>6</label>
                        <mixed-citation publication-type="journal">Fazelpour, S., &amp; Danks, D. (2021). Algorithmic bias: Senses, sources, solutions. Philosophy Compass.</mixed-citation>
                    </ref>
                                    <ref id="ref7">
                        <label>7</label>
                        <mixed-citation publication-type="chapter">Fidan, Ü. (2025). Yönetim Bilişim Sistemleri Perspektifinden Algoritmik Yanlılık ve Etik Karar Verme. In S. Vahid (Ed.), Yönetim Bilişim Sistemleri Alanında Yenilikçi Çözümler ve Güncel Yaklaşımlar (pp. 89–120). Özgür Yayınları.</mixed-citation>
                    </ref>
                                    <ref id="ref8">
                        <label>8</label>
                        <mixed-citation publication-type="chapter">Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, &amp; K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society. MIT Press.</mixed-citation>
                    </ref>
                                    <ref id="ref9">
                        <label>9</label>
                        <mixed-citation publication-type="web">Johnson, K. (2022, May 5). DALL-E 2 creates incredible images—and biased ones you don’t see. Wired. https://www.wired.com/story/dall-e-2-ai-text-image-bias-social-media/</mixed-citation>
                    </ref>
                                    <ref id="ref10">
                        <label>10</label>
                        <mixed-citation publication-type="journal">Kennedy, H. (2018). Post, mine, repeat: Social media data mining becomes ordinary. Social Media + Society.</mixed-citation>
                    </ref>
                                    <ref id="ref11">
                        <label>11</label>
                        <mixed-citation publication-type="book">Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.</mixed-citation>
                    </ref>
                                    <ref id="ref12">
                        <label>12</label>
                        <mixed-citation publication-type="journal">Noble, S. U., &amp; Tynes, B. M. (2016). The intersectional internet: Race, sex, class, and culture online.
University of Illinois Press.</mixed-citation>
                    </ref>
                                    <ref id="ref13">
                        <label>13</label>
                        <mixed-citation publication-type="journal">Özer, N. P., &amp; Yarar, A. E. (2019). Göstergebilimsel bir reklam analizi: Burger King “ateş seni çağırıyor.”
Atatürk İletişim Dergisi, 18, 105–124. https://doi.org/10.32952/atauniiletisim.641375</mixed-citation>
                    </ref>
                                    <ref id="ref14">
                        <label>14</label>
                        <mixed-citation publication-type="journal">Özer, N. P., &amp; Zengin, A. M. (2020). Arabesk kültürü özelinde Orhan Gencebay’ın rol aldığı Rexona
reklamlarının incelenmesi. Kritik İletişim Çalışmaları Dergisi, 2(1), 1–12.</mixed-citation>
                    </ref>
                                    <ref id="ref15">
                        <label>15</label>
                        <mixed-citation publication-type="book">Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press.</mixed-citation>
                    </ref>
                                    <ref id="ref16">
                        <label>16</label>
                        <mixed-citation publication-type="journal">Rosenthal-von der Pütten, A. M., &amp; Sach, A. (2024). Michael is better than Mehmet: Exploring the perils
of algorithmic biases and selective adherence to advice from automated decision support systems in hiring.
Frontiers in Psychology, 15. https://doi.org/10.3389/fpsyg.2024.1416504</mixed-citation>
                    </ref>
                                    <ref id="ref17">
                        <label>17</label>
                        <mixed-citation publication-type="confproc">Sandvig, C., Hamilton, K., Karahalios, K., &amp; Langbort, C. (2014). Auditing algorithms: Research methods for detecting discrimination on internet platforms. In Data and discrimination: Converting critical concerns into productive inquiry.</mixed-citation>
                    </ref>
                                    <ref id="ref18">
                        <label>18</label>
                        <mixed-citation publication-type="journal">Thiem, A., &amp; Dusa, A. (2020). Algorithmic bias in social research: A meta-analysis. PLOS ONE, 15(4), e0233625. https://doi.org/10.1371/journal.pone.0233625</mixed-citation>
                    </ref>
                                    <ref id="ref19">
                        <label>19</label>
                        <mixed-citation publication-type="preprint">Zajko, M. (2020). Conservative AI and social inequality: Conceptualizing alternatives to bias through social theory. arXiv. https://arxiv.org/abs/2007.08666</mixed-citation>
                    </ref>
                            </ref-list>
                    </back>
    </article>
