Research Article

Bilişimsel Propaganda Farkındalık Ölçeği ve Kamuoyu Üzerindeki Etkisi

Year 2024, Issue: 46, 45 - 74, 31.12.2024
https://doi.org/10.17829/turcom.1489518

Abstract

Social networks have become important communication channels during election periods. New political communication activities have evolved first into "digital propaganda" and then into "computational propaganda". Computational propaganda, the propaganda of the digital environment, is a new type of propaganda conducted through bots. Because individuals may knowingly or unknowingly accept bots as sources of information, bots can exert a significant influence on public opinion. Various studies have shown that bots, which become especially active during election periods, are used as instruments of political engineering on society. Awareness of computational propaganda is important for recognizing how false information spreads and what its interactions lead to, and for understanding the effects of technology on communication with individuals and on politics. In this context, the Computational Propaganda Awareness Scale (Bilişimsel Propaganda Farkındalık Ölçeği, CPAS) was developed. The study yielded a CPAS structure of 17 items in total, grouped under five factors. Data from 505 respondents collected with the CPAS were analysed in the Jamovi program. The total variance explained by the five factors of the CPAS is 56.4%. In addition, the relationship between CPAS scores and the public's monitoring behaviour was examined. As a result of computational propaganda, the public divides into two groups: "spectators" and "reactors". The model developed in our study correctly identifies spectators at a rate of 50.4% and reactors at a rate of 62.1%. The research aims to contribute to building a more conscious public by raising individuals' level of awareness in the fight against computational propaganda.
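The two rates reported above are group-wise correct-classification rates. Read against a standard 2×2 classification table (a framing assumed here for illustration; the abstract reports only the percentages, not the underlying model), with "reactor" taken as the positive class, they correspond to:

$$
\frac{TN}{TN+FP} = 50.4\% \;\; \text{(spectators correctly identified)}, \qquad
\frac{TP}{TP+FN} = 62.1\% \;\; \text{(reactors correctly identified)}.
$$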

References

  • Aggarwal, N. (2020). The norms of algorithmic credit scoring. Cambridge Law Journal, 80(1), 42-73. https://doi.org/10.1017/S0008197321000015
  • Arnaudo, D. (2017). Computational propaganda in Brazil: Social bots during elections [Working paper No. 2017.8]. Computational Propaganda Research Project, University of Oxford. https://demtech.oii.ox.ac.uk/wp-content/uploads/sites/12/2017/06/Comprop-Brazil-1.pdf
  • Atasoy, D. (2001). Lojistik regresyon analizinin incelenmesi ve bir uygulaması [Unpublished master's thesis]. Cumhuriyet Üniversitesi Sosyal Bilimler Enstitüsü.
  • Baykul, Y. (2022). Eğitimde ve psikolojide ölçme: Klâsik test teorisi ve uygulaması (Vol. 6). PEGEM.
  • Büyüköztürk, Ş. (2005). Anket geliştirme. The Journal of Turkish Educational Sciences, 3(2), 133-151.
  • Balcı, A. (2018). Sosyal bilimlerde araştırma: Yöntem teknik ve ilkeler. Pegem Akademi.
  • Benkler, Y., Faris, R. & Roberts, H. (2018). Network propaganda: Manipulation, disinformation, and radicalization in American politics. Oxford University Press.
  • Bolsover, G. & Howard, P. (2017). Computational propaganda and political big data: Moving toward a more critical research agenda. Big Data, 5(4), 273-276. https://doi.org/10.1089/big.2017.29024.cpr
  • Bradshaw, S. & Howard, P. (2017). Troops, trolls and troublemakers: A global inventory of organized social media manipulation [Working paper No. 2017.12]. Computational Propaganda Research Project, University of Oxford. https://ora.ox.ac.uk/objects/uuid:cef7e8d9-27bf-4ea5-9fd6-855209b3e1f6/files/m3ca8c455852611e82d0fb182445a471f
  • Crocker, L. & Algina, J. (1986). Introduction to classical and modern test theory. Cengage Learning.
  • Daily Mail. (2013, April 24). Syrian Electronic Army linked to hack attack on AP Twitter feed that ‘broke news’ Obama had been injured in White House blast and sent Dow Jones plunging. https://www.dailymail.co.uk/news/article-2314001/Syrian-Electronic-Army-linked-hack-attack-AP-Twitter-feed-broke-news-Obama-injured-White-House-blast-sent-Dow-Jones-plunging.html
  • Daniel, F. & Millimaggi, A. (2020). On Twitter bots behaving badly: A large-scale study of code patterns on GitHub. Journal of Web Engineering, 18(8), 801–836. https://doi.org/10.13052/jwe1540-9589.1883
  • D’Alessio, F. A. (2021). Computational propaganda: Challenges and responses. Academia Letters, (3468), 1-8. https://doi.org/10.20935/AL3468
  • Durbin, J. & Watson, G. S. (1950). Testing for serial correlation in least squares regression. Biometrika, 37(3/4), 409–428. https://doi.org/10.1093/biomet/37.3-4.409
  • Field, A. (2009). Discovering statistics using SPSS (3rd ed.). Sage Publications.
  • Floridi, L. (2020). The fight for digital sovereignty: What it is, and why it matters, especially for the EU. Philosophy & Technology, 33(3), 369–378. https://doi.org/10.2139/ssrn.3827089
  • Floridi, L. & Taddeo, M. (2016). What is data ethics? Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), 1-5. https://doi.org/10.1098/rsta.2016.0360
  • Gorwa, R. & Guilbeault, D. (2020). Unpacking the social media bot: A typology to guide research and policy. Policy & Internet, 12(2), 225–248. https://doi.org/10.48550/arXiv.1801.06863
  • Green, B. & Chen, Y. (2019, January 29-31). Disparate interactions: An algorithm-in-the-loop analysis of fairness in risk assessments [Conference paper]. FAT* ’19: Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA. https://doi.org/10.1145/3287560.3287563
  • Hill, J., Ford, W. & Farreras, I. (2015). Real conversations with artificial intelligence: A comparison between human–human online conversations and human–chatbot conversations. Computers in Human Behavior, 49, 245–250. https://doi.org/10.1016/j.chb.2015.02.026
  • Himelein-Wachowiak, M., Giorgi, S., Devoto, A., Rahman, M., Ungar, L., Schwartz, H., . . . Curtis, B. (2021). Bots and misinformation spread on social media: Implications for COVID-19. Journal of Medical Internet Research, 23(5), Article e26933. https://doi.org/10.2196/26933
  • Hurtado, S., Ray, P. & Marculescu, R. (2019, May 6-9). Bot detection in Reddit political discussion [Conference paper]. WebSci ’19: 10th ACM Conference on Web Science, Boston, MA, USA. https://doi.org/10.1145/3313294.3313386
  • Keller, F., Schoch, D., Stier, S. & Yang, J. (2019). Political astroturfing on Twitter: How to coordinate a disinformation campaign. Political Communication, 37(2), 256-280. https://doi.org/10.1080/10584609.2019.1661888
  • Klotz, R. J. (2007). Internet campaigning for grassroots and astroturf support. Social Science Computer Review, 25(1), 3-12. https://doi.org/10.1177/0894439306289105
  • Kovic, M., Rauchfleisch, A., Sele, M. & Caspar, C. (2018). Digital astroturfing in politics: Definition, typology, and countermeasures. Studies in Communication Sciences, 18(1), 69–85. https://doi.org/10.24434/j.scoms.2018.01.005
  • Labati, R. D., Genovese, A. & Muñoz, E. (2016). Biometric recognition in automated border control: A survey. ACM Computing Surveys, 49(2), 1–39. https://doi.org/0000.001.0000001
  • Lee, M. S. & Floridi, L. (2020). Algorithmic fairness in mortgage lending: From absolute conditions to relational trade-offs. Minds and Machines, 31, 165–191. https://doi.org/10.1007/s11023-020-09529-4
  • Moravec, H. P. (2024, February 5). Robot. Britannica. https://www.britannica.com/technology/robot-technology
  • Obermeyer, Z., Powers, B. & Vogeli, C. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. https://doi.org/10.1126/science.aax2342
  • Online Etymology Dictionary. (2023). Bot. https://www.etymonline.com/word/bot#etymonline_v_27335
  • Paraschakis, D. (2017). Towards an ethical recommendation framework [Conference paper]. 11th International Conference on Research Challenges in Information Science (IEEE), Brighton, UK. https://doi.org/10.1109/RCIS.2017.7956539
  • Pavlíková, M., Šenkýřová, B. & Drmola, J. (2021). Propaganda and disinformation go online. In M. Gregor & P. Mlejnková (Eds.), Challenging online propaganda and disinformation in the 21st century (pp. 43-75). Palgrave Macmillan.
  • Perra, N. & Rocha, L. (2019). Modelling opinion dynamics in the age of algorithmic personalisation. Scientific Reports, 9(7261), 1-11. https://doi.org/10.1038/s41598-019-43830-2
  • Prates, M. O., Avelar, P. & Lamb, L. (2019). Assessing gender bias in machine translation: A case study with Google Translate. Neural Computing and Applications, 1, 6363–6381. https://doi.org/10.48550/arXiv.1809.02208
  • Schia, N. N. & Gjesvik, L. (2020). Hacking democracy: Managing influence campaigns and disinformation in the digital age. Journal of Cyber Policy, 5(3), 413-428. https://doi.org/10.1080/23738871.2020.1820060
  • Sütcü, C. S. (2021). Önsöz [Preface]. In O. Kuş (Ed.), Algoritmaların gölgesinde toplum ve iletişim (pp. v-vii). Alternatif Bilişim Derneği.
  • Tabachnick, B. G. & Fidell, L. (2001). Using multivariate statistics. Allyn and Bacon.
  • Taddeo, M. & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752. https://doi.org/10.1126/science.aat5991
  • Tavşancıl, E. (2010). Tutumların ölçülmesi ve SPSS ile veri analizi. Nobel Yayıncılık.
  • TechTarget. (2023). What is an algorithm? https://www.techtarget.com/whatis/definition/algorithm
  • TÜİK. (2022). Hanehalkı bilişim teknolojileri (BT) kullanım araştırması. https://data.tuik.gov.tr/Bulten/Index?p=Hanehalki-Bilisim-Teknolojileri-(BT)-Kullanim-Arastirmasi-2022-45587
  • TÜİK. (2023). TÜİK hanehalkı bilişim teknolojileri kullanım araştırması 2023. https://data.tuik.gov.tr/Bulten/Index?p=Hanehalki-Bilisim-Teknolojileri-(BT)-Kullanim-Arastirmasi-2023-49407
  • Tosyalı, H. (2021, October 20-22). Dijital çağda siyasal iletişim: Algoritmalar ve botlar [Conference paper]. Communication and Technology Congress, İstanbul, Türkiye.
  • Ünver, H. A. (2017). Bilişimsel diplomasi [Working paper No. 2017/3]. Siber Politikalar ve Dijital Demokrasi, EDAM. https://edam.org.tr/wp-content/uploads/2017/11/bilisimsel_diplomasi_2.pdf
  • Vaidhyanathan, S. (2018). Antisocial media: How Facebook disconnects us and undermines democracy. Oxford University Press.
  • Vosoughi, S., Roy, D. & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151. https://doi.org/10.1126/science.aap9559
  • We Are Social. (2023). Digital 2023 global overview report. https://www.wearesocial.com/digital-2023
  • Woolley, S. C. & Howard, P. (2016). Automation, algorithms, and politics: Political communication, computational propaganda, and autonomous agents. International Journal of Communication, 10, 4882–4890.
  • Zhou, N., Zhang, C.-T., Lv, H.-Y., Hao, C.-X. & Li, T.-J. (2019). Concordance study between IBM Watson for oncology and clinical practice for patients with cancer in China. Oncologist, 24(6), 812–819. https://doi.org/10.1634/theoncologist.2018-0255

The Computational Propaganda Awareness Scale and Its Impact on Public Opinion


Abstract

Social networks have become vital platforms during election periods, transforming traditional political
communication into “digital propaganda” and “computational propaganda” (CP). CP involves the use
of bots to spread propaganda, significantly influencing public opinion by acting as perceived sources of
information. Studies reveal that bots, particularly active during elections, serve as tools for societal political
engineering. Understanding CP is essential to comprehend the spread of misinformation and its political
and social consequences. To measure this awareness, the Computational Propaganda Awareness Scale (CPAS) was developed, comprising 17 items across five factors. Data from 505 respondents collected with the CPAS were analysed in the Jamovi program. The total variance explained by the five factors of the CPAS is 56.4%.
The study also examined the link between CP awareness and public monitoring behavior. Public opinion
was categorized into “spectators” and “reactors,” with the model correctly identifying 50.4% of spectators
and 62.1% of reactors. This research aims to enhance public awareness, promoting a more informed society
capable of resisting CP’s manipulative effects. By identifying these dynamics, it contributes to fostering
critical engagement with technology in political and communication contexts.
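The workflow summarized in the abstract (an exploratory factor analysis of the 17 CPAS items yielding five factors, followed by a model that classifies respondents as "spectators" or "reactors") was carried out in Jamovi. As a hedged illustration only, the Python sketch below mirrors that type of workflow under assumptions that are not taken from the article: a hypothetical file cpas_responses.csv with columns item_1 … item_17 and a binary reaction indicator, a varimax rotation, and a simple logistic classifier built on the total scale score.

```python
# Illustrative sketch only -- the authors' analysis was run in Jamovi, not in Python.
# Assumes a hypothetical CSV "cpas_responses.csv" with 17 Likert-type item columns
# (item_1 ... item_17) and a binary "reaction" column (0 = spectator, 1 = reactor).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

df = pd.read_csv("cpas_responses.csv")   # hypothetical file name
items = df.filter(like="item_")          # the 17 scale items

# Exploratory factor analysis with five factors, mirroring the reported structure.
fa = FactorAnalyzer(n_factors=5, rotation="varimax")
fa.fit(items)
_, _, cumulative_var = fa.get_factor_variance()
print(f"Variance explained by 5 factors: {cumulative_var[-1]:.1%}")  # article reports 56.4%

# A simple classifier of reaction type from the total CPAS score, analogous in spirit
# to the reported model (50.4% of spectators and 62.1% of reactors correctly identified).
X = items.sum(axis=1).to_frame("cpas_total")
y = df["reaction"]
clf = LogisticRegression().fit(X, y)
tn, fp, fn, tp = confusion_matrix(y, clf.predict(X)).ravel()
print(f"Spectators correctly identified: {tn / (tn + fp):.1%}")
print(f"Reactors correctly identified:   {tp / (tp + fn):.1%}")
```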


Details

Primary Language Turkish
Subjects Communication Studies, Communication Technology and Digital Media Studies, New Communication Technologies, New Media
Journal Section Research Articles
Authors

Elif Akçay 0000-0002-2566-7288

Cem Sütcü 0000-0002-9389-6832

Publication Date December 31, 2024
Submission Date May 24, 2024
Acceptance Date October 15, 2024
Published in Issue Year 2024 Issue: 46

Cite

APA Akçay, E., & Sütcü, C. (2024). Bilişimsel Propaganda Farkındalık Ölçeği ve Kamuoyu Üzerindeki Etkisi. Türkiye İletişim Araştırmaları Dergisi(46), 45-74. https://doi.org/10.17829/turcom.1489518

All articles published in the Turkish Review of Communication Studies are licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.