Research Article

Data Violence in Networked Environments: A Study on User Experience

Year 2025, Issue: 18, 106-123, 30.06.2025
https://doi.org/10.55609/yenimedya.1626763

Abstract

In communication that takes place in networked environments, contact with others on the network can carry the violence of everyday life into these environments. Users may thus encounter data violence on social media applications, the form violence takes when transferred to networked environments. This study examines the data violence users encounter in social media environments. Identifying the factors that give rise to data violence is important for diagnosing the problem. The study aims to present a framework covering the factors that cause data violence to emerge and persist, together with what can be done in the face of these seriously harmful actions. Complaints by ordinary users about social media environments, posted on an online complaint platform, were collected using the document analysis technique, and qualitative content analysis was applied to the data. The findings show that data violence occurs under different categories, either from user to user or from platform to user. Platforms' solutions to data violence are inadequate. To prevent data violence stemming from platforms' technical infrastructures, the study emphasizes that the platforms themselves must be designed and operated ethically, and highlights the duties of the authorities.

References

  • Amrute, S. (2019). Of techno-ethics and technoaffects. Feminist Review, 123(1), 56–73. https://doi.org/10.1177/0141778919879744.
  • Andrejevic, M. (2019). Automating surveillance. Surveillance and society, 17(1–2), 7–13. https://doi.org/10.24908/ss.v17i1/2.12930.
  • Bacchini, F., & Lorusso, L. (2019). Race, again: how face recognition technology reinforces racial discrimination. Journal of information, communication and ethics in society, 17(3), 321-335.
  • Binns, R. (2018). Fairness in machine learning: lessons from political philosophy. arXiv preprint arXiv:1712.03586, 1–11. http://arxiv.org/abs/1712.03586.
  • Birhane, A., Kasirzadeh, A., Leslie, D., & Wachter, S. (2023). Science in the age of large language models. Nature reviews physics, 5(5), 277-280.
  • Bridle, J. (2020). Yeni karanlık çağ teknoloji ve geleceğin sonu. (K. Güleç, Translation) İstanbul: Metis.
  • Dechesne, F. (2020). Fair enough? on (avoiding) bias in data, algorithms and decisions. Privacy and Identity Management. Data for Better Living: AI and Privacy: 14th IFIP WG 9.2, 9.6/11.7,11.6/SIG 9.2. 2 International Summer School, Windisch, Switzerland, August 19–23, 2019, Revised Selected Papers 14, 17-26.
  • Floridi, L. (2024). Dördüncü devrim: bilgiküre insan hakikatini nasıl yeniden şekillendiriyor? (O. Önder, Translation) İstanbul: Albaraka yayınları.
  • Fox, S., Menking, A., Steinhardt, S., Hoffmann, A. L., & Bardzell, S. (2017, February). Imagining intersectional futures: Feminist approaches in CSCW. In Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing (pp. 387-393).
  • Gangadharan, S. P. (2017). The downside of digital inclusion: expectations and experiences of privacy and surveillance among marginal Internet users. New media and society, 19(4), 597–615. https://doi.org/10.1177/1461444815614053.
  • Gillespie, T., & Seaver, N. (2016, December 15). Critical algorithm studies: a reading list. Retrieved on October 21, 2024, from Social Media Collective: https://socialmediacollective.org/reading-lists/critical-algorithm-studies/.
  • Greene, D., Hoffmann, A. L., & Stark, L. (2019). Better, nicer, clearer, fairer: a critical assessment of the movement for ethical artificial intelligence and machine learning. Proceedings of the 52nd Hawaii International Conference on System Sciences, 2122–2131. https://doi.org/10.24251/hicss.2019.258.
  • Hartzog, W., & Selinger, E. (2018, August 2). Facial recognition is the perfect tool for oppression. Retrieved on October 22, 2024, from Medium: https://medium.com/s/story/facial-recognition-is-the-perfect-tool-for-oppression-bc2a08f0fe66.
  • Hoffmann, A. L. (2017). Data, technology, and gender: Thinking about (and from) trans lives. In Spaces for the Future (pp. 3-13). Routledge.
  • Hoffmann, A. L. (2018, April 30). Data violence and how bad engineering choices can damage society. Retrieved on October 20, 2024, from Medium: https://medium.com/s/story/data-violence-and-how-bad-engineering-choices-can-damage-society-39e44150e1d4.
  • Hoffmann, A. L. (2019). Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Information Communication and Society, 22(7), 900–915. https://doi.org/10.1080/1369118X.2019.1573912.
  • Hoffmann, A. L. (2021). Terms of inclusion: data, discourse, violence. New media & society, 23(12), 3539-3556.
  • Hoffmann, A. L. (2022). Excerpt from where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. In Ethics of data and analytics (pp. 319-328). Auerbach publications.
  • Holmes, E. (2005). Anti-discrimination rights without equality. The Modern Law Review, 68(2), 175–194. https://doi.org/10.2307/824017.
  • Introna, L. D. (2016). Algorithms, governance, and governmentality: on governing academic writing. Science technology and human values, 41(1), 17–49. https://doi.org/10.1177/0162243915587360.
  • Kayış, H. H. (2021). Dijital eşitsizlikleri yakından incelemek: dijital uçurum buzdağının görünen yüzü müdür? Ege üniversitesi iletişim fakültesi yeni düşünceler hakemli e-dergisi, (15), 109-124.
  • Kleinberg, J., Ludwig, J., Mullainathan, S., & Sunstein, C. R. (2018). Discrimination in the age of algorithms. Journal of legal analysis, 10, 113-174.
  • Leavy, S. (2018). Gender bias in artificial intelligence: the need for diversity and gender theory in machine learning. In Proceedings of the 1st International Workshop on Gender Equality in Software Engineering (pp. 14-16). https://doi.org/10.1145/3195570.3195580.
  • Leszczynski, A. (2016). Speculative futures: cities, data, and governance beyond smart urbanism. Environment and planning A, 48(9), 1691–1708. https://doi.org/10.1177/0308518X16651445.
  • Lippert-Rasmussen, K. (2015). Discrimination: an intriguing but underexplored issue in ethics and political philosophy. Moral philosophy and politics, 2(2), 207-217.
  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: mapping the debate. Big data and society, 3(2), 1–21. https://doi.org/10.1177/2053951716679679.
  • Nissenbaum, H. (2010). Privacy in context: technology, policy, and social life. In Jurimetrics (C. 51). http://search.proquest.com.strauss.uc3m.es:8080/docview/913137931.
  • Novelli, C., Taddeo, M., & Floridi, L. (2024). Accountability in artificial intelligence: what it is and how it works. Ai & Society, 39(4), 1871-1882.
  • Onuoha, M. (2018, March 22). Notes on algorithmic violence. Retrieved on October 20, 2024, from GitHub: https://github.com/MimiOnuoha/On-Algorithmic-Violence.
  • Persson, A., & Kavathatzopoulos, I. (2018). How to make decisions with algorithms: ethical decision-making using algorithms within predictive analytics. ACM SIGCAS computers and society, 47(4), 122-133.
  • Pessach, D., & Shmueli, E. (2023). Algorithmic fairness. In Machine learning for data science handbook: data mining and knowledge discovery handbook (pp. 867-886). Cham: Springer International Publishing.
  • Rosenbaum, H., & Fichman, P. (2019). Algorithmic accountability and digital justice: A critical assessment of technical and sociotechnical approaches. Proceedings of the association for information science and technology, 56(1), 237-244.
  • Russo, F. (2018). Digital technologies, ethical questions, and the need of an informational framework. Philosophy & Technology, 31(4), 655-667.
  • Schmidt, E. (2013, May 20). How should we think about the future? YouTube video.
  • Seaver, N. (2017). Algorithms as culture: Some tactics for the ethnography of algorithmic systems. Big data and society, 4(2), 1–12. https://doi.org/10.1177/2053951717738104.
  • Segall, S. (2012). What’s so bad about discrimination? Utilitas, 24(1), 82–100. https://doi.org/10.1017/S0953820811000379.
  • Shelton, T. (2017). The urban geographical imagination in the age of Big Data. Big data and society, 4(1), 1–14. https://doi.org/10.1177/2053951716665129.
  • Stahl, B. C., & Wright, D. (2018). Ethics and privacy in AI and big data: implementing responsible research and innovation. IEEE Security & Privacy, 16(3), 26-33.
  • Stark, L. (2019). Facial recognition is the plutonium of AI. XRDS: Crossroads, the ACM magazine for students, 25(3), 50-55.
  • Taylor, L. (2017). What is data justice? The case for connecting digital rights and freedoms globally. Big data and society, 4(2), 1–14. https://doi.org/10.1177/2053951717736335.
  • Toplumsal Bilgi ve İletişim Derneği (2021). Türkiye’de dijital şiddet araştırması. Retrieved on February 22, 2025, from https://turkiye.unfpa.org/sites/default/files/pub-pdf/digital_violence_report.pdf.
  • Wharton, C. (2006). Document analysis. In Victor Jupp, The Sage dictionary of social research methods (pp. 79-81). London: Sage.
  • Yıldırım, A. & Şimşek, H. (2018). Sosyal bilimlerde nitel araştırma yöntemleri. Ankara: Seçkin.
  • Zhang, N., & Zhao, W. (2007). Privacy-preserving data mining systems. Computer, 40(4), 52-58.


Ethical Statement

The research was conducted in accordance with the principles of research ethics.

Supporting Institution

There is no supporting institution.


Details

Primary Language: English
Subjects: New Media
Section: Research Articles
Authors

Hasan Hüseyin Kayış 0000-0003-3811-3636

Publication Date: June 30, 2025
Submission Date: January 25, 2025
Acceptance Date: March 19, 2025
Published Issue: Year 2025, Issue: 18

Cite

APA Kayış, H. H. (2025). Data Violence in Networked Environments: A Study on User Experience. Yeni Medya(18), 106-123. https://doi.org/10.55609/yenimedya.1626763