Research Article

CONCEPTUAL REVIEW OF ARTIFICIAL INTELLIGENCE; DIFFERENCES BETWEEN HUMAN AND MACHINE LEARNING

Year 2024, Volume 14, Issue 3, 648 - 659, 01.07.2024
https://doi.org/10.7456/tojdac.1464262

Abstract

Machine learning and artificial intelligence produce algorithms that appear to make "intelligent" decisions similar to those of humans but operate differently from human thinking. For humans to make decisions based on machine suggestions, it is important that they can understand the background of these suggestions. Since humans are oriented toward understanding human intelligence, it is not yet fully clear whether they can truly understand the "thinking" created by machine learning, or whether they merely project human-like cognitive processes onto machines. Moreover, media representations of artificial intelligence present it as having greater capabilities and human likeness than it actually possesses. In everyday life, we increasingly encounter assistance systems designed to facilitate human tasks and decisions on the basis of intelligent algorithms. These algorithms rely predominantly on machine learning technologies, which make it possible to discover previously unknown correlations by analyzing large amounts of data. The machine analysis of thousands of X-ray images of sick and healthy people can serve as an example. To make such a system work, it is necessary to determine the patterns by which images labeled "healthy" can be distinguished from those labeled "sick" and to find an algorithm that identifies the latter. "Trained" algorithms created in this way are used in various fields of application, not only for medical diagnoses but also in the pre-selection of applicants for a job advertisement or in communication with the help of voice assistants. In the first part of the study, artificial intelligence is explained conceptually, after which the concepts of weak and strong artificial intelligence are examined. Then, after the subcategories of artificial intelligence are described, the distinctions between human learning and machine learning are addressed. Following an examination of the concepts of machine learning and deep learning, the conclusion discusses the potential risks and opportunities created by machine learning.

References

  • Asendorpf, J. (2004). Psychologie der Persönlichkeit. Heidelberg.
  • Baacke, D. (1998). Zum Konzept und zur Operationalisierung von Medienkompetenz, https://www.produktivemedienarbeit.de/ressourcen/bibliothek/fachartikel/baacke_operationalisierung.shtml.
  • Beck, S. R., Riggs, K. J. & Burns, P. (2011). Multiple developments in counterfactual thinking. Understanding counterfactuals, understanding causation, p. 110-122. Oxford Academics.
  • Bishop, C. M. (2006). Pattern Recognition And Machine Learning. Springer.
  • Clark, H. (1996). Using Language, Cambridge.
  • DeVito, M., Birnholtz, J., Hancock, J. et al. (2018). How People Form Folk Theories of Social Media Feeds and What It Means for How We Study Self-Presentation. Proceedings of the ACM Conference on Human Factors in Computing Systems, p. 1–12. https://socialmedia.northwestern.edu/wp-content/uploads/2018/01/FolkTheoryFormation_CHI2018.pdf.
  • Fenske, O., Gutschmidt, A. & Grunert, H. (2020). Was ist Künstliche Intelligenz?. Whitepaper-Serie des Zentrums für Künstliche Intelligenz in MV Ausgabe 1. Rostock.
  • Frith, Ch. & Frith, U. (2006). How we predict what other people are going to do. Brain Research. 1079/1, p. 36–46.
  • Fussell, S. & Krauss, M. (1992). Coordination of knowledge in communication: Effects of speakers’ assumptions about others’ knowledge, Journal of Personality and Social Psychology, 62/ 3, p. 378–391.
  • Gilpin, L., Bau, D., Yuan, B. et al. (2018). Explaining Explanations: An Overview of Interpretability of Machine Learning. IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA). https://doi.org/10.1109/DSAA.2018.00018.
  • Goodfellow, I., Bengio, Y. & Courville, A. (2016). Deep Learning. MIT Press.
  • Gross, F. & Röllecke, R. (2022). Dieter Baacke Preis Handbuch 17. Love, Hate & More. Gesellschaft für Medienpädagogik und Kommunikationskultur der Bundesrepublik Deutschland e. V. (GMK).
  • Horstmann, A. & Krämer, N. (2019). Great Expectations? Relation of Previous Experiences With Social Robots in Real Life or the Media and Expectancies Based on Qualitative and Quantitative Assessment. Frontiers in Psychology, 10, p. 939. https://doi.org/10.3389/fpsyg.2019.00939.
  • Kersting, K. & Tresp, V. (2019). Maschinelles und Tiefes Lernen. Digitale Welt 3. 32–34 (2019). https://doi.org/10.1007/s42354-019-0209-4.
  • Kolb, D. A. (1984). Experiential learning: Experience as the source of learning and development. Prentice-Hall.
  • Krempl, S. (2023). Manipulationsgefahr: EU-Kommission fordert rasch Kennzeichnung von KI-Inhalten. In: heise online. https://www.heise.de/news/Manipulationsgefahr-EU-Kommission-fordert-rasch-Kennzeichnung-von-KI-Inhalten-9179211.html (2023, December 12).
  • Krämer, N., Artelt, A., Geminn, C. et al. (2019). KI-basierte Sprachassistenten im Alltag: Forschungsbedarf aus informatischer, psychologischer, ethischer und rechtlicher Sicht. Universität Duisburg-Essen. https://doi.org/10.17185/duepublico/70571.
  • LeCun, Y., Bengio, Y. & Hinton, G. (2015). Deep learning. Nature, 521(7553), p. 436-444.
  • Lesch, H. & Schwartz, T. (2020). Unberechenbar. Das Leben ist mehr als eine Gleichung, Freiburg.
  • McCarthy, J. (1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Reprinted in AI Magazine, 27/4, 2006.
  • Michie, D. & Spiegelhalter, D. (1994). Machine Learning, Neural and Statistical Classification. Ellis Horwood Series in Artificial Intelligence, New York.
  • Minsky, M. & Papert, S. (1969). Perceptrons: An Introduction to Computational Geometry. Boston; Boden, M. A. (2006). Mind as Machine: A History of Cognitive Science. Oxford.
  • Mitchell, T. M. (1997). Machine Learning. McGraw Hill.
  • Nass, C. & Moon, Y. (2000). Machines and mindlessness: Social responses to computers. Journal of Social Issues, 56/1, p. 81–103; Krämer, N. (2008). Soziale Wirkungen von virtuellen Helfern. Stuttgart.
  • Nensa, F., Demircioglu, A., Rischpler, Ch. (2019). Artificial Intelligence in Nuclear Medicine, Journal of Nuclear Medicine 60/1, p. 1–9, https://doi.org/10.2967/jnumed.118.220590.
  • Neuhöfer, S. (2023). Grundrechtsfähigkeit Künstlicher Intelligenz. Duncker & Humblot, Berlin.
  • Ngo, T., Kunkel, J., Ziegler, J. (2020). Exploring Mental Models of Recommender Systems: A Qualitative Study. UMAP ’20: Proceedings of the 28th Conference on User Modeling, Adaptation and Personalization, p. 183–191.
  • Nilsson, N. J. (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann.
  • Premack, D. & James Premack, A. (1995). Origins of human social competence, in Michael S. Gazzaniga (Ed.), The cognitive neurosciences, p. 205–218, Cambridge.
  • Reeves, B., Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge.
  • Russell, S. J., & Norvig, P. (2016). Artificial Intelligence: A Modern Approach. Pearson.
  • Scharf, I. & Tödte, J. (2020). Digitalisierung mit Kultureller Bildung gestalten. In: Kulturelle Bildung Online. https://www.kubi-online.de/artikel/digitalisierung-kultureller-bildung-gestalten (18.12.2023); Schneider, S. (2019). Artificial You. Princeton University Press, New Jersey.
  • Scherk, J., Pöchhacker, G. & Wagner, K. (2017). Künstliche Intelligenz, Artificial Intelligence. Pöchhacker Innovation Consulting. Linz.
  • Siegler, R. S. (1998). Children's thinking (3rd ed.). Prentice Hall.
  • Sindermann, M., Albrich, K. (2023). Chancen und Risiken: Künstliche Intelligenz im Spannungsfeld des Kinder- und Jugendmedienschutzes, BzKJAKTUELL 4/2023.
  • Süss, D., Lampert, C. & Wijnen, C. (2010). Medienpädagogische Ansätze: Grundhaltungen und ihre Konsequenzen. In: Medienpädagogik. VS Verlag für Sozialwissenschaften. ISBN 978-3-658-19823-7.
  • Turing, A. M. (1950). Computing Machinery and Intelligence. Mind: A Quarterly Review of Psychology and Philosophy, 59/236, p. 433-460.
  • Voosen, P. (2017). How AI detectives are cracking open the black box of deep learning. As neural nets push into science, researchers probe back. Science, https://www.science.org/content/article/how-ai-detectives-are-cracking-open-black-box-deep-learning.

CONCEPTUAL REVIEW OF ARTIFICIAL INTELLIGENCE; DIFFERENCES BETWEEN HUMAN AND MACHINE LEARNING

Year 2024, Volume 14, Issue 3, 648 - 659, 01.07.2024
https://doi.org/10.7456/tojdac.1464262

Abstract

Machine learning and artificial intelligence produce algorithms that appear to make "intelligent" decisions similar to those of humans but function differently from human thinking. To make decisions based on machine suggestions, humans should be able to understand the background of these suggestions. However, since humans are oriented toward understanding human intelligence, it is not yet fully clear whether they can truly understand the "thinking" generated by machine learning, or whether they merely transfer human-like cognitive processes to machines. In addition, media representations of artificial intelligence suggest higher capabilities and greater human likeness than such systems currently possess. In our daily lives, we increasingly encounter assistance systems designed to facilitate human tasks and decisions on the basis of intelligent algorithms. These algorithms are predominantly based on machine learning technologies, which make it possible to discover previously unknown correlations and patterns by analyzing large amounts of data. One example is the machine analysis of thousands of X-ray images of sick and healthy people. This requires identifying the patterns by which images labeled "healthy" can be distinguished from those labeled "sick" and finding an algorithm that identifies the latter. "Trained" algorithms created in this way are now used in various fields of application, not only for medical diagnoses but also in the pre-selection of applicants for a job advertisement or in communication with the help of voice assistants, which intelligent algorithms enable to offer internet services in response to short commands. Harald Lesch, referring to Unpredictable, the book he wrote together with Thomas Schwartz, says the development of artificial intelligence can be compared to bringing aliens to Earth: with machine learning, a previously unknown form of non-human intelligence has been created.
This study discusses whether forms of artificial intelligence, as they are currently being publicly discussed, differ substantially from human thinking. It further considers to what extent humans can comprehend the functioning of artificial intelligence created through machine learning when interacting with it. Finally, the risks and opportunities are weighed and discussed.
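The supervised-learning idea behind the X-ray example can be reduced to a deliberately minimal sketch: labeled examples are used to fit a decision rule, which is then applied to new, unlabeled cases. The sketch below uses a nearest-centroid classifier on invented two-number "images"; all feature names, values, and labels are hypothetical illustrations, not taken from the article or any real diagnostic system.

```python
# Minimal supervised-learning sketch (hypothetical data, nearest-centroid rule).
# Feature vectors stand in for images; the labels "healthy"/"sick" stand in
# for the annotations described in the abstract.

def train(samples):
    """Compute one mean feature vector (centroid) per label."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical 2-feature "images": [brightness, lesion_score]
training_data = [
    ([0.2, 0.1], "healthy"), ([0.3, 0.2], "healthy"),
    ([0.8, 0.9], "sick"),    ([0.7, 0.8], "sick"),
]
model = train(training_data)
print(predict(model, [0.75, 0.85]))  # classify a new, unlabeled case
```

Real systems replace the centroid rule with far more complex models (e.g., deep neural networks), which is precisely what makes their "thinking" hard for humans to inspect, as the article discusses.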

There are 38 citations in total.

Details

Primary Language English
Subjects Journalism, Journalism Studies
Journal Section RESEARCH ARTICLES
Authors

Büşra Sarıkaya 0000-0002-9492-7493

Early Pub Date June 24, 2024
Publication Date July 1, 2024
Submission Date April 3, 2024
Acceptance Date May 20, 2024
Published in Issue Year 2024

Cite

APA Sarıkaya, B. (2024). CONCEPTUAL REVIEW OF ARTIFICIAL INTELLIGENCE; DIFFERENCES BETWEEN HUMAN AND MACHINE LEARNING. Turkish Online Journal of Design Art and Communication, 14(3), 648-659. https://doi.org/10.7456/tojdac.1464262


All site content, except where otherwise noted, is licensed under a Creative Commons Attribution-NonCommercial licence (CC BY-NC 4.0).
