Research Article

Perceptual differences between AI and human compositions: the impact of musical factors and cultural background

Year 2024, Volume: 12, Issue: 4, 463-490, 30.12.2024
https://doi.org/10.12975/rastmd.20241245

Abstract

What Artificial Intelligence (AI) can and cannot do in the field of music is an important question for music researchers and AI experts alike. This study offers a timely analysis in the context of the growing role of AI technologies in music composition and their impact on creative processes, contributing to the literature by positioning AI as a tool that complements the composer's creativity and by deepening the understanding of cultural adaptation processes. The study aims to identify perceptual differences between AI and composer compositions, examine the musical and cultural foundations of these differences, and uncover the factors that shape the listener's experience. The research adopted a mixed-method design combining qualitative and quantitative approaches. In the quantitative phase, a double-blind experimental design ensured that participants evaluated composer and AI works impartially; in the qualitative phase, participants' opinions were gathered. The participants were 10 individuals aged 19 to 25 with diverse cultural and educational backgrounds: 6 had received formal music education, while 4 were casual listeners. Data were collected with a structured interview form and the Assessment Scale for Perceptual Factors in Musical Works. Each participant evaluated two AI works and two composer works in standardized 20-minute listening sessions conducted with professional audio equipment. The analysis revealed that composer works scored significantly higher than AI works in all categories (p < .05), with the most notable differences in emotional depth (composer x̄ = 4.6, AI x̄ = 3.1) and memorability (composer x̄ = 4.4, AI x̄ = 3.2).
The study concluded that composer works were more effective than AI compositions in terms of emotional depth, structural coherence, and cultural resonance. Additionally, cultural background and music education emerged as significant factors shaping perceptual differences. Future research should broaden the participant pool and incorporate neurocognitive data to facilitate a deeper understanding of perceptual mechanisms. Furthermore, the development of AI systems for use in music should include the integration of Transformer and RNN-based advanced learning models, the implementation of traditional music theory principles, the enhancement of emotional expressiveness, the improvement of cultural adaptation capacities, and the refinement of real-time interaction mechanisms.
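To make the reported comparison concrete, the sketch below reproduces the kind of analysis the abstract describes: comparing mean listener ratings between composer and AI works and testing the difference for significance. The ratings here are entirely hypothetical, chosen only so that the group means match the reported values for emotional depth (4.6 vs. 3.1); the study's per-participant data are not reproduced, and Welch's t-test is one plausible choice of test, not necessarily the one the authors used.

```python
import math
import statistics

# Hypothetical 1-5 emotional-depth ratings, for illustration only;
# constructed so the means equal the reported 4.6 (composer) and 3.1 (AI).
composer_scores = [4.8, 4.5, 4.7, 4.4, 4.6, 4.5, 4.7, 4.6, 4.5, 4.7]
ai_scores       = [3.2, 3.0, 3.1, 2.9, 3.3, 3.0, 3.2, 3.1, 3.0, 3.2]

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

t = welch_t(composer_scores, ai_scores)
# A |t| well above ~2.1 (two-tailed critical value near df = 18)
# would correspond to the reported p < .05.
print(statistics.mean(composer_scores), statistics.mean(ai_scores), round(t, 2))
```

Welch's variant is used rather than Student's t because it does not assume equal variances across the two groups, which is safer for small samples like the n = 10 described in the study.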

Ethics Statement

Ethics committee approval was obtained with Decision No. 2024/375 from the Social and Human Sciences Scientific Research and Publication Ethics Committee of Afyon Kocatepe University, Republic of Türkiye.

Acknowledgments

Thanks go to the students of the State Conservatory for their participation. Many thanks to Özlem Folb for her valuable help in translating the article.

References

  • Aylett, M. P., Vinciarelli, A., & Wester, M. (2020). Voice puppetry: A new application of automatic voice transformation for emotional speech synthesis. In Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 108-109). Association for Computing Machinery.
  • Bent, I., & Drabkin, W. (1987). Analysis. Macmillan.
  • Born, G., & Devine, K. (2019). Music technology, gender and class: Digitalization, educational and social change in England. Twentieth-Century Music, 16(1), 3-37.
  • Briot, J. P., Hadjeres, G., & Pachet, F. D. (2020). Deep learning techniques for music generation. Springer.
  • Burns, K. H. (1994). The history and development of algorithms in music composition, 1957–1993. Doctoral dissertation, Ball State University.
  • Camurri, A., Hashimoto, S., Ricchetti, M., Ricci, A., Suzuki, K., Trocca, R., & Volpe, G. (1999). Synthesis and analysis of emotionally expressive music performance. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (pp. 317-322).
  • Carnovalini, F., & Rodà, A. (2020). Computational creativity and music production systems: An introduction to the current state of play. Frontiers in Artificial Intelligence, 3(14), 1-26.
  • Chen, Z. (2024). Composing music under certain conditions based on neural network. Applied and Computational Engineering, 64, 186-192.
  • Clarke, E., & Doffman, M. (2019). Distributed creativity. Oxford University Press.
  • Collins, T., & Laney, R. (2017). Computer-generated stylistic compositions with long-term repetitive and phrasal structure. Journal of Creative Music Systems, 1(2). https://doi.org/10.5920/JCMS.2017.02
  • Cook, N. (2020). Music as creative practice. Oxford University Press.
  • Cope, D. (1987). An expert system for computer-assisted composition. Computer Music Journal, 11(4), 30-46.
  • Cope, D. (2003). Computer analysis of musical allusions. Computer Music Journal, 27(1), 11-28.
  • Daniel, J. C. (2016). Neural mechanisms of musical rhythm processing: Cross-cultural differences and stages of beat perception. Doctoral thesis, The University of Western Ontario, Canada.
  • Deltorn, J., & Macrez, F. (2018). Authorship in the age of machine learning and artificial intelligence. In S. M. O'Connor (Ed.), The Oxford handbook of music law and policy (forthcoming). Oxford University Press. CEIPI Research Paper No. 2018-10. https://doi.org/10.2139/ssrn.3261329
  • Dubnov, S., Huang, K., & Wang, C. (2021). Towards intercultural analysis using music information dynamics. arXiv preprint arXiv:2111.12588. https://doi.org/10.48550/arXiv.2111.12588
  • Fernández, J. D., & Vico, F. (2013). Artificial intelligence methods in algorithmic composition: A comprehensive review. Journal of Artificial Intelligence Research, 48, 513-582.
  • Ferreira, P., Limongi, R., & Fávero, L. P. (2023). Data-driven music production: Application of deep learning models for symbolic music composition. Applied Sciences, 13(7), 4543.
  • Giordano, B. L. (2011). Music perception. The Journal of the Acoustical Society of America, 129(6), 4086.
  • Greenberg, D. M., Wride, S. J., Snowden, D. A., Spathis, D., Potter, J., & Rentfrow, P. J. (2022). Universal and variable elements in music preferences: A study on preferential responses to Western music across 53 countries. Journal of Personality and Social Psychology, 122(2), 286-302.
  • Harper-Scott, J. P. E., & Samson, J. (2021). Introduction to music studies. Cambridge University Press.
  • Hong, J. W. (2022). Living with the most human-like non-humans: Understanding human-AI interactions in different social contexts. AI & Society, 37, 1405-1415.
  • Hong, J.-W., Fischer, K., Ha, Y., & Zeng, Y. (2022). I wrote a song for you: An experiment testing the impact of machine characteristics on the evaluation of AI-composed music. Computers in Human Behavior, 131, 107239. https://doi.org/10.1016/j.chb.2022.107239
  • Huang, C. Z. A., Vaswani, A., Uszkoreit, J., Shazeer, N., Simon, I., Hawthorne, C., ... & Eck, D. (2019). Music transformer: Generating music with long-term structure. In Proceedings of the International Conference on Learning Representations (ICLR).
  • Huang, H., Man, J., Li, L., & Zeng, R. (2024). Musical timbre style transfer with diffusion models. PeerJ Computer Science, 10, e2194. https://doi.org/10.7717/peerj-cs.2194
  • Jones, M. R. (2010). Music perception: Current research and future directions. In Springer Handbook of Auditory Research (pp. 1-12). Springer.
  • Koelsch, S. (2020). The cognitive neuroscience of music. Oxford University Press.
  • Koelsch, S., & Siebel, W. A. (2005). Towards a neural basis of music perception. Trends in Cognitive Sciences, 9(12), 578-584.
  • Laidlow, R. (2023). Artificial intelligence in the creative process within contemporary classical music. Contemporary Music Review, 42(1-2), 1-21.
  • Liebman, E., & Stone, P. (2020). Artificial musical intelligence: A survey. arXiv preprint arXiv:2006.10553. https://doi.org/10.48550/arXiv.2006.10553
  • Liu, C.-H., & Ting, C.-K. (2017). Computational intelligence in music composition: A survey. IEEE Transactions on Emerging Topics in Computational Intelligence, 1(1), 2-15. https://doi.org/10.1109/TETCI.2016.2642200
  • McDermott, H.J. (2004). Music perception with cochlear implants: A review. Trends in Amplification, 8(2), 49-82.
  • McNamee, A. K., Schwanauer, S. M., & Levitt, D. A. (1995). Machine models of music. Journal of Music Theory, 39(1), 170-183.
  • Miranda, E. R. (2021). The handbook of artificial intelligence for music: Foundations, advanced approaches and developments. Springer.
  • Morrison, S. J., & Demorest, S. M. (2009). Cultural constraints on music perception and cognition. Progress in Brain Research, 178, 67-77.
  • Nettl, B. (2015). The study of ethnomusicology: Thirty-three discussions. University of Illinois Press.
  • Pachet, F., Roy, P., & Carré, B. (2020). Music creation supported by Flow Machines: Towards new categories of novelty. Journal of Creative Music Systems, 5(1), 22-46.
  • Pearce, M., & Wiggins, G. A. (2020). Experimental comparison of PPM variants on a monophonic music prediction task. Journal of New Music Research, 49(1), 53-79.
  • Peretz, I., & Zatorre, R. J. (Eds.). (2003). The cognitive neuroscience of music. Oxford University Press.
  • Prabhakaran, V., Qadri, R., & Hutchinson, B. (2022). Cultural incongruencies in artificial intelligence. arXiv preprint arXiv:2211.13069. https://doi.org/10.48550/arXiv.2211.13069
  • Ragot, M., Martin, N., & Cojean, S. (2020). AI-generated vs. human artworks: A perception bias towards artificial intelligence? In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-13). ACM. https://doi.org/10.1145/3334480.3382892
  • Roads, C. (1985). Music and artificial intelligence research. ACM Computing Surveys, 17(2), 163-190. https://doi.org/10.1145/4468.4469
  • Samo, A., & Highhouse, S. (2023). Artificial intelligence and art: Identifying aesthetic judgment factors that differentiate human from machine-produced artworks. Psychology of Aesthetics, Creativity, and the Arts, 17(4), 459-471. https://doi.org/10.1037/aca0000570
  • Sarmento, P., Loth, J., & Barthet, M. (2024). Between me and AI: An analysis of listener perspectives on AI-composed progressive metal music. arXiv preprint arXiv:2407.21615. https://doi.org/10.48550/arXiv.2407.21615
  • Schoner, B., Cooper, C., Douglas, C., & Gershenfeld, N. (2020). Data-driven modeling of acoustic instruments. In Proceedings of the International Computer Music Conference (pp. 358-365).
  • Shank, D. B., Stefanik, C., Stuhlsatz, C., Kacirek, K., & Belfi, A. M. (2023). Composer bias towards AI-generated music: Listeners like it less when they think it was composed by AI. Journal of Experimental Psychology: Applied, 29(3), 676-692.
  • Siganos, A. (2024). International musical preferences as a measure of culture: Evidence from cross-border mergers. The European Journal of Finance, 30(2), 166-189.
  • Simonetta, F., Avanzini, F., & Ntalampiras, S. (2022). A perceptual measure for evaluating automatic music transcriptions resynthesis. Applied Sciences, 12(4), 1876.
  • Stevens, C. J. (2012). Music perception and cognition: A review of recent cross-cultural research. Topics in Cognitive Science, 4(4), 653-667.
  • Stolyarov II, G. (2019). Empowering musical creation through machines, algorithms and artificial intelligence. INSAM Journal of Contemporary Music, Art and Technology, 2, 81-99.
  • Stolzenburg, F. (2013). Harmony perception with periodicity detection. arXiv preprint arXiv:1306.6458.
  • Sturm, B. L., Ben-Tal, O., Monaghan, U., Collins, N., Herremans, D., Chew, E., Hadjeres, G., Deruty, E., & Pachet, F. (2018). Machine learning research that matters for music creation: A case study. Journal of New Music Research, 47(5), 1-21. https://doi.org/10.1080/09298215.2018.1515233
  • Sun, D., Wang, H., & Xiong, J. (2024). Do you want to listen to my music buddy? An experiment on AI musicians. International Journal of Human-Computer Interaction, 40(12), 3133-3143.
  • Susino, M. (2015). Intercultural emotional experiences as a reaction to music. Journal of Cross-Cultural Psychology, 46(8), 1050-1062.
  • Tirovolas, A. K., & Levitin, D. J. (2011). Research on music perception and cognition from 1983 to 2010: A categorical bibliometric analysis of empirical papers on music perception. Music Perception, 29(1), 23-36.
  • Tubadji, A., Huang, H., & Webber, D. J. (2021). Cultural proximity bias in AI acceptability: The importance of being human. Technological Forecasting and Social Change, 173, 121100.
  • Wang, J., Jin, Y., Zhang, K., Lin, F., & Chen, L. (2019). An unsupervised methodology for musical style transformation. In Proceedings of the 2019 International Conference on Computational Intelligence and Security (pp. 247-251).
  • Webster, P. R., & Mertens, G. (2022). Music technology and education during crisis times: The effects and lessons learned from COVID-19. International Journal of Music Education, 40(2), 102-116.
  • Wiggins, G. A., Pearce, M. T., & Müllensiefen, D. (2012). Computational modeling of music cognition and musical creativity. In R. T. Dean (Ed.), The Oxford handbook of computer music. Oxford University Press.
  • Xia, Y., Jiang, Y., & Ye, T. (2020). Music classification in MIDI format based on LSTM model. arXiv preprint arXiv:2010.07739. https://doi.org/10.48550/arXiv.2010.07739
  • Xiong, Z., Wang, W., Yu, J., Lin, Y., & Wang, Z. (2023). A comprehensive review on evaluation methodologies for AI-generated music. IEEE Access, 11, 123456-123470.
  • Yang, L. C., & Lerch, A. (2020). On the evaluation of generative models in music. Neural Computing and Applications, 32(9), 4773-4784.
  • Yaozhu Chan, P., Dong, M., & Li, H. (2019). The science of harmony: A psychophysical basis for perceptual tensions and resolutions in music. Journal of Neuroscience, 39(15), 2825-2834.
  • Yuqiang, L., Shengchen, L., & Georgy, F. (2020). How musical features and representations affect objective assessments in musical composition. In Proceedings of the 21st International Society for Music Information Retrieval Conference (pp. 234-241).
  • Zhu, Y., Baca, J., Rekabdar, B., & Rawassizadeh, R. (2023). A survey on AI-based tools and models for music production. ACM Computing Surveys, 56(2), 1-35. https://doi.org/10.48550/arXiv.2308.12982
  • Zlatkov, D., Ens, J., & Pasquier, P. (2023). Investigating bias against AI-composed music. In Lecture Notes in Computer Science (pp. 308-323). Springer.


Details

Primary Language: English
Subjects: Music Technology and Recording
Section: Original research
Authors

Seyhan Canyakan 0000-0001-6373-4245

Early View Date: December 30, 2024
Publication Date: December 30, 2024
Submission Date: October 2, 2024
Acceptance Date: December 30, 2024
Published Issue: Year 2024, Volume: 12, Issue: 4

How to Cite

APA Canyakan, S. (2024). Perceptual differences between AI and human compositions: the impact of musical factors and cultural background. Rast Musicology Journal, 12(4), 463-490. https://doi.org/10.12975/rastmd.20241245
