Research Article

Yapay zekâ ile üretilen müziklerin bestecilik perspektifinden değerlendirilmesi [Evaluation of music generated by artificial intelligence from a compositional perspective]

Year 2025, Issue: Yapay zekâ ve sanat özel sayısı (special issue on artificial intelligence and art), 239-261, 22.10.2025
https://doi.org/10.46372/arts.1743089

Abstract

This study offers a theoretical examination of how music produced by artificial intelligence (AI) can be evaluated in the context of composition. AI now plays a significant role in music production, a development that calls for a reconsideration of traditional understandings of composition and of the creative process. The problem statement of the study is defined as addressing the theoretical gap concerning the evaluation of AI-generated music in the context of composition. Within this scope, musical structures generated by AI are compared with human compositions in terms of aesthetic value, creative expressive power, and structural coherence. Analyses of current examples show that AI can produce technically successful music, yet remains limited in terms of emotional depth, originality, and artistic expression. AI is therefore regarded not as a composer in the traditional sense but as a tool that supports the creative process, offering composers alternative paths of production. The study aims to contribute to theoretical debates on music production with AI.

Evaluation of music generated by artificial intelligence from a compositional perspective

Year 2025, Issue: Yapay zekâ ve sanat özel sayısı, 239-261, 22.10.2025
https://doi.org/10.46372/arts.1743089

Abstract

This study explores how music generated by artificial intelligence (AI) can be evaluated from a compositional perspective. As AI becomes more involved in music production, it challenges traditional notions of creativity and authorship. The problem statement of this study is defined as addressing the theoretical gap concerning the evaluation of AI-generated music in the context of composition. The study compares AI-generated music and human compositions in terms of aesthetic value, originality, and coherence. Findings from the recent literature show that while AI can create technically competent and musically pleasing works, it lacks emotional depth, creative intuition, and artistic intent. AI is therefore seen not as a composer but as a supportive tool that enhances the creative process. This study contributes to ongoing theoretical debates about AI's role in contemporary music composition.


Details

Primary Language English
Subjects Media Technologies
Journal Section Research Articles
Authors

Selin Oyan Küpeli (ORCID: 0009-0006-9102-1596)

Publication Date October 22, 2025
Submission Date July 15, 2025
Acceptance Date October 12, 2025
Published in Issue Year 2025 Issue: Yapay zekâ ve sanat özel sayısı

Cite

APA Oyan Küpeli, S. (2025). Evaluation of music generated by artificial intelligence from a compositional perspective. ARTS: Artuklu Sanat Ve Beşeri Bilimler Dergisi (Yapay zekâ ve sanat özel sayısı), 239-261. https://doi.org/10.46372/arts.1743089