Research Article

How Well Can AI Contribute to Interior Architecture? A Comparative Analysis of Descriptive Accuracy

Year 2026, Volume: 7 Issue: 1 , 95 - 132 , 30.03.2026
https://doi.org/10.53710/jcode.1765357
https://izlik.org/JA78GT75SU

Abstract

This study evaluates the performance of three AI models (Claude 3.5 Sonnet, Gemini 1.5 Flash, and ChatGPT 4o) in generating interior design outputs against seven key design criteria: design style, color, lighting, furniture and product selection, interior materials, architectural features, and spatial layout. The evaluation covered six different space designs, with 15 participants scoring the AI-generated outputs on a scale of 1 to 5. The results indicate that Claude 3.5 Sonnet achieved the highest overall performance thanks to consistent scores across criteria, followed closely by Gemini 1.5 Flash, which excelled in design style and color but showed slight variability. ChatGPT 4o, despite strong performance in the furniture and lighting categories, was ranked lower overall because of inconsistencies across criteria. Even with this competitive performance, spatial layout and interior materials proved challenging for all three models, highlighting clear opportunities for improvement. The study underscores the growing potential of AI to support design processes while emphasizing the need for further refinement to address its limitations in complex spatial and material contexts.
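The evaluation procedure summarized above (15 participants rating three models on seven criteria across six space designs, on a 1–5 scale) amounts to a simple score aggregation. The sketch below illustrates one way such ratings could be pooled into per-criterion and overall means; it is illustrative only — the ratings are randomly generated placeholders rather than the study's data, and all variable and function names are assumptions of this sketch, not the authors' instrument.

```python
# Illustrative aggregation of 1-5 Likert ratings into a model ranking.
# The ratings below are random placeholders, NOT the study's data.
import random

MODELS = ["Claude 3.5 Sonnet", "Gemini 1.5 Flash", "ChatGPT 4o"]
CRITERIA = ["design style", "color", "lighting", "furniture and product selection",
            "interior materials", "architectural features", "spatial layout"]
N_PARTICIPANTS, N_SPACES = 15, 6

random.seed(0)
# scores[model][criterion] -> one rating per participant per space design
scores = {m: {c: [random.randint(1, 5) for _ in range(N_PARTICIPANTS * N_SPACES)]
              for c in CRITERIA} for m in MODELS}

def criterion_mean(model: str, criterion: str) -> float:
    """Mean rating a model received on one criterion, pooled over raters and spaces."""
    ratings = scores[model][criterion]
    return sum(ratings) / len(ratings)

def overall_mean(model: str) -> float:
    """Unweighted mean of the seven criterion means."""
    return sum(criterion_mean(model, c) for c in CRITERIA) / len(CRITERIA)

# Rank models by overall mean, highest first
ranking = sorted(MODELS, key=overall_mean, reverse=True)
for m in ranking:
    print(f"{m}: {overall_mean(m):.2f}")
```

With real data, one would also report per-criterion variability (e.g. standard deviations), which is what distinguishes a consistently strong model from one that peaks on a few criteria.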

References

  • Akram, J. A. A. (2013). Toward a psychological design process for interior architecture. Journal of King Saud University - Architecture & Planning, 25, 21–38.
  • Almaz, A. F., El-Agouz, E. A. E. A., Abdelfatah, M. T., & Mohamed, I. R. (2024). The future role of artificial intelligence (AI) design's integration into architectural and interior design education is to improve efficiency, sustainability, and creativity. Civil Engineering and Architecture, 12(3), 1749–1772. https://doi.org/10.13189/cea.2024.120336
  • Audry, S. (2021). Art in the age of machine learning. Cambridge, MA: The MIT Press.
  • Bayrak, E. (2020). Yapay Zekâ ve Mekân Tasarımı Etkileşiminin Günümüz Tasarım Eğitiminde Değerlendirilmesi [Master’s thesis, Hacettepe Üniversitesi, Güzel Sanatlar Enstitüsü].
  • Beltagy, I., Peters, M. E., & Cohan, A. (2020). Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.
  • Betz, G., Richardson, K., & Voigt, C. (2021). Thinking aloud: Dynamic context generation improves zero-shot reasoning performance of GPT-2. arXiv preprint arXiv:2103.13033.
  • Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., Bernstein, M. S., et al. (2021). On the opportunities and risks of foundation models. arXiv:2108.07258.
  • Ching, F. D., & Binggeli, C. (2018). Interior design illustrated. John Wiley & Sons.
  • Chiu, M. L. (1995). Collaborative design in CAAD studios: Shared ideas, resources, and representations. In Proceedings of the International Conference on CAAD Futures (Vol. 95, pp. 749–759).
  • Dang, H., Mecke, L., Lehmann, F., Goller, S., & Buschek, D. (2022). How to prompt? Opportunities and challenges of zero- and few-shot learning for human-AI interaction in creative applications of generative models. arXiv preprint arXiv:2209.01390.
  • Darwiche, A. (2018). Human-level intelligence or animal-like abilities? Communications of the ACM, 61(10), 56–67. https://doi.org/10.1145/3271625
  • Denton, E. L., Chintala, S., Szlam, A., & Fergus, R. (2015). Deep generative image models using a Laplacian pyramid of adversarial networks. In Advances in Neural Information Processing Systems (pp. 1486–1494).
  • Deveci, M. (2022). Yapay Zekâ Uygulamalarının Sanat ve Tasarım Alanlarına Yansıması. Vankulu Sosyal Araştırmalar Dergisi, 9, 119–140.
  • Eckert, C., & Stacey, M. (2000). Sources of inspiration: A language of design. Design Studies, 21, 523–538. https://doi.org/10.1016/s0142-694x(00)00022-3
  • Goldschmidt, G. (1998). Creative architectural design: Reference versus precedence. Journal of Architectural and Planning Research, 15, 258–270.
  • Graef, S., & Georgievski, I. (2021). Software architecture for next-generation AI planning systems. arXiv (Cornell University). https://doi.org/10.48550/arXiv.2102.10985
  • Halpern, O. (2020). Architectural intelligence: How designers and architects created the digital landscape by Molly Wright Steenson. Technology and Culture, 61(4), 1265–1267. https://doi.org/10.1353/tech.2020.0151
  • Fernandez, P. (2022). Technology behind text to image generators. Library Hi Tech News, 39(10), 1–4. https://doi.org/10.1108/lhtn-10-2022-0116
  • Fiebrink, R. (2019). Machine learning education for artists, musicians, and other creative practitioners. ACM Transactions on Computing Education (TOCE), 19(4), 1–32. https://doi.org/10.1145/3294008
  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems (pp. 2672–2680).
  • Gwern. (2020). GPT-3 creative fiction. Retrieved from https://www.gwern.net/GPT-3
  • He, X., & Deng, L. (2017). Deep learning for image-to-text generation: A technical overview. IEEE Signal Processing Magazine, 34(6), 109–116. https://doi.org/10.1109/MSP.2017.2741510
  • Huynh-The, T., Pham, Q. V., Pham, X. Q., Nguyen, T. T., Han, Z., & Kim, D. S. (2023). Artificial intelligence for the metaverse: A survey. Engineering Applications of Artificial Intelligence, 117, 105581. https://doi.org/10.1016/j.engappai.2022.105581
  • Khandelwal, U., He, H., Qi, P., & Jurafsky, D. (2018). Sharp nearby, fuzzy far away: How neural language models use context. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol. 1, Long Papers).
  • Liu, V., & Chilton, L. B. (2022). Design guidelines for prompt engineering text-to-image generative models. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (CHI ’22), New Orleans, LA, USA. Association for Computing Machinery. https://doi.org/10.1145/3491102.3501825
  • Lu, Y., Bartolo, M., Moore, A., Riedel, S., & Stenetorp, P. (2021). Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. arXiv preprint arXiv:2104.08786.
  • Münch, T. (2022). System architecture design and platform development strategies: An introduction to electronic systems development in the age of AI, agile development, and organizational change. Springer.
  • O’Connor, J., & Andreas, J. (2021). What context features can transformer language models use? arXiv preprint arXiv:2106.08367.
  • Oppenlaender, J. (2022). The creativity of text-based generative art. arXiv:2206.02904.
  • Pavlichenko, N., & Ustalov, D. (2023, July). Best prompts for text-to-image models and how to find them. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (pp. 2067–2071). https://doi.org/10.1145/3539618.3592000
  • Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.
  • Reed, S., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., & Lee, H. (2016). Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396. https://doi.org/10.48550/arXiv.1605.05396
  • Reed, S. E., Akata, Z., Mohan, S., Tenka, S., Schiele, B., & Lee, H. (2016). Learning what and where to draw. In Advances in Neural Information Processing Systems (pp. 217–225).
  • Reviriego, P., & Merino-Gómez, E. (2022). Text to image generation: Leaving no language behind. arXiv preprint arXiv:2208.09333.
  • Ramesh, A., Dhariwal, P., Nichol, A., Chu, C., & Chen, M. (2022). Hierarchical text-conditional image generation with CLIP latents. arXiv:2204.06125.
  • Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022). High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 10684–10695). IEEE. https://doi.org/10.1109/cvpr52688.2022.01042
  • Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., & Ganguli, S. (2015). Deep unsupervised learning using nonequilibrium thermodynamics. In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015) (pp. 2256–2265). PMLR.
  • Smith, P. D. (2018). Hands-on artificial intelligence for beginners: An introduction to AI concepts, algorithms, and their implementation. Packt Publishing Ltd.
  • Smith, B. C. (2019). The promise of artificial intelligence: Reckoning and judgment. The MIT Press.
  • Sucu, İ., & Ataman, E. (2020). Dijital evrenin yeni dünyası olarak yapay zeka ve Her Filmi üzerine bir çalışma. Yeni Medya Elektronik Dergisi, 4(1), 40–52.
  • Vartiainen, H., & Tedre, M. (2023). Using artificial intelligence in craft education: Crafting with text-to-image generative models. Digital Creativity, 34(1), 1–21. https://doi.org/10.1080/14626268.2023.2174557
  • Vartiainen, H., Liukkonen, P., & Tedre, M. (2025). Emerging human-technology relationships in a co-design process with generative AI. Thinking Skills and Creativity, 56, 101742. https://doi.org/10.1016/j.tsc.2024.101742
  • Xu, T., Zhang, P., Huang, Q., Zhang, H., Gan, Z., Huang, X., & He, X. (2018). AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1316–1324). https://doi.org/10.1109/cvpr.2018.00143
  • Yıldırım, B., & Emirarslan, S. (2021). İç mimarlıkta yapay zekâ: İnsana öykünen makineler çağında yapay zekânın mesleki paydaşlığı. Yapay Zekâ ve Dijital Teknoloji, İksad Publishing House, Ankara, 101.
  • Zhang, H., Xu, T., Li, H., Zhang, S., Wang, X., Huang, X., & Metaxas, D. N. (2017). StackGAN: Text to photorealistic image synthesis with stacked generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 5907–5915). https://doi.org/10.1109/iccv.2017.629
There are 45 citations in total.

Details

Primary Language English
Subjects Interior Architecture
Journal Section Research Article
Authors

Yaren Şekerci 0000-0003-4509-6299

Müge Develier

Submission Date August 19, 2025
Acceptance Date December 13, 2025
Publication Date March 30, 2026
DOI https://doi.org/10.53710/jcode.1765357
IZ https://izlik.org/JA78GT75SU
Published in Issue Year 2026 Volume: 7 Issue: 1

Cite

APA Şekerci, Y., & Develier, M. (2026). How Well Can AI Contribute to Interior Architecture? A Comparative Analysis of Descriptive Accuracy. Journal of Computational Design, 7(1), 95–132. https://doi.org/10.53710/jcode.1765357


The papers published in JCoDe are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.