Research Article

CAN LARGE LANGUAGE MODELS ACT AS “CO-AUDITORS”?

Year 2026, Issue 34, 174–184, 16.02.2026
https://doi.org/10.58348/denetisim.1782835
https://izlik.org/JA63JC78EA

Abstract

This study explores the integration of large language models (LLMs) into audit workflows as "co-auditors," emphasizing the necessity of embedding them within frameworks that ensure evidence traceability, governance, and human accountability. Despite growing interest in AI-augmented auditing, prior work has not systematically bridged LLM technical capabilities with audit standards and regulatory compliance requirements. Through a narrative literature review synthesizing audit doctrine, AI governance frameworks, and natural language processing research, the study examines how such integration can be achieved.
Rather than substituting professional judgment, LLMs offer auditable support that enhances assurance processes. By incorporating hybrid retrieval, policy-constrained generation, and cryptographic provenance, the proposed architecture addresses both factual reliability and regulatory compliance. The findings underscore that effective LLM deployment requires strict alignment with standards. Ultimately, the research confirms that trustworthy AI in auditing depends on robust technical safeguards, governance structures, and sustained human oversight.
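The abstract names "cryptographic provenance" as one pillar of the proposed architecture. As a minimal illustration only (not the paper's implementation), a hash-linked audit log makes every LLM-assisted step tamper-evident: each entry commits to the SHA-256 of its predecessor, so altering any earlier record invalidates the chain. All field names here (`step`, `doc_id`, `model`) are hypothetical.

```python
import hashlib
import json

def append_record(chain, record):
    """Append an audit record to a hash-linked chain.

    Each entry stores the SHA-256 of the previous entry combined with
    its own payload, so any later change to an earlier record breaks
    every subsequent link.
    """
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)  # canonical serialization
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"prev": prev_hash, "record": record, "hash": entry_hash})
    return chain

def verify_chain(chain):
    """Recompute every link; return True only if no record was altered."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

# Example: log a retrieval step and a generation step (hypothetical fields).
chain = []
append_record(chain, {"step": "retrieve", "doc_id": "WP-2024-017"})
append_record(chain, {"step": "generate", "model": "llm-v1"})
assert verify_chain(chain)

# Tampering with the earlier record is detected.
chain[0]["record"]["doc_id"] = "WP-2024-099"
assert not verify_chain(chain)
```

Standards such as W3C PROV-DM or the C2PA specification cited below formalize richer provenance models; the sketch above only shows the core tamper-evidence property that makes an AI-assisted workpaper trail auditable.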

References

  • Asai, A., Wu, Z., Wang, Y., Sil, A., & Hajishirzi, H. (2023). Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2310.11511
  • Beurer-Kellner, L., Fischer, M., & Vechev, M. (2023). Prompting Is Programming: A Query Language for Large Language Models. Proceedings of the ACM on Programming Languages, 7 (PLDI), 1946–1969. https://doi.org/10.1145/3591300
  • Carlini, N., Jagielski, M., Choquette-Choo, C. A., Paleka, D., Pearce, W., Anderson, H., Terzis, A., Thomas, K., & Tramèr, F. (2023). Poisoning Web-Scale Training Datasets is Practical (Version 2). arXiv. https://doi.org/10.48550/ARXIV.2302.10149
  • Carlini, N., Tramèr, F., Wallace, E., Jagielski, M., Herbert-Voss, A., Lee, K., Roberts, A., Brown, T., Song, D., Erlingsson, Ú., Oprea, A., & Raffel, C. (2021). Extracting Training Data from Large Language Models. 30th USENIX Security Symposium (USENIX Security 21), 2633–2650. Retrieved December 10, 2025, from https://www.usenix.org/conference/usenixsecurity21/presentation/carlini-extracting
  • Chalkidis, I., Jana, A., Hartung, D., Bommarito, M., Androutsopoulos, I., Katz, D., & Aletras, N. (2022). LexGLUE: A Benchmark Dataset for Legal Language Understanding in English. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 4310–4330. https://doi.org/10.18653/v1/2022.acl-long.297
  • Coalition for Content Provenance and Authenticity [C2PA]. (2024). Content Credentials: C2PA Technical Specification. Retrieved December 15, 2025, from https://spec.c2pa.org/specifications/specifications/2.1/specs/C2PA_Specification.html
  • European Union. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules (Artificial Intelligence Act). Retrieved November 18, 2025, from http://data.europa.eu/eli/reg/2024/1689/oj
  • Formal, T., Lassance, C., Piwowarski, B., & Clinchant, S. (2021). SPLADE v2: Sparse Lexical and Expansion Model for Information Retrieval (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2109.10086
  • Gao, L., Dai, Z., Pasupat, P., Chen, A., Chaganty, A. T., Fan, Y., Zhao, V., Lao, N., Lee, H., Juan, D.-C., & Guu, K. (2023a). RARR: Researching and Revising What Language Models Say, Using Language Models. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 16477–16508. https://doi.org/10.18653/v1/2023.acl-long.910
  • Gao, T., Yen, H., Yu, J., & Chen, D. (2023b). Enabling Large Language Models to Generate Text with Citations. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 6465–6488. https://doi.org/10.18653/v1/2023.emnlp-main.398
  • Geng, S., Josifoski, M., Peyrard, M., & West, R. (2023). Grammar-Constrained Decoding for Structured NLP Tasks without Finetuning. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 10932–10952. https://doi.org/10.18653/v1/2023.emnlp-main.674
  • Greshake, K., Abdelnabi, S., Mishra, S., Endres, C., Holz, T., & Fritz, M. (2023). Not what you’ve signed up for: Compromising real-world LLM-integrated applications with indirect prompt injection. Proceedings of the 16th ACM Workshop on Artificial Intelligence and Security, 79–90. https://doi.org/10.1145/3605764.3623985
  • Guha, N., Nyarko, J., Ho, D. E., Ré, C., Chilton, A., Narayana, A., Chohlas-Wood, A., Peters, A., Waldon, B., Rockmore, D. N., Zambrano, D., Talisman, D., Hoque, E., Surani, F., Fagan, F., Sarfaty, G., Dickinson, G. M., Porat, H., Hegland, J., … Li, Z. (2023). LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2308.11462
  • Huang, L., Yu, W., Ma, W., Zhong, W., Feng, Z., Wang, H., Chen, Q., Peng, W., Feng, X., Qin, B., & Liu, T. (2023). A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. https://doi.org/10.48550/ARXIV.2311.05232
  • Institute of Internal Auditors [IIA]. (2025). AI in Internal Audit. Retrieved November 7, 2025, from https://www.theiia.org/en/resources/knowledge-centers/artificial-intelligence/
  • International Auditing and Assurance Standards Board [IAASB]. (2010). IFAC Releases 2010 Handbooks Containing All IAASB Pronouncements and the Code of Ethics for Professional Accountants. Retrieved November 1, 2025, from https://www.iaasb.org/news-events/2010-04/ifac-releases-2010-handbooks-containing-all-iaasb-pronouncements-and-code-ethics-professional
  • International Organization for Standardization [ISO]. (2023a). ISO/IEC 23894:2023. Retrieved November 1, 2025, from https://www.iso.org/standard/77304.html
  • International Organization for Standardization [ISO]. (2023b). ISO/IEC 42001:2023. Retrieved November 1, 2025, from https://www.iso.org/standard/42001
  • Izacard, G., & Grave, E. (2021). Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering. Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, 874–880. https://doi.org/10.18653/v1/2021.eacl-main.74
  • Jain, N., Schwarzschild, A., Wen, Y., Somepalli, G., Kirchenbauer, J., Chiang, P. Y., & Goldstein, T. (2023). Baseline defenses for adversarial attacks against aligned language models. arXiv Preprint arXiv:2309.00614. https://doi.org/10.48550/arXiv.2309.00614
  • Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y. J., Madotto, A., & Fung, P. (2023). Survey of Hallucination in Natural Language Generation. ACM Computing Surveys, 55(12), 1–38. https://doi.org/10.1145/3571730
  • Karpukhin, V., Oguz, B., Min, S., Lewis, P., Wu, L., Edunov, S., Chen, D., & Yih, W. (2020). Dense Passage Retrieval for Open-Domain Question Answering. Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 6769–6781. https://doi.org/10.18653/v1/2020.emnlp-main.550
  • Khattab, O., & Zaharia, M. (2020). ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT. Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 39–48. https://doi.org/10.1145/3397271.3401075
  • Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., Rocktäschel, T., Riedel, S., & Kiela, D. (2021). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks (No. arXiv:2005.11401). arXiv. https://doi.org/10.48550/arXiv.2005.11401
  • Li, Z., Qu, L., & Haffari, G. (2021). Total Recall: A Customized Continual Learning Method for Neural Semantic Parsers. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 3816–3831. https://doi.org/10.18653/v1/2021.emnlp-main.310
  • Min, S., Krishna, K., Lyu, X., Lewis, M., Yih, W., Koh, P. W., Iyyer, M., Zettlemoyer, L., & Hajishirzi, H. (2023). FActScore: Fine-grained Atomic Evaluation of Factual Precision in Long Form Text Generation (No. arXiv:2305.14251). arXiv. https://doi.org/10.48550/arXiv.2305.14251
  • National Institute of Standards and Technology [NIST]. (2024). Artificial intelligence risk management framework: Generative artificial intelligence profile (NIST AI 600-1). National Institute of Standards and Technology (U.S.). https://doi.org/10.6028/NIST.AI.600-1
  • Nogueira, R., & Cho, K. (2019). Passage Re-ranking with BERT (Version 5). arXiv. https://doi.org/10.48550/ARXIV.1901.04085
  • OWASP Foundation. (2025). OWASP Top 10 for Large Language Model Applications. Retrieved November 1, 2025, from https://owasp.org/www-project-top-10-for-large-language-model-applications/
  • Pipitone, N., & Alami, G. H. (2024). LegalBench-RAG: A Benchmark for Retrieval-Augmented Generation in the Legal Domain (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2408.10343
  • Public Company Accounting Oversight Board [PCAOB]. (2024). Generative-AI-Spotlight. Retrieved November 1, 2025, from https://pcaobus.org/documents/generative-ai-spotlight.pdf
  • Public Company Accounting Oversight Board [PCAOB]. (2025). AS 1215: Audit Documentation. Retrieved November 1, 2025, from https://pcaobus.org/oversight/standards/auditing-standards/details/AS1215
  • Rashkin, H., Nikolaev, V., Lamm, M., Aroyo, L., Collins, M., Das, D., Petrov, S., Tomar, G. S., Turc, I., & Reitter, D. (2023). Measuring Attribution in Natural Language Generation Models. Computational Linguistics, 49(4), 777–840. https://doi.org/10.1162/coli_a_00486
  • Robertson, S., & Zaragoza, H. (2009). The Probabilistic Relevance Framework: BM25 and Beyond. Foundations and Trends® in Information Retrieval, 3(4), 333–389. https://doi.org/10.1561/1500000019
  • Santhanam, K., Khattab, O., Saad-Falcon, J., Potts, C., & Zaharia, M. (2022). ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction. Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 3715–3734. https://doi.org/10.18653/v1/2022.naacl-main.272
  • The Committee of Sponsoring Organizations of the Treadway Commission [COSO]. (2013). Internal Control. Retrieved November 1, 2025, from https://www.coso.org/internal-control
  • The World Wide Web Consortium [W3C]. (2013). PROV-DM: The PROV Data Model. Retrieved November 1, 2025, from https://www.w3.org/TR/prov-dm/
  • Xu, J., Ma, M. D., Wang, F., Xiao, C., & Chen, M. (2023). Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models (Version 2). arXiv. https://doi.org/10.48550/ARXIV.2305.14710

There are 37 references in total.

Details

Primary Language: English
Subjects: Information Security Management
Section: Research Article
Authors

Hakan Emekci 0000-0002-4074-5600

Submission Date: September 12, 2025
Acceptance Date: January 1, 2026
Publication Date: February 16, 2026
DOI: https://doi.org/10.58348/denetisim.1782835
IZ: https://izlik.org/JA63JC78EA
Published Issue: Year 2026, Issue 34

How to Cite

APA Emekci, H. (2026). CAN LARGE LANGUAGE MODELS ACT AS “CO-AUDITORS”? Denetişim, 34, 174-184. https://doi.org/10.58348/denetisim.1782835

Through the works it publishes, the journal Denetişim builds an effective communication network among professionals, academics, and regulators in its field, contributing to meaningful progress on the path toward effective audit and governance systems worldwide.