1. Purpose and Scope
This policy defines how all forms of generative AI (GenAI) and AI-assisted tools — including large language models (ChatGPT, Claude, Gemini), and image, audio, and video models (Midjourney, DALL·E, Sora, Whisper, etc.) — may and may not be used by authors, reviewers, and editors in any submission to or publication in JCSS (research articles, review articles, book reviews, translations, etc.).
The policy is grounded in the Council of Higher Education of Turkey (YÖK) "Ethical Guide on the Use of Generative AI in Scientific Research and Publication Activities in Higher Education Institutions" of 18 April 2024, the TÜBİTAK ULAKBİM TR Dizin evaluation criteria, the higher-order standards of COPE, WAME, and ICMJE, and the AI policies of major international publishers (Elsevier, Taylor & Francis, Wiley, Sage, Emerald). The journal commits to remaining aligned with the current ethical expectations of the indices in which it is included or to which it is applying (TR Dizin, DOAJ, Scopus, ESCI).
2. Core Ethical Principles
1. Transparency: AI use shall be openly disclosed, naming the tool, version, purpose, and scope of effect.
2. Honesty: GenAI-supported content cannot replace the author's original intellectual contribution; original observation and interpretation are central to communication research.
3. Diligence: Authors must verify GenAI outputs against primary sources, the field literature, and — where possible — fieldwork data.
4. Accountability: Sole responsibility for the entire work — including statistical results, content categories, codings, and AI-generated text/visuals — rests with the human author(s).
5. Confidentiality: Participant data, interview transcripts, field notes, review reports, and ethics-committee documents shall not be uploaded to public AI services.
6. Fairness and bias awareness: Communication researchers are expected to be aware of and account for gender, ethnic, class, language, and cultural biases in GenAI outputs.
7. Responsibility regarding disinformation: Given communication research's particular exposure to disinformation, authors and editors shall preserve the credibility of scholarly communication and refrain from contributing to synthetic-media problems.
8. Contribution to ethical climate: Authors, reviewers, and editors actively contribute to responsible AI use.
3. Permitted Uses
The following uses are permitted provided that the disclosure obligation in Section 5 is satisfied:
3.1 Writing and Language
• Grammar, spelling, and punctuation checking in Turkish or English at the assistive level.
• Improvement of style, flow, and reduction of repetition.
• Academic-English support for non-native authors.
• Translation review of the author's own translation.
3.2 Research Design and Literature
• Preliminary literature search and key-concept suggestions, with all results verified from primary sources.
• Brainstorming on theoretical framing, hypotheses, and research questions.
• Classification of prior studies and pre-screening support for systematic reviews.
• Formal suggestions for article structure, titles, and abstracts.
3.3 Method and Data (Under Full Author Control)
• Linguistic clarity check of survey items.
• Preliminary pre-coding support for open-ended responses or interview transcripts; final coding and interpretation are always conducted by human researchers and submitted to peer review.
• Drafting a codebook for content or discourse analysis; final coding is done by human coders, and inter-coder reliability is reported.
• Basic descriptive-statistics verification; assistance with R/Python/SPSS syntax.
• Formal improvement of data visualisations, strictly based on the author's own data.
• Script support for API-based social-media data collection; compliance with platform terms and data-collection ethics is the author's responsibility.
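To illustrate the kind of author-controlled check intended by the descriptive-statistics bullet above, the following is a minimal sketch (not part of the policy itself); the survey scores are hypothetical, and in practice the figures would come from the author's own raw data:

```python
import statistics

# Hypothetical Likert-scale survey scores; in a real study these
# come from the author's own raw data, preserved per Section 7.
scores = [3, 4, 4, 5, 2, 4, 3, 5, 4, 3]

# Verify AI-suggested summary figures against an independent computation.
n = len(scores)
mean = statistics.mean(scores)
sd = statistics.stdev(scores)  # sample standard deviation (n - 1 denominator)

print(f"n={n}, M={mean:.2f}, SD={sd:.2f}")  # n=10, M=3.70, SD=0.95
```

Any AI-produced summary statistic that cannot be reproduced by such an independent computation should be treated as a potential hallucination and discarded.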
3.4 Technical
• Format checks on tables, schemas, and figures; conformity with citation styles (APA, Chicago, MLA, etc.).
• Keyword generation and indexing suggestions.
• Open-access and journal-template compliance verification.
4. Prohibited Uses
The following uses are strictly unacceptable and trigger the sanctions defined in Section 8:
4.1 Authorship and Responsibility
• Listing AI tools as authors, co-authors, or contributors.
• Citing AI tools as sources in place of scholarly references.
• Producing the entire work or a substantial part of it via AI and presenting it as original work (this constitutes plagiarism).
4.2 Data Integrity — Critical for Communication Research
• Synthetic generation or AI-driven rewriting of survey responses, interview transcripts, focus-group data, or field notes.
• Passing off non-existent participants (synthetic personas, "AI personas") or fabricated bot-user data as real data.
• Manipulation of social-media data by AI, or use of AI in producing benchmark/gold-standard examples without human validation.
• Reporting content-analysis or coding results that are generated by AI without human verification.
4.3 Visual, Audio, and Video Integrity
• AI-generated photographs, illustrations, screenshot composites, or video frames (including deepfakes) shall not be published.
• AI-assisted editing or manipulation of interview recordings or field audio/video. When AI transcription tools are used, the output must be human-verified.
• Replacing original advertising, campaign, or social-media visuals (as objects of analysis) with AI-recreated versions.
4.4 Participant Privacy
• Uploading non-anonymised participant data, identifiable audio/video recordings, or interview transcripts to public AI services.
• Using AI tools for data-processing purposes not specified in the ethics-committee approval.
• Sharing personal data in ways inconsistent with the Turkish Personal Data Protection Law (KVKK) or applicable international regulations.
4.5 Process Violations
• Concealing or misreporting AI use.
• Uploading review reports, editorial correspondence, or unpublished manuscripts to public AI services.
• Allowing the author's content to be incorporated into model training in violation of the AI tool's terms of service.
5. Disclosure Obligation
Except for purely assistive proofreading tools (e.g., Microsoft Editor, Turkish spell-checkers), authors must disclose all permitted AI uses described in Section 3 as follows:
"Declaration on the Use of Generative AI and AI-Assisted Technologies: During the preparation of this work, the author(s) used [tool name and version] for [purpose: language editing / codebook drafting / writing statistical syntax / etc.]. The author(s) reviewed the outputs and assume full responsibility for the entire published work."
• The declaration shall appear in the Methods section or in the Acknowledgements, above the reference list.
• It shall remain visible in the published version, alongside both Turkish and English abstracts.
• If multiple tools are used, each shall be listed separately.
• If AI was used at a methodological step (interview transcription, social-media data collection, content analysis), the use shall be described in detail in Methods (tool, version, validation procedure, inter-coder reliability, etc.).
6. AI Use in Peer Review and Editorial Processes
1. Reviewers may not upload manuscripts assigned to them — or excerpts from them — to public AI services; this is a clear breach of confidentiality and triggers the sanctions in Section 8.
2. The review report must be written based on the reviewer's own reading, evaluation, and expertise; reports generated or substantially rewritten by AI are not accepted.
3. Reviewers may use AI only for assistive language polishing of their own reports; reporting such use to the editor is encouraged.
4. Editors may use AI-assisted tools for plagiarism detection (iThenticate), language checking, format compliance, and keyword suggestions. Reviewer selection, acceptance/rejection decisions, and content evaluation, however, are made solely by humans.
5. In addition to the 20% iThenticate similarity threshold, the journal reserves the right to use AI-generated-text detection tools; because such tools can misclassify, the journal always grants the author an opportunity to respond.
7. Author Responsibility
• Authors must review all AI-generated or AI-improved content and remove hallucinations, fabricated references, and incorrect information.
• Primary data (surveys, interviews, focus groups, content-analysis samples, social-media data) shall be preserved in raw form and made available to the editor on request.
• When AI is used in coding or analysis, inter-coder reliability (e.g., Krippendorff's α, Cohen's κ) and AI–human agreement shall be reported clearly.
• Authors must have reviewed the AI tool's terms of service and confirmed that its personal-data, copyright, and model-training conditions do not impair the authors' rights to the manuscript.
• In multi-authored works, AI use shall be shared with all co-authors and disclosed with their joint consent.
• Authors must be ready to share AI prompts and outputs with the editor on request.
8. Violations and Sanctions
• Missing or false disclosure: revision and resubmission with appropriate declaration are requested.
• Authorship claim, synthetic data/participants, manipulated visuals/videos, or large-scale AI content generation: the work is removed from evaluation; the author's institution may be notified.
• If a violation is identified in a published article: correction, expression of concern, or retraction is applied in line with COPE guidelines.
• Confidentiality breach by reviewers: the reviewer is removed from the reviewer pool and, where applicable, reported to the relevant institution.
9. Effective Date and Revision
This policy enters into force on the date of its adoption by the JCSS Editorial Board. In view of the rapid evolution of AI technologies, it is reviewed at least annually and aligned with updates from TR Dizin, YÖK, COPE/WAME/ICMJE, and major publishers' policies. Authors and reviewers are subject to the version of the policy in force on the date of submission.
References
This policy was developed based on the principles compiled from the following primary sources.
• Elsevier — The use of generative AI and AI-assisted technologies in writing
• Elsevier — Generative AI policies for journals
• Elsevier — AI in the review process
• Taylor & Francis — AI Policy
• T&F Newsroom — Expanded Guidance on AI Application
• Taylor & Francis Editorial Policies (Author Services)
• Wiley — Generative AI Policy (PDF)
• Sage — Assistive and generative AI guidelines for authors
• Emerald — Stance on AI tools and authorship
• Emerald — Stance on AI in content creation and peer review
• Clarivate — Bringing Generative AI to the Web of Science
• Clarivate — Editorial Policies & Selection Process
• TR Dizin — Submission Criteria and Evaluation (Başvuru Koşulları ve Değerlendirme)
• YÖK — Ethical Guide on the Use of Generative AI (Üretken Yapay Zekâ Kullanımına Dair Etik Rehber) (PDF)
• COPE — Authorship and AI Tools
• COPE — Focus on Artificial Intelligence