ChatGPT and Generative AI Policy

Use of Large Language Models and Generative AI in Manuscript Preparation

Wasilatuna: Journal of Islamic Communication and Broadcasting recognizes that generative artificial intelligence technologies, including Large Language Models (LLMs) such as ChatGPT, may serve as supportive tools in the preparation of scholarly manuscripts. These technologies may assist authors in developing initial ideas, organizing arguments, summarizing materials, paraphrasing texts, and improving linguistic quality.

However, generative AI systems do not possess human judgment, ethical reasoning, or critical reflection. Therefore, all outputs produced with the assistance of AI must remain under the full control, evaluation, and responsibility of the authors. To safeguard academic integrity, authors using LLMs are required to observe the following principles:

Objectivity
AI-generated text may reflect biases embedded in its training data, including social, cultural, gender, or ideological bias. Certain perspectives, especially minority or marginalized viewpoints, may be underrepresented. Because AI output is detached from the context of its sources, such biases may not always be immediately visible.

Scientific accuracy
LLMs may generate inaccurate, misleading, or incomplete information, particularly when dealing with complex, interdisciplinary, or ambiguous topics. AI systems can produce statements that sound plausible but are not scientifically valid, including factual errors and fabricated references.

Contextual limitations
AI tools have limited ability to interpret nuanced human communication, such as irony, metaphor, humor, or culturally specific expressions. This limitation may result in misinterpretations or inappropriate formulations in scholarly writing.

Training data constraints
The reliability of generative AI depends on the availability and quality of training data. In some academic fields, languages, or regional contexts, data scarcity may reduce the usefulness and accuracy of AI-generated output.

Guidelines for Authors

Authors who use generative AI in the preparation of their manuscripts must:

Disclose the use of AI tools
Authors must clearly state that generative AI was used, identify the specific tool or model, and explain the purpose for which it was used. This information should be included in the Methods section or the Acknowledgments, as appropriate.

Provide a formal declaration of AI use
A disclosure statement must be added at the end of the main manuscript, before the References list, under a separate heading titled:
“Declaration of AI and AI-Assisted Technologies in the Writing Process.”

The statement should follow this format:

During the preparation of this manuscript, the author(s) used [NAME OF TOOL / SERVICE] for the purpose of [PURPOSE]. After using this tool, the author(s) reviewed, edited, and verified all content and take full responsibility for the integrity of the publication.
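For illustration only, a hypothetical completed statement following this format (the tool and purpose named here are placeholders, not an endorsement of any particular service):

During the preparation of this manuscript, the author(s) used ChatGPT (OpenAI) to improve the readability and language of the manuscript. After using this tool, the author(s) reviewed, edited, and verified all content and take full responsibility for the integrity of the publication.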

This declaration is not required when only basic tools such as grammar checkers, spell checkers, or reference managers are used.

Verify all content and references
Authors are fully responsible for checking the accuracy, validity, and scholarly appropriateness of all text, data, and citations generated or assisted by AI.

Validate all sources
Any references suggested or generated by AI must be verified against original sources and formatted according to accepted academic standards.

Avoid plagiarism
Because generative AI may reproduce existing texts, or produce passages that closely resemble them, authors must conduct originality checks and ensure proper citation of all sources.

Acknowledge the limitations of AI
Authors should remain aware of potential bias, error, or knowledge gaps in AI-generated material.

Respect authorship standards
AI tools, including ChatGPT, must not be listed as authors or co-authors of any manuscript.

Failure to disclose the use of generative AI may result in corrective or disciplinary action in accordance with the journal’s ethical policies.

Responsibilities of Editors and Reviewers

The integrity of editorial decision-making and peer review is fundamental to scholarly publishing. Therefore, Wasilatuna establishes clear ethical boundaries regarding the use of generative AI.

For reviewers
All manuscripts under review are confidential. Reviewers must not upload, share, or process manuscripts or review reports using generative AI tools. Using AI to draft, structure, or otherwise assist in preparing peer-review reports is likewise prohibited. Peer review requires expert human judgment and cannot be delegated to machines.

For editors
Manuscripts, editorial correspondence, and decision letters must remain confidential and must not be entered into generative AI systems, including for language editing. Editorial decisions must be based solely on the professional evaluation of the editor.

AI tools may be used only for limited technical purposes, such as plagiarism detection or formatting checks, provided that such tools do not directly process the scholarly content of a manuscript and that they operate within secure, approved systems compliant with data protection and ethical standards.

If there is reasonable suspicion of undisclosed or inappropriate use of generative AI by authors or reviewers, the editor must initiate an investigation in accordance with the journal’s governance procedures.

Although emerging technologies can support academic work, the evaluation of scientific quality and editorial judgment remains a fundamentally human responsibility.

Further Information

Wasilatuna: Journal of Islamic Communication and Broadcasting refers to the recommendations of the World Association of Medical Editors (WAME) and the Committee on Publication Ethics (COPE) regarding authorship and the use of artificial intelligence tools in scholarly publishing.

This policy may be updated as technologies and publishing practices evolve.