Generative AI policies for journals
The rapid evolution of generative AI and AI-assisted technologies is reshaping the way scholarly content is created, reviewed, and communicated. In response to these developments, CBSJ is introducing a set of AI policies designed to promote transparency, integrity, and responsible use of these tools across all stages of publication. Our guidelines aim to assist authors, reviewers, editors, and readers in navigating the opportunities and challenges that AI technologies present. We recognize that this is a dynamic field, and we are committed to regularly reviewing and updating our policies to ensure they remain aligned with best practices and ethical standards in academic publishing.
For authors
At CBSJ, we recognize that generative AI and AI-assisted tools are becoming part of the scholarly writing process. To ensure transparency and uphold the highest standards of research integrity, we set forth the following guidelines.
Scope of Policy
This policy addresses the use of AI technologies during the writing and preparation of manuscripts. It does not restrict the use of AI for research data analysis or insight generation as part of the research process itself.
Use in Scientific Writing
Authors may utilize generative AI and AI-assisted technologies solely to enhance the clarity, language, and readability of their manuscripts. However, any AI use must occur under human oversight, with authors reviewing, editing, and validating all outputs. AI-generated content, despite appearing authoritative, can often be inaccurate, incomplete, or biased; thus, the ultimate responsibility for the accuracy, integrity, and originality of the manuscript rests with the authors.
Disclosure Requirement
Authors who utilize AI or AI-assisted technologies during the writing of their manuscripts must clearly and transparently disclose this use within a dedicated section of their submission. A corresponding disclosure statement will also be included in the final published article. This practice promotes trust and accountability among authors, readers, reviewers, editors, and the wider research community, while ensuring adherence to the terms of use associated with AI tools. Reviewers will be able to consult this disclosure — typically located before the reference list — as part of their manuscript assessment.
Authorship Principles
AI tools may not be credited as authors or co-authors, nor cited as independent sources. Authorship entails legal and ethical responsibilities — including approving the final manuscript, responding to concerns about the work’s integrity, and ensuring the work’s originality — responsibilities that only humans can bear. Authors must also familiarize themselves with our Ethics in Publishing guidelines before submission.
Use of Generative AI and AI-Assisted Technologies in Figures, Images, and Artwork
Policy on Figures and Images
The use of generative AI or AI-assisted technologies to create, modify, or manipulate figures and images within submitted manuscripts is strictly prohibited. This includes actions such as enhancing, obscuring, introducing, or removing specific features. Basic adjustments to brightness, contrast, or color balance are permissible, provided that such changes do not distort or conceal any original information. Submitted figures may be subjected to forensic analysis to detect inappropriate alterations.
Permitted Exceptions
The only exception to this restriction is when AI or AI-assisted technologies are an integral part of the research methodology — for example, in fields such as AI-assisted biomedical imaging. In such cases, authors must provide a detailed description of the tools used, including the model name, version number, developer or manufacturer, and a clear, reproducible explanation of the processes involved. Authors must also comply with the usage policies of the respective AI tools and provide access to original, unaltered images when requested by the editorial team.
Policy on Artwork and Graphical Abstracts
Authors may use generative AI tools to create graphical abstracts or illustrative artwork associated with their manuscripts. In cases where AI-generated material is used for cover art, authors must clearly disclose this in the manuscript and bring it to the attention of the journal editor and publisher. Authors are responsible for ensuring they hold all necessary rights to the material and must provide appropriate attribution for any AI-generated content.
For reviewers
Peer review is a cornerstone of scholarly publishing, grounded in principles of confidentiality, integrity, and critical human judgment. CBSJ upholds the highest standards in peer review and sets forth the following policy regarding the use of AI technologies in this process.
Confidentiality Requirements
Reviewers must treat all manuscripts and associated materials as strictly confidential. Under no circumstances should a reviewer upload a manuscript, or any portion of it, into a generative AI or AI-assisted tool. Doing so may compromise the confidentiality of the authors’ work, violate proprietary rights, and, if the manuscript includes personal or sensitive data, breach data protection regulations.
This confidentiality requirement extends to the peer review report itself. Reviewers must not submit their reports to AI tools, even if only to improve grammar, style, or readability, as review reports may contain confidential information regarding the manuscript or the identity of the authors.
Role of Human Judgment
The evaluation of scientific work demands original analysis, critical thinking, and ethical responsibility — capabilities that cannot be outsourced to AI technologies. Reviewers are expected to perform their duties independently and personally, without reliance on generative AI or AI-assisted systems to draft, interpret, or assess manuscripts. The use of AI in the review process risks introducing inaccuracies, incomplete assessments, or biased conclusions. Reviewers bear full responsibility for the integrity, accuracy, and professionalism of their reports.
For editors
Editors are entrusted with safeguarding the integrity and confidentiality of the editorial process. At CBSJ, the following guidelines govern the use of AI technologies during manuscript handling.
Confidentiality Requirements
All submitted manuscripts and related communications must be treated as strictly confidential. Editors must not upload any part of a manuscript, correspondence, or decision letter into generative AI or AI-assisted tools. Doing so could compromise the confidentiality of authors' intellectual property, breach proprietary rights, and, where applicable, violate data privacy laws.
Editorial Decision-Making and Human Oversight
The editorial evaluation and decision-making process demand careful human judgment, critical thinking, and responsibility — qualities that cannot be delegated to AI technologies. Editors must not rely on generative AI or AI-assisted systems to evaluate manuscripts, formulate editorial decisions, or draft editorial correspondence. AI-generated content carries risks of inaccuracy, bias, and incompleteness, and the editor remains fully accountable for the fairness, quality, and integrity of the editorial outcome.
Disclosure of AI Use by Authors
Authors are permitted to use AI technologies during the writing of their manuscripts, provided that this use is solely for improving language and readability and is clearly disclosed within a designated section of the manuscript. This disclosure is typically located before the reference list, and editors should take it into account as part of the editorial assessment.
Addressing Potential Violations
If an editor has reason to believe that an author or reviewer has failed to comply with the journal’s AI policies, they should promptly report the concern to the publisher for further review and appropriate action.
Use of Responsible AI Technologies by the Journal
CBSJ may employ in-house or licensed AI-assisted tools to support editorial operations, including completeness checks, plagiarism detection, and reviewer identification. All technologies used are vetted to ensure they uphold strict confidentiality, data privacy, and ethical standards, and they are periodically evaluated for bias and compliance with applicable regulations.
We are committed to responsibly integrating technological innovations that assist editors and reviewers, while maintaining the human oversight, transparency, and ethical rigor that underpin scholarly publishing.
Note: Generative AI refers to a class of artificial intelligence technologies capable of creating original content across various formats, including text, images, audio, and synthetic data. Common examples include ChatGPT, Claude, Gemini, Perplexity, and DALL-E.