Policy on the Ethical Use of Artificial Intelligence (AI) Tools
Framework Statement
The journal Kapitari adheres to the initiatives, statements, and guidelines promoted by the international and regional scientific publishing community to guide the responsible, transparent, and ethically supervised use of Artificial Intelligence (AI) tools in scientific research and communication.
In particular, Kapitari adopts the following as references:
- Committee on Publication Ethics (COPE) – AI and Authorship / AI tools in publishing
- World Association of Medical Editors (WAME) – Chatbots, Generative AI and Scientific Manuscripts
- International Committee of Medical Journal Editors (ICMJE) – AI-assisted technologies
- Heredia Declaration (GEDIA) – Principles on the use of AI in scientific publishing
Kapitari recognizes the potential of AI tools to support research, analysis, writing, editing, and visualization processes, provided that they do not replace human responsibility or compromise academic integrity, information traceability, the confidentiality of the editorial process, or the quality of published knowledge.
1. Purpose
This policy establishes clear, proportionate, and operational rules for the use of AI tools by authors, reviewers, editors, and editorial staff, in order to:
- safeguard scientific integrity;
- ensure transparency in academic production;
- protect editorial confidentiality;
- ensure the reproducibility and verifiability of results.
2. Operational Definitions
2.1 Generative Artificial Intelligence (GenAI)
Systems capable of generating new content (text, code, images, audio, visualizations) from instructions or inputs, typically using language models or multimodal systems (e.g., chatbots, image or diagram generators).
This type of AI can introduce inaccuracies, biases, non-existent references, and attribution errors, as well as risks associated with confidentiality and intellectual property.
2.2 Non-Generative Artificial Intelligence (Non-GenAI)
Tools that do not produce substantially new content, but rather support tasks such as spell and grammar checking, similarity detection, classification, information extraction, tagging, analytics, or metadata management.
2.3 Hybrid Tools
Many solutions combine generative and non-generative functions. At Kapitari, classification is based on actual use:
- If the tool is used for writing, paraphrasing, summarizing, generating text, code, or images → it is considered generative AI and requires disclosure.
- If it is used only for basic linguistic correction without substantive generation → it does not require disclosure.
3. Guiding Principles
- Human Responsibility: The intellectual, ethical, and legal responsibility for the manuscript rests solely with the authors.
- Proportional Transparency: Disclosure of the relevant use of AI is required without creating unnecessary bureaucratic burdens.
- Confidentiality and Intellectual Property: Unpublished manuscripts and sensitive data must not be exposed on third-party platforms without adequate safeguards.
- Verifiability and Traceability: All AI-assisted content must be reviewable, verifiable, and, where applicable, reproducible.
4. Authorship Rule (Non-negotiable)
Artificial intelligence tools cannot be listed as authors or co-authors, nor can they assume authorship responsibilities.
Authorship is an exclusively human responsibility.
5. Permitted and Prohibited Uses (by Role)
5.1 Authors
Permitted (with human supervision):
- Style improvement, clarity enhancement, translation, or advanced proofreading, provided the author reviews and validates the final content.
- Support in manuscript organization, idea structuring, or programming assistance, with human verification.
- Use of AI as part of the research method (e.g., analytics, machine learning), provided it is described in a reproducible manner in the Methods section.
Not permitted / grounds for editorial investigation:
- Replacing the author's intellectual contribution (substantive writing without human oversight).
- Generating unverifiable conclusions, results, or references.
- Presenting fabricated content or synthetic data without a robust and declared methodology.
- Deliberately omitting the disclosure of generative AI use.
5.2 Reviewers
- It is prohibited to upload unpublished manuscripts or review process data to third-party generative AI tools.
- The use of AI is permitted only for linguistic correction of the review.
- If AI is used beyond basic proofreading, the reviewer must explicitly indicate this.
5.3 Editors and Editorial Team
The journal may use AI tools (preferably non-generative) for editorial tasks such as:
- metadata support,
- similarity detection,
- linguistic quality control,
- administrative support.
The use of AI does not replace human editorial judgment and must be recorded internally when applicable.
6. Figures, Graphs, Images, and Data Visualization
Kapitari adopts an approach based on integrity and traceability, consistent with its disciplinary profile.
6.1 Permitted Use (with requirements)
The use of AI—including generative AI—to create figures, diagrams, charts, or visualizations is permitted, provided that:
- the figure is derived directly from data or results provided by the authors;
- there is a verifiable correspondence between the data and the representation;
- the author declares the use of AI and can provide, upon request, the minimum verifiable inputs (table, dataset, code).
6.2 Prohibited Use
- Altering evidence or introducing elements that modify the interpretation of the data.
- Generating images that appear to be empirical data without explicitly indicating it.
- Replacing real data with synthetic data without a declared methodology.
7. Statement on the Use of AI
When is it mandatory?
When generative AI or advanced AI is used beyond basic proofreading, especially if it is involved in:
- substantive writing,
- analysis,
- advanced translation,
- production of data-driven figures.
Where should it be declared?
- In the Methods section, if it affects research, analysis, or visualization.
- In a note following the acknowledgments, if it is limited to writing or translation support.
Minimum content:
- Tool used and type of AI.
- Purpose of use.
- Scope of intervention.
- Human oversight measures.
8. Management of Undisclosed or Improper Use
Detection is based on a combination of technological tools and expert editorial review.
If suspicions arise, the editor may request clarification and supporting evidence.
Editorial decisions will be made proportionally and may include corrections, rejection of the manuscript, or, in post-publication cases, the application of the Policy on Corrections, Retractions, and Expressions of Concern.
9. Commitment to Continuous Improvement
This policy will be reviewed and updated periodically to maintain its consistency with:
- the evolution of international evidence and consensus;
- editorial and indexing standards;
- the journal's technical and operational capabilities.