AI Use Policy
General Provisions
The Policy on the Use of Artificial Intelligence and AI-Supported Technologies defines the rules and standards for the use of artificial intelligence (AI) tools by authors, reviewers, and editorial board members of the journal “Agora. Social Sciences Journal” during the preparation, submission, review, and editing of publications.
The purpose of this policy is to ensure transparency, responsible scientific practice, and compliance with international publication ethics standards, in accordance with the recommendations of COPE, WAME, and the JAMA Network.
Definition
AI tools refer to any automated or semi-automated systems, including generative language models, automatic text analysis tools, image processing algorithms, programming or data analysis tools, language editing services, and other similar resources.
Main Principles
Human Responsibility. AI cannot be considered the author of scientific material. Only humans can bear the ethical, legal, and scientific responsibility for the content of an article.
Transparency. Any use of AI must be clearly disclosed in accordance with the requirements of this policy.
Reliability. The authors bear full responsibility for the correctness, validity, and accuracy of all results, information, and interpretations, including those generated or modified by AI tools.
Ethics and Compliance with Standards. The use of AI must not lead to violations of scientific integrity, including data fabrication, plagiarism, fictitious citations, and misleading visualizations.
Requirements for Authors
- Prohibition of Listing AI as an Author
AI tools cannot be included in the list of authors because they do not meet the criteria for authorship.
- Mandatory Disclosure of AI Usage
When submitting a publication, the authors must disclose:
- whether AI was used (yes/no);
- the names of the tools (e.g., ChatGPT, Gemini, Copilot), the version (if available), and the date of use;
- the purpose and scope of AI usage (language editing, text generation, data analysis, image creation, etc.);
- parts of the material to which AI was applied.
If AI was used in conducting research or data analysis, this must be described in the “Methods and Data” section of the manuscript. Other uses (such as text editing) may be indicated in the “Acknowledgments” section.
- Responsibility for Content
Authors guarantee:
- that all texts, images, and data created or modified by AI have been verified;
- that the work is free of plagiarism, fabrication, fictitious references, and manipulation;
- that the results comply with scientific and ethical standards.
The editorial board may request additional materials (source data and prompts) to verify correctness.
Use of AI in Creating Images, Graphics, and Illustrations
If images, diagrams, maps, or graphs are created or modified by AI, the caption must indicate the tool used and the nature of its application.
Authors must ensure accuracy and must not mislead readers; in particular, they must avoid AI-generated visualizations that create a false impression of the underlying data.
If AI is used to create visual materials, authors must ensure that they hold the appropriate rights to use those materials.
Use of AI by Reviewers
- Confidentiality
Reviewers are not permitted to upload manuscripts, or any excerpts from them, to third-party AI services that do not guarantee the confidentiality of the data.
- Disclosure of AI Use by the Reviewer
If a reviewer has used AI tools to assist in forming conclusions or for technical editing, they must inform the editorial office.
Use of AI by the Editorial Team
- The editorial team may use AI for the technical processing of manuscripts (plagiarism checks, identification of structural issues, language editing); however, all editorial decisions are made by humans.
- Automated conclusions cannot be the sole basis for ethical decisions regarding a manuscript; in such cases, the editorial team conducts an expert review.
Use of AI Detection Tools
The editorial team may use specialized tools to detect texts and images created by AI. However, automated indicators alone are not considered sufficient evidence of a violation. In case of doubt, the editorial team may contact the authors for clarification and additional information.
Policy Violations
Violations of this policy include:
- covert use of AI;
- underreporting or falsifying information about the use of AI;
- use of AI resulting in fabricated data, references, or visualizations;
- breach of manuscript confidentiality by reviewers.
Depending on the nature of the violation, the editorial board may take the following measures: request an explanation, require corrections, refuse publication, inform the author’s institution, or retract a published article in accordance with the retraction procedure.
Recommended Wording for Authors
In the cover letter:
“The author(s) disclose that AI tool(s) [name, version] were used for [task] during the preparation of this publication. All results created or modified by AI were thoroughly reviewed and comply with the requirements of scientific integrity.”
In the article (in the section “Acknowledgments” or “Methods and Data”):
"An AI tool [name, version] was used for [task] during the preparation of this manuscript. The authors confirm that all data and conclusions are accurate and have been verified."
Policy Review
Owing to the rapid development of AI, this Policy will be reviewed at least once a year or after the publication of new guidelines from COPE or other leading organizations in the field of scientific publishing.