Artificial Intelligence Policy

The Editorial Board recognizes that artificial intelligence (AI) tools may serve as auxiliary instruments in scholarly work, provided that the principles of transparency, accountability, and academic integrity are strictly observed.

Use of AI by Authors

Permissible Use of AI. Authors may use AI tools for:

  • language editing and stylistic improvement of the text;
  • grammar and spelling checks;
  • technical data processing without altering the substantive content;
  • auxiliary analytical support, provided that the author maintains full control over and responsibility for the interpretation of results.

Prohibited Use of AI. The following practices are not permitted:

  • submission of texts wholly or predominantly generated by AI as original scholarly work;
  • use of AI for fabrication of data, results, or references;
  • use of AI for undisclosed appropriation of others’ scholarly texts;
  • listing AI tools as authors or co-authors of a scientific article.

AI tools cannot be considered authors; they may only serve as instruments under human supervision. Authors bear full responsibility for the content of their manuscripts, regardless of any AI use.

The Editorial Board reserves the right to:

  • screen manuscripts using specialized tools and establish threshold levels for AI-generated content;
  • request additional clarification regarding the use of AI;
  • reject manuscripts in cases involving fabricated references or DOI links, or undisclosed use of AI in manuscript preparation;
  • conduct video interviews with authors to verify their academic competence.

At the time of submission, authors must declare:

  • whether AI was used (yes/no);
  • the name(s) of the tool(s) (e.g., ChatGPT, Gemini, Copilot), including version(s), if available;
  • the purpose and extent of AI use (e.g., language editing, text generation, data analysis, image creation);
  • the sections of the manuscript where AI was applied.

The AI usage declaration should be provided in free form and included at the end of the main text of the article (see Manuscript Preparation Guidelines).

Use of AI by Reviewers

Reviewers are not permitted to upload manuscripts, in whole or in part, to external AI services that do not guarantee confidentiality.

If reviewers use AI tools to assist in drafting their reports or for technical editing, they must inform the editorial office.

Use of AI by the Editorial Office

The editorial office may use AI tools for technical processing of submitted manuscripts, including plagiarism detection, identification of structural issues, and language editing. However, all editorial decisions are made exclusively by members of the Editorial Board.

The editorial office may also employ specialized tools to detect text or visual materials generated by AI. However, the results of automated analysis alone are not considered sufficient evidence of a violation of publication ethics. Where reasonable concerns arise, the editorial office may request explanations or additional information from the author regarding the preparation of the manuscript.

Violations of the AI Usage Policy

Violations of this policy include:

  • failure to disclose the use of AI tools in manuscript preparation;
  • intentional misrepresentation or distortion of information regarding AI use;
  • use of AI resulting in fabricated or unreliable data, references, or visual materials;
  • breach of confidentiality by reviewers with respect to submitted manuscripts.

Depending on the nature and severity of the violation, the editorial office may take the following actions:

  • request clarification from the author;
  • require corrections or revisions;
  • reject the manuscript;
  • notify the author’s affiliated institution;
  • initiate retraction of a published article in accordance with the established retraction procedure.