Policies and conditions for use of Artificial Intelligence (AI)

Aurora Journal aims to provide greater transparency and guidance for editors, reviewers and authors regarding the use of generative AI and AI-assisted technologies.

Based on initial studies on authorship, chatbots, and generative artificial intelligence in academic publishing carried out by the Committee on Publication Ethics (COPE), Cambridge University Press, and Elsevier, Revista Aurora defines the following submission and evaluation standards:

For authors: The use of generative AI and AI-assisted technologies is strictly prohibited for writing the article, generating results, and creating or altering images or graphics. These technologies may be used only to improve the language and readability of the article. Their application must take place under human supervision and control, and authors must carefully review and edit the result. If used, authors must disclose the use of AI and AI-assisted technologies, and a corresponding statement will appear in the published work. AI tools cannot be listed as authors of an article. Authors are fully responsible for the content of their manuscript, including the parts improved by an AI tool, and are therefore responsible for any violation of publication ethics (SAMPAIO, 2023).

1. Chatbots cannot be listed as authors or co-authors of any article submitted to Revista Aurora. No AI tool may be used to generate the text that makes up the structure of an article submitted to the journal.

Chatbots do not meet the authorship criteria, as they are not capable of assuming moral or legal responsibility for the originality and integrity of the work, approving the final version of the text to be published, understanding conflicts of interest, or managing copyright contracts.

2. If an AI tool is used to support or systematize elements that aided the production of the article, the authors must be transparent about its purpose, specify which chatbot was used, and provide further information about how it was applied in the production of the article.

Authors who use AI tools to produce images or graphic elements of the article, to collect and analyze data, or for other functions that do not constitute writing the text of the article must disclose, in the Abstract, Introduction, and Materials and Methods (or similar section), the type of chatbot used, the function performed by the AI tool, and how this process occurred.

To enable scientific scrutiny, including replication and the identification of falsification, information about the use of the AI tool must be disclosed in full, including the specifics of the tool, the query results, and the date and time of the query.

3. Authors are fully responsible for the content of their manuscript, including the research, organization, and analysis of the data and information that make up the article. They are therefore responsible for any violation of publication ethics (beyond those inherent to the specific framings that characterize humanities research), including plagiarism, fabrication, or falsification, as well as texts and images generated by AI.

If the authors have made use of AI, a section titled ‘Declaration of AI and AI-assisted technologies in the writing process’ must be included before the references.

In the declaration, authors must specify the tool used, the reason for its use, and how it was applied.


Suggested Format for the Declaration:

“During the preparation of this work, the author(s) used [name of tool/model or service], version [number and/or date], to [reason for use]. After using this tool/model/service, the author(s) reviewed and edited the content in accordance with the scientific method and assume(s) full responsibility for the content of the publication.”

For editors and reviewers: Editors and reviewers must not upload a submitted manuscript, or any part of it, into a generative AI tool, as this may violate the confidentiality and proprietary rights of the authors and, when the article contains personally identifiable information, may violate data privacy rights. Generative AI tools should also not be used to assist in the evaluation or decision-making process for a manuscript: the critical thinking and original assessment required for this work are beyond the scope of this technology, and there is a risk that it will generate incorrect, incomplete, or biased conclusions about the manuscript. The editor is responsible for the editorial process, the final decision, and its communication to the authors (SAMPAIO, 2023).

1. Editors and reviewers of Aurora Journal must not, under any circumstances, use AI tools to carry out evaluations and analyses of articles submitted to the journal.

If any AI tool is used during the submission analysis and evaluation process, editors and reviewers must disclose its use and be aware that the evaluation/analysis provided will be disregarded in the editorial process.


REFERENCES

CAMBRIDGE. Authorship and contributorship. 2023. Available at: https://www.cambridge.org/core/services/authors/publishing-ethics/research-publishing-ethics-guidelines-for-journals/authorship-and-contributorship. Accessed: 12 Jun. 2024.

COPE. Authorship and AI tools. 2023. Available at: https://publicationethics.org/cope-position-statements/ai-author. Accessed: 12 Jun. 2024.

ELSEVIER. The use of generative AI and AI-assisted technologies in writing for Elsevier. 2023. Available at: https://www.elsevier.com/about/policies-and-standards/the-use-of-generative-ai-and-ai-assisted-technologies-in-writing-for-elsevier. Accessed: 12 Jun. 2024.

SAMPAIO, Rafael Cardoso. Recomendações iniciais para editores de periódicos científicos sobre o uso de Inteligência Artificial generativa. Blog DADOS, 07 Feb. 2023. Available at: http://dados.iesp.uerj.br/recomendacoes-iniciais-para-editores-de-periodicos-cientificos-sobre-o-uso-de-inteligencia-artificial-generativa/. Accessed: 12 Jun. 2024.