Evaluating a scientific paper involves responsibilities that can only be attributed to humans. The critical thinking and original assessment that peer review requires are beyond the capabilities of generative AI and AI-assisted tools, and these technologies carry the risk of producing inaccurate, incomplete, or biased evaluations. Together with the fundamental principle that submitted manuscripts must be treated as confidential, these considerations form the basis of our Generative AI guidelines for reviewers and editors:

● Reviewers and editors should not upload a manuscript, or any part of it, into a Generative AI platform, as there is no guarantee of where the material is sent, stored, or viewed, or how it may be used in the future. Doing so may violate the authors’ confidentiality, proprietary, and/or data privacy rights, and it may also breach the platform’s terms of use.

● The confidentiality requirement extends to the peer review report and to any other communication about the manuscript, such as notification or decision emails, since these may also contain confidential information about the manuscript and/or its authors. For this reason, such material should not be uploaded into a Generative AI platform, even if the sole purpose is to improve language and clarity.

● Reviewers and editors must not use Generative AI to assist in reviewing or evaluating a manuscript or in making decisions about it.

JASET is committed to the ongoing development and adoption of in-house or licensed technologies that support editors and reviewers while upholding confidentiality, ownership, and data privacy standards.