Policy on the Use of Artificial Intelligence (AI) and AI-Assisted Technologies
The Editorial Board of the journal News of Pharmacy recognizes the potential of generative artificial intelligence (GenAI) and technologies based on large language models (LLMs), such as ChatGPT, to support scientific research. At the same time, the Journal adheres to the recommendations of the Committee on Publication Ethics (COPE) and international associations of medical editors (WAME, ICMJE) regarding transparency and accountability in the use of such tools.
Use of AI by Authors
AI tools (including LLMs) may not be listed as authors or co-authors of a manuscript. Authorship implies responsibility for the scientific integrity of the work, which AI cannot assume. Any use of AI must be properly disclosed by the human authors.
Authors bear full responsibility for the accuracy, reliability, and originality of the manuscript content, including any data, figures, or text generated with the assistance of AI.
Disclosure Requirements
- If AI tools were used for text generation, data collection, analysis, or interpretation of results, this must be clearly stated in the Materials and Methods section and/or in a separate section entitled "Use of Artificial Intelligence Technologies".
- The name of the tool, its version, and the specific purpose for which it was used must be indicated.
Use of AI by Reviewers
Reviewers are strictly prohibited from uploading manuscripts, in whole or in part, to AI systems. Doing so violates confidentiality and copyright policies, as many AI systems use input data for model training. Reviewers bear personal responsibility for the content of their reviews and for the critical evaluation presented in them.
Use of AI by Editors
Members of the Editorial Board do not use AI technologies to make final decisions regarding the acceptance or rejection of manuscripts. The assessment of scientific merit remains the exclusive prerogative of human experts.
The Editorial Board reserves the right to use specialized software to detect AI-generated content when there is reason to suspect a violation of publication ethics, or when factual inaccuracies characteristic of LLM-generated text ("hallucinations") are identified.
