Academics have created new ethical guidelines for using AI in academic writing.
Researchers from the universities of Oxford, Cambridge, Copenhagen, and Singapore have devised these guidelines for using Large Language Models (LLMs) such as ChatGPT.
With increased use of LLMs in academic writing, concerns have been raised about plagiarism, authorship, and academic integrity.
To address these issues, the guidelines, published in Nature Machine Intelligence, set out three essential criteria: human vetting of the output to ensure accuracy and integrity; a substantial human contribution to the work; and appropriate acknowledgement and transparency about how LLMs were used.
The guidelines also include a template for LLM Use Acknowledgement, which researchers can use when submitting manuscripts.
This aims to streamline adherence to ethical standards in AI-assisted academic writing and provide greater transparency about LLM use.
Professor Julian Savulescu of The Uehiro Oxford Institute, a co-author, said: "Large Language Models are the Pandora's Box for academic research.
"They could eliminate academic independence, creativity, originality, and thought itself.
"But they could also facilitate unimaginable co-creation and productivity.
"These guidelines are the first steps to using LLMs responsibly and ethically in academic writing and research."
This publication marks a crucial step in managing the relationship between human academic work and machine intelligence.
By empowering researchers to use AI technology ethically, the guidelines aim to boost productivity and innovation while preserving academic integrity.
Co-author, Dr Brian Earp, also from The Uehiro Oxford Institute, said: "It's appropriate and necessary to be extremely cautious when faced with new technological possibilities, including the ability for human writers to co-create academic material using generative AI.
"This is especially true when things are scaling up and moving quickly.
"But ethical guidelines are not only about reducing risk; they are also about maximising potential benefits."
Professor Timo Minssen from the University of Copenhagen added: "Guidance is essential in shaping the ethical use of AI in academic research, and in particular concerning the co-creation of academic articles with LLMs.
"Appropriate acknowledgment based on the principles of research ethics should ensure transparency, ethical integrity, and proper attribution.
"Ideally, this will promote a collaborative and more inclusive environment where human ingenuity and machine intelligence can enhance scholarly discourse."
The guidelines present opportunities for academic communities worldwide and can be applied across all academic disciplines.
The paper 'Guidelines for ethical use and acknowledgement of large language models in academic writing' has been published in Nature Machine Intelligence.
The guidelines come at a time when LLMs are becoming more prevalent and easy to access, providing a much-needed framework for maintaining the quality and credibility of scholarly work amidst the rapid advancement of AI technology.