The Responsible Use of Generative AI in Research: Guiding Principles and Recommendations

Open and Universal Science (OPUS) Project

Artificial Intelligence (AI) has become an integral part of our daily lives, reshaping how we interact with technology and influencing various sectors, including research and innovation. In recent years, there has been an unprecedented surge in advancements in AI, driven by factors such as increased data availability, enhanced computing power, and breakthroughs in machine learning algorithms. Among these advancements, the development of generative AI, capable of producing content across multiple domains, has garnered significant attention.

Generative AI, powered by foundation models trained on extensive unlabelled data, has led to the emergence of ‘General Purpose AI,’ capable of generating diverse content, including text, images, code, and more. The quality of output generated by these models often rivals that of human-generated content, blurring the lines between artificial and human creativity.

However, with the widespread adoption of generative AI comes a host of challenges and ethical considerations. The proliferation of AI-generated content raises concerns about the spread of disinformation and the potential misuse of AI for unethical purposes. In the realm of research, while generative AI holds promise for accelerating scientific discovery and improving research processes, it also poses risks to research integrity and raises questions about responsible use.

To address these challenges, the European Research Area Forum, in collaboration with various stakeholders, has developed guidelines for the responsible use of generative AI in research. These guidelines aim to provide researchers, research organizations, and funding bodies with a framework for utilizing generative AI ethically and effectively.

Key Principles:

The guidelines are built upon key principles drawn from existing frameworks, including the European Code of Conduct for Research Integrity and guidelines on trustworthy AI developed by the High-Level Expert Group on AI. These principles encompass reliability, honesty, respect, and accountability throughout the research process.

Recommendations for Researchers:

  1. Maintain Responsibility: Researchers are ultimately accountable for the integrity of content generated using AI tools and must remain critical of the output’s limitations and potential biases.
  2. Transparency: Researchers should disclose the use of generative AI tools in their research processes, detailing how the tools were used and acknowledging any limitations or biases in the output.
  3. Privacy and Intellectual Property: Researchers must exercise caution when sharing sensitive information with AI tools, ensuring compliance with data protection regulations and respecting intellectual property rights.
  4. Legal Compliance: Researchers should adhere to national, EU, and international legislation, especially concerning intellectual property rights and personal data protection.
  5. Continuous Learning: Researchers should stay up to date on best practices for using generative AI tools and undergo regular training to maximize the benefits of these tools.
  6. Sensitive Activities: Researchers should refrain from substantially using generative AI in sensitive activities that could impact other researchers or organizations, such as peer review processes.

Recommendations for Research Organizations:

  1. Supportive Environment: Research organizations should promote, guide, and support the responsible use of generative AI in research activities, providing training and guidelines for ethical usage.
  2. Monitoring and Oversight: Organizations should actively monitor the development and use of generative AI systems within their institutions, providing feedback and guidance to researchers.
  3. Integration of Guidelines: Research organizations should integrate generative AI guidelines into their existing research practices and ethics guidelines, fostering open discussions and consultations with stakeholders.
  4. Local Governance: Whenever possible, organizations should implement locally hosted or cloud-based generative AI tools that they govern themselves, ensuring data protection and confidentiality.

Recommendations for Research Funding Organizations:

  1. Promotion and Support: Funding organizations should promote and support the responsible use of generative AI in research, aligning funding instruments with ethical guidelines and legal requirements.
  2. Internal Usage: Funding organizations should transparently and responsibly use generative AI in their internal processes, ensuring fairness and confidentiality.
  3. Transparency from Applicants: Funding organizations should request transparency from applicants regarding their use of generative AI and provide clear ways for applicants to report such use.
  4. Monitoring and Training: Funding organizations should monitor and actively participate in the evolving landscape of generative AI, funding training programs for ethical and responsible AI use in research.

The responsible use of generative AI in research requires a collaborative effort from researchers, research organizations, and funding bodies. By adhering to ethical guidelines and fostering a culture of transparency and accountability, we can harness the potential of AI to advance scientific knowledge while mitigating potential risks and safeguarding research integrity.

More information is available on the EU website.
