Generative AI and Scholarly Publishing
This insights and signals report continues OSPO’s review of the evolving dialogue on the implications generative AI has for open scholarship / open access publishing.
Widespread debates about the future of artificial intelligence (AI) and the need for ethical frameworks and regulatory policies to mitigate potential harms, reignited in 2022 by OpenAI’s release of the generative AI system ChatGPT, continue to receive attention from scholars and media alike.
Policy Insights and Signals Reports scan the horizon to identify and analyse emerging trends and early signals for their potential to impact future policy directions in open access and open social scholarship. This Insights and Signals Report is the third in a series focused on evolving discussions around artificial intelligence (AI), particularly generative AI (genAI) and large language models (LLMs), and the implications these may have for open access and open social scholarship. Items discussed in this report include an announcement from the Tri-Agency Presidents regarding an ad-hoc expert panel tasked with considering the use of genAI in the grant development and review process.
Research security—the ability to identify risks to research processes and outputs and take measures to mitigate them—is a longstanding concern for the research community and its stakeholders, from individual researchers to national governments. Although openness and collaboration are essential for advancing research, greater openness can also bring greater risk. Securing digital data, knowledge, and other intangible outputs is especially challenging. This became evident during the COVID-19 pandemic, when the pivot to virtual work environments and unprecedented levels of global collaboration and research sharing were accompanied by increased security threats (see “Open Scholarship and COVID-19”).