Read it in French
This insights and signals report was written by Brittany Amell, with thanks to John Willinsky, John Maxwell, and William Bowen for their feedback and contributions.
At a Glance
| Insights & Signals Topic Area | Federal granting agencies; responses to generative AI and LLMs |
| Key Participants | Canadian federal research funding agencies |
| Timeframe | 2023–2024 |
| Keywords or Key Themes | Generative AI, open scholarship, trust, credibility, open access |
Summary
Policy Insights and Signals Reports scan the horizon in order to identify and analyse emerging trends and early signals for their potential to impact future policy directions in open access and open, social scholarship. They tend to highlight shifts in technology, public opinion and sentiments, and/or regulatory changes both within and outside of Canada. Like OSPO’s policy observations, insights and signals reports aim to support partners in crafting proactive, responsive, and forward-thinking strategies.
This Insights and Signals Report is the third in a series that has focused on evolving discussions centered around artificial intelligence (AI), particularly generative AI (genAI) and large language models (LLMs), and the implications these may have for open access and open social scholarship. Interested in other Insights and Signals Reports focused on AI? You can find them here and here.
Items discussed in this report include:
- An announcement from the Tri-Agency Presidents regarding an ad-hoc expert panel tasked with considering the use of genAI in the grant development and review process
- A summary of the draft guidance for the use of genAI in the grant development and review process, as proposed by the ad-hoc panel
- A response to this insights and signals report from INKE partner John Willinsky (founder, Public Knowledge Project)
Experts Appointed to Panel by Tri-Agency Presidents, Tasked with Considering the Use of Generative Artificial Intelligence in Grant Development and Review
In November 2023, the Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences and Humanities Research Council of Canada (SSHRC), and the Canada Foundation for Innovation (CFI) announced the formation of an ad-hoc panel tasked with providing advice on the use of generative artificial intelligence in the development and review of research grants (NSERC, 2023). Eight experts with experience and expertise in ethics, research administration, and AI were appointed to the panel by the presidents of the three federal research funding agencies. They are:
- Mark Daley (Chair), Chief AI Officer, Western University
- Derek Nowrouzezahrai, Associate Professor, Université de Montréal, McGill University; Canada CIFAR AI Chair
- Aimee van Wynsberghe, Alexander von Humboldt Professor for Applied Ethics of Artificial Intelligence, University of Bonn (Germany)
- Vincent Larivière, Canada Research Chair on the Transformations of Scholarly Communication, Université de Montréal
- Charmaine Dean, Vice-President, Research, University of Waterloo
- Richard Isnor, Associate Vice-President, Research, Graduate and Professional Studies, St. Francis Xavier University
- Rachel Parker, Senior Director, Research, Canadian Institute for Advanced Research (CIFAR)
- David Castle, Professor, School of Public Administration and Gustavson School of Business, University of Victoria; Researcher in Residence, Office of the Chief Science Advisor
The panel met several times over the course of November and provided their advice and recommendations to the three agencies in December 2023 (Science Canada, 2024a). Based on the panel's advice and recommendations, the "Draft Guidance on the Use of Artificial Intelligence in the Development and Review of Research Grant Proposals" was developed and shared with the research community, which was invited to submit feedback on the guidance by June 14, 2024.
Summary of Draft Guidance Proposed by the Ad-Hoc Panel
The proposed Guidance applies to the use of generative AI. Acknowledging the difficulties of pinning down a definition for generative artificial intelligence due to rapidly evolving technology and research, the Guidance describes four properties that can be used to identify generative AI systems. These are:
- AI systems present a straightforward, often conversational, interface that makes deploying the power of the system accessible to a broad range of non-expert users.
- AI systems intrinsically enable iterative design and improvement processes.
- AI systems make available information extracted from enormous amounts of data, by systems using enormous amounts of computing power.
- The output of the AI system approaches a level of sophistication that may cause non-experts to erroneously identify the output as having been human created.
Reviewers and applicants are expected to review these properties before using AI tools in order to determine whether a given tool uses generative AI. If the tool uses generative AI, the Guidance applies.
The proposed Guidance also notes that several core requirements underpin all applications and review processes for the three agencies and, therefore, any guidance and policy related to the use of generative AI. These requirements relate to accountability, privacy, confidentiality, data security, and intellectual property rights.
Citing honesty, accountability, openness, transparency and fairness as key values that anchor existing agency policies, the Guidance draws attention to the alignment between these values and the above core requirements, as well as to “the conduct of all activities related to research” more broadly (Science Canada, 2024b).
Two core concerns motivate the proposed Guidance provided to reviewers of grant applications: the potential for breaches of privacy and a loss of custody of intellectual property. These echo the concerns identified by the National Institutes of Health in their notice, "The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Review Process" (issued in June 2023); however, the draft Tri-Agency Guidance stops short of explicitly prohibiting the use of generative AI. Instead, reviewers are advised to "proceed with caution" (Science Canada, 2024b) and to consult the four properties of generative AI (as outlined above and in the proposed Guidance), as well as the Tri-Agency policy related to conflicts of interest and confidentiality agreements.
The core concerns motivating the Guidance for grant applicants relate mainly to accountability, accuracy, transparency, authorship, and citation. More specifically, the draft Guidance indicates that applicants are responsible for the following:
- Stating if and where generative AI has been used to create application material
- Verifying the accuracy and completeness of information included in application materials
- Ensuring sources are appropriately acknowledged and referenced
However, as noted by the Canadian Association of Research Libraries (CARL) (2023) and detailed here in another observation, many of these responses focus on concerns related to academic integrity, disclosure, and citation. In addition to these concerns, CARL suggests it is also important to consider the potential for false information or misinformation, as well as how datasets for LLMs are developed, copyright (including whether AI-generated work can be copyrighted), privacy, bias, and social impacts (including who has access to generative AI tools and who may not, whether for financial reasons or otherwise).
Key Questions and Considerations
In addition to CARL’s suggestions that policies around the use of genAI also consider the potential for false information, copyright, privacy, bias, and the development of the datasets used to train genAI models, it might be worth considering whether a focus on generative AI (as opposed to machine-learning algorithms, which are the class of algorithms currently used in genAI) or on the use of algorithmic interventions in general is too narrow of an approach.
For instance, some scholars have argued that platform companies—some of the earliest adopters of machine-learning algorithms in automated content moderation processes (Gorwa et al. 2020)—have become increasingly influential as global political actors (Broeders & Taylor 2017). However, this influence is often unacknowledged by state governments and the broader public, leaving policy efforts open to being undermined. Instead, Broeders and Taylor (2017) recommend that the political role of platform companies and online service providers be recognized, and that national governments and international organizations approach providers through the language and channels typical of diplomatic efforts (p. 332).
To center policy responses to AI within broader frameworks of ethics, accountability, and responsiveness, Broeders and Taylor (2017) suggest we start by extending corporate social responsibility frameworks to include political responsibility, and by pairing these frameworks with external forms of political, legal, and popular accountability (p. 321).
Additionally, John Willinsky (founder, Public Knowledge Project, and INKE partner) has offered the following for consideration:
Given the work of the Public Knowledge Project on the topic of how copyright reform could do more to support scholarly publishing, we would point out the legal issues involved in the building of LLMs. This is not just a matter of "the custody of intellectual property rights," as there is a case to be made for scholarship possessing an intellectual property right for the widest possible contribution to knowledge. This can involve assembling open source LLMs dedicated to research purposes, or a consideration of the consequences of unduly limiting LLMs' access to research for the quality of these systems' output and influence. All of which is to say that we would expect any panel on AI to have representation with copyright expertise. A good example of this sort of expertise, although not based in Canada, is found with the Program on Information Justice and Intellectual Property of the American University Washington College of Law, with its focus on copyright exceptions and the right to research, including academic texts and data mining research (see here).
References
Broeders, Dennis, and Linnet Taylor. 2017. “Does Great Power Come with Great Responsibility? The Need to Talk About Corporate Political Responsibility.” In The Responsibilities of Online Service Providers, edited by Mariarosaria Taddeo and Luciano Floridi, 31:315–23. Law, Governance and Technology Series. Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-47852-4_17.
Canadian Association of Research Libraries. 2023. "Generative Artificial Intelligence: A Brief Primer for CARL Institutions." https://www.carl-abrc.ca/wp-content/uploads/2023/12/Generative-Artificial-Intelligence-A-Brief-Primer-EN.pdf.
Gorwa, Robert, Reuben Binns, and Christian Katzenbach. 2020. “Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance.” Big Data & Society 7 (1): 2053951719897945. https://doi.org/10.1177/2053951719897945.
Natural Sciences and Engineering Research Council of Canada (NSERC). 2023. “Federal Research Funding Agencies Announce Ad Hoc Panel to Inform Guidance on the Use of Generative Artificial Intelligence in the Development and Review of Research Proposals.” Government of Canada. November 8, 2023. https://www.nserc-crsng.gc.ca/Media-Media/NewsDetail-DetailNouvelles_eng.asp?ID=1424.
Science Canada. 2024a. “Advice from the Ad Hoc Panel on the Use of Generative Artificial Intelligence in the Development and Review of Research Proposals (2023).” Government of Canada. Innovation, Science and Economic Development Canada. April 10, 2024. https://science.gc.ca/site/science/en/interagency-research-funding/policies-and-guidelines/use-generative-artificial-intelligence-development-and-review-research-proposals/advice-ad-hoc-panel-use-generative-artificial-intelligence-development-and-review-research-proposals.
Science Canada. 2024b. “Draft Guidance on the Use of Artificial Intelligence in the Development and Review of Research Grant Proposals.” Government of Canada. Innovation, Science and Economic Development Canada. April 10, 2024. https://science.gc.ca/site/science/en/interagency-research-funding/policies-and-guidelines/use-generative-artificial-intelligence-development-and-review-research-proposals/draft-guidance-use-artificial-intelligence-development-and-review-research-grant-proposals.