This insight and signals report was written by Brittany Amell, with thanks to INKE Partners John Willinsky and John Maxwell for their comments and contributions.

At a Glance

Topic Area: Federal granting agencies; responses to generative AI and LLMs
Key Participants: Canadian federal research funding agencies
Timeframe: 2024
Keywords or Key Themes: Generative AI, policy development, scholarly publishing, open scholarship, trust, credibility, open access

Summary

This insight and signals report extends an earlier report published on the Observatory about the three federal research funding agencies’ announcement that they would develop guidance on the use of generative AI in the development and review of grant applications (originally published here in November 2024). That guidance has since been finalized and released by the three federal funding agencies (CIHR, NSERC, SSHRC), and can be found here. This report presents questions for further consideration, along with responses from INKE Partners John Willinsky and John Maxwell. It also offers a hypothetical example of how the three federal granting councils’ guidance might be adapted into a working generative AI policy for a journal.

The Guidance

In November 2023, the Canadian Institutes of Health Research (CIHR), the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences and Humanities Research Council of Canada (SSHRC), and the Canada Foundation for Innovation (CFI) announced the formation of an ad hoc panel tasked with providing advice on the use of generative artificial intelligence in the development and review of research grants (NSERC, 2023). Eight experts in ethics, research administration, and AI were appointed to the panel by the presidents of the three federal research funding agencies. (You can read the report we wrote about that announcement here.) In November 2024, the guidance drafted by this panel was finalized and released. There are a few differences between the draft version and the finalized version. Notably, the use of generative AI tools for evaluating grant applications is now explicitly prohibited, and applicants are required to state whether generative AI has been used in the preparation of their grant applications. In addition, the guidance reminds applicants that they are responsible for the information included in their grant applications: they must ensure it is accurate and complete, and that all sources are appropriately referenced. It also reminds reviewers that the use of generative AI tools could result in “breaches of privacy and in the loss of custody of intellectual property” (Science Canada, 2024).

How the Guidance defines generative AI

Rather than define what generative AI is, the guidance offers four points or properties that “applicants and reviewers should carefully review” prior to using a tool to determine whether it uses generative AI (Science Canada, 2024). These points are:
  • Generative AI systems present a straightforward, often conversational, interface that makes deploying the power of the system accessible to a broad range of non-expert users.
  • Generative AI systems intrinsically enable iterative design and improvement processes.
  • Generative AI systems make available information extracted from enormous amounts of data and computing power.
  • The output of the generative AI systems approaches a level of sophistication that may cause non-experts to erroneously identify the output as having been created by humans.

(Science Canada, 2024)

It’s important to note that these properties refer to generative AI systems, which encompass a combination of components designed to be useful to humans and include one or more AI models—the underlying algorithms trained to perform tasks, which can be thought of as the ‘engines’ of AI systems (Bengio et al., 2025). This leaves open the question of whether a tool such as DeepL Translator, which employs a combination of neural machine translation networks and LLMs to transform input in one language into output in another, would be considered a generative AI system under the guidance. Also of note is the ambiguity around whether there is an implicit threshold for disclosing the use of generative AI systems. For instance, would interacting with a generative AI tool at the brainstorming stage of a grant require disclosure? Further, it is not clear how applications disclosing the use of generative AI tools will be treated. Will they go to a special subcommittee of reviewers trained in teasing out what constitutes acceptable and unacceptable use of generative AI? Interestingly, in a report released earlier in 2024, the ad hoc panel carefully noted:

There will also be a need to inform grant applicants why this question [re: use of genAI] is being asked, assure that there is no negative implication for application review, and specifically if this information will be accessible to reviewers and the committee.

The most recent version of the guidance released by the Tri-Council does not include this recommended information, though it may be included in the future—the three funding agencies indicate their intention to review the guidance on a regular basis.

Responses from the INKE Partnership

John Willinsky (Khosla Family Professor Emeritus, Stanford University, and INKE partner) noted that the phrase ‘loss of custody of intellectual property’ is particularly thought-provoking, “as it is not typically part of IP discussions.” Willinsky writes:

I wouldn’t want to suggest to people that they are giving up their copyright claim to a work if they upload it, not that their IP claim applies to anything beyond the exact expression of their ideas. The “custody” part is interesting, as it is not typically part of IP discussions. . . . (The courts have yet to rule in any significant way on the “fair use” issue in the great many cases of GenAI suits.). . . . we may want to consider how (a) the very spirit of scholarship may be about contributing to the broad publicly accessible storehouse of intelligence and (b) how for that storehouse to be research-free, out of academics’ sense that someone is profiting from making their work into such a useful, coherent, and intelligible form and they won’t be sharing in those profits or perhaps even credited for their ideas (which again are not otherwise protected).

While the phrase (‘loss of custody of intellectual property’) evokes a real concern, especially in an era where ideas and content are increasingly processed through opaque generative AI systems, the legal landscape remains unsettled. As Willinsky observed, Canadian courts have yet to issue definitive rulings on the boundaries of “fair use” in the growing number of lawsuits involving generative AI. This makes the phrase more speculative than authoritative, raising questions about what kinds of “custody” or ownership we are referring to in the academic context. Willinsky also noted that framing the question of generative AI “use” as a binary or conscious choice may be overly simplistic:

If I were to note anything it would be around how we are using GenAI now with almost any search, auto-correct, and in consulting research that has drawn on it. The ubiquity suggests that framing the “use” question as a matter of conscious choice may be a little misleading as guidance to researchers. So, one approach could be to compare [a working policy on the use of AI] to the well-established but still loosely defined standards for plagiarism when consulting and drawing on others’ work. . . . Using the plagiarism standard, you might further refine your advice to refer to situations in which one formally prompts an LLM for information, analysis, literature reviews etc. with the result either informing the work (or the brainstorming you refer to) as background compared to being directly cited and thus necessarily credited in a paper. This also sets aside worries about the use of web resources that may well be enhanced by GenAI.

Responding to this piece over email, INKE partner John Maxwell (Associate Professor of Publishing at Simon Fraser University) wrote:

I also think it might help [if the guidance on the use of AI] made a distinction between (a) the need for transparency and declaration (which follows the trend in scholarly communications more generally towards explicit “publication facts” and labels, etc.), and (b) the more substantive implications of the use of generative AI—for instance, whether an AI-written grant would actually be treated differently if its use was declared: what would an editor or a reviewer do, exactly, with the fact that generative AI had been used in the drafting of an article or a grant application? How would they treat it differently, and on what grounds, exactly?

Together, these reflections highlight the need for continued discussion around the use of generative AI in scholarly work—particularly around transparency, accountability, and the evolving norms of intellectual contribution. As the policy landscape continues to take shape, such discussions will continue to be essential.

Further questions for reflection and discussion

The following questions may be useful as starting points on their own, or used in tandem with the sample working policy found in the appendix, to spark further reflection or group dialogue:
  • What constitutes meaningful “use” of genAI? For instance, would using a tool to brainstorm, simplify a sentence, or translate content require disclosure? In other words, is there a threshold for the use of genAI tools that, once crossed, would trigger disclosure?
  • Should different types of genAI tools (e.g., DeepL, which uses a form of neural networks for translation, vs. ChatGPT for writing support) be treated differently in the disclosure process?
  • How can funding agencies and other policy makers ensure that the disclosure requirement does not disadvantage applicants from under-resourced institutions or linguistic minorities who may rely more on genAI tools?
  • What happens after people disclose the use of genAI tools? For instance, do reviewers have access to information about genAI use in applications, and if so, under what conditions? What training or guidelines are being provided to reviewers to help them interpret disclosures of genAI use in a fair and transparent manner?

References

Bengio, Yoshua, Sören Mindermann, Daniel Privitera, Tamay Besiroglu, Rishi Bommasani, Stephen Casper, Yejin Choi, et al. 2025. “International AI Safety Report.” DSIT 2025/001. The International Scientific Report on the Safety of Advanced AI. Commissioned by the Department for Science, Innovation and Technology (UK) and the AI Safety Institute (UK). https://www.gov.uk/government/publications/international-ai-safety-report-2025.

Natural Sciences and Engineering Research Council of Canada. 2023. “Federal Research Funding Agencies Announce Ad Hoc Panel to Inform Guidance on the Use of Generative Artificial Intelligence in the Development and Review of Research Proposals.” Government of Canada. November 8, 2023. https://www.nserc-crsng.gc.ca/Media-Media/NewsDetail-DetailNouvelles_eng.asp?ID=1424.

Science Canada. 2024. “Advice from the Ad Hoc Generative AI Panel of External Experts.” Government of Canada. Innovation, Science and Economic Development Canada. January 12, 2024. https://science.gc.ca/site/science/en/interagency-research-funding/policies-and-guidelines/use-generative-artificial-intelligence-development-and-review-research-proposals/advice-ad-hoc-generative-ai-panel-external-experts.

Science Canada. 2024. “Guidance on the Use of Artificial Intelligence in the Development and Review of Research Grant Proposals.” Policies and Guidelines. November 18, 2024. https://science.gc.ca/site/science/en/interagency-research-funding/policies-and-guidelines/use-generative-artificial-intelligence-development-and-review-research-proposals/guidance-use-artificial-intelligence-development-and-review-research-grant-proposals.

Appendix: Sample of Guidance Adapted for Open Journal Contexts

To support broader discussion about the practical implementation of generative AI guidance, this appendix presents one hypothetical example of how the tri-agency’s recently released guidance might be adapted into a working policy for an academic journal. While funding bodies and journals operate in distinct contexts, both share overlapping concerns around authorship integrity, transparency, and responsible AI use. This sample policy is not meant to be prescriptive but is intended to illustrate how principles from the Tri-Council guidance—such as disclosure, accountability, and risk awareness—could inform editorial practice in scholarly publishing.

Usage of generative AI in submissions to the journal

As it relates to the use of AI in the writing and review of work submitted to [insert name of journal], [we / the editorial board] follow the guidance set out by the three federal granting councils (NSERC, CIHR, SSHRC). A slightly amended version can be found below.

Introduction to the working policy

We recognize that generative AI may be a valuable tool for authors and reviewers in the preparation of submissions, including the potential to improve efficiency, assist English and French speakers, and streamline the writing and review process. However, we strongly recommend that authors and reviewers review the following points prior to using an AI tool in order to determine whether their usage of a tool constitutes the use of generative AI.

Identifying whether this working policy applies

We rely on the four properties of generative AI systems described by the tri-agency ad hoc panel to help us assess whether a tool leverages generative AI and, therefore, whether this guidance applies. These properties are:
  • Generative AI systems present a straightforward, often conversational, interface that makes deploying the power of the system accessible to a broad range of non-expert users.
  • Generative AI systems intrinsically enable iterative design and improvement processes.
  • Generative AI systems make available information extracted from enormous amounts of data and computing power.
  • The output of the generative AI systems approaches a level of sophistication that may cause non-experts to erroneously identify the output as having been created by humans.
We acknowledge that definitions and understandings of generative AI are fluid and continually evolving. As such, we intend to review this guidance on a regular basis so that we can update it to reflect emerging issues and trends.

Guidance

In addition to the above, we draw attention to the following:
  • Authors and reviewers are responsible for ensuring that information included in their submissions is true, accurate, and complete, and that all sources are acknowledged and referenced. Authors should be aware that using generative AI may lead to the presentation of information without proper recognition of authorship or acknowledgement.
  • Privacy, confidentiality, data security and the protection of intellectual property must be prioritized in the development and review of all submissions.
  • In the evaluation of submissions, reviewers should be aware that inputting submission-related information into generative AI tools may result in breaches of privacy and compromise intellectual property rights. Examples include transmission of submission text to online tools such as ChatGPT and DeepL, which may store and reuse the data for future enhancement of the tool.

Suggested Further Reading