PURPOSE OF GUIDELINES: To provide guidance and regulation for the use of generative AI (GenAI) in graduate studies in the Department of Computer Science, based on: the program/degree-level competencies that students must demonstrate to be awarded their degree, practices acceptable to the discipline, and the general principles of responsible conduct of research and academic integrity.

CGPS has developed a framework for GenAI use which advises that transparency in use is, at this time, the best practice for balancing the usefulness of generative AI against its inherent complexities. The framework requires each program to address the use of generative AI in its program guidelines.

Principles: Academic Integrity; Transparency and Disclosure; Data Privacy and Confidentiality; and Awareness of Bias in Algorithms and Training Data.

GenAI in Coursework: For graduate courses in the Department of Computer Science, permission to use generative AI in coursework is at the discretion of each instructor. Instructors are required to provide clear instructions on the permissibility of GenAI in completing coursework, including acceptable uses of GenAI (if any) and disclosure requirements. Students must adhere to the instructions provided in each course syllabus.

GenAI in Research: Generative AI can facilitate high-quality research and scholarly activities, and it has the potential to accelerate research productivity. Regardless of how students gather, synthesize, and report information and ideas, they are responsible for ensuring the accuracy of the information in their written work and for citing all sources appropriately. Currently, GenAI is fallible in several ways, including perpetuating biases from the data used to train AI models, generating sources that do not exist, and failing to cite sources to the rigor required in graduate-level scholarship. Students are expected to strive for high standards both in their research and in academic integrity. Hence, full transparency is required regarding the use of generative AI in their research and their writing.

At the end of their degree, there is a set of learning objectives that each student in a graduate program in Computer Science or Applied Computing must demonstrate (for MSc, for PhD). Students who rely too heavily on software to generate their written documents risk not properly satisfying these learning objectives, and not being able to defend the work as rigorously as a student who writes their own document and is aware of its intended meanings. As a general rule, an author should not cite a source unless the author has read it. Moreover, it is important to recognize that the legal environment around intellectual property and copyright with respect to the use of GenAI is not completely settled. Writers should exercise caution in their use of GenAI so as not to infringe on copyright or other intellectual property protections, in case the generated text does so or fails to properly attribute the work of others.

With this in mind, we developed the following guidelines for use of generative AI in thesis/dissertation work.

  1. Types of generative AI uses:
    • Prior to using any generative AI in thesis work, the student must seek unambiguous written permission (by email) from the supervisor(s). For MSc students, the supervisor(s) may seek additional approval from the Advisory Committee. This permission should indicate the types of use, including whether GenAI will be used to search, design, outline, write code, draft, write, edit, or generate images or other content types. It should also give an indication of the scale of its use, and the permission should be updated if more types of uses are planned.
    • Generative AI tools may only be used provided their use is disclosed, either through a transparency statement included in the thesis or as part of the description of the research methodology. This disclosure should indicate the types of use, including whether GenAI was used to search, design, outline, write code, draft, write, edit, or generate images or other content types. It should also give an indication of the scale of its use. For PhD students, a disclosure regarding the use and planned use of GenAI should also be included in the PhD Proposal.
  2. Academic Integrity and Bias:
    • Students are responsible for their use of GenAI in researching background information including verifying all sources.
    • Students using generative AI (and AI in general) must be aware of the biased nature of training data and be able to respond to questions regarding the training data used.
    • Students may not cite reliance on GenAI as a basis for appeal if it is determined that they failed to adequately defend their thesis.
  3. Transparency and Disclosure:
    • Use of Large Language Model (LLM) generative AI software such as ChatGPT, Bing Copilot[1], and others, in the writing of reports for advisory committee meetings, thesis/dissertation proposals, thesis/dissertation is permitted, but must be disclosed using transparency statements.
    • Use of generative AI software for the production of illustrations, audio, video, or other types of audio-visual material must be described either in a transparency statement, or in the methodology section of the thesis.
  4. Privacy:
    • Students must ensure that any data provided to a GenAI tool does not contravene the Data Management Policy of the University of Saskatchewan, the University of Saskatchewan Human Research Ethics Policy, or any conditions of the ethics approvals obtained. For example, privacy may be violated if experimental human data is uploaded to a GenAI tool that is not endorsed by the University of Saskatchewan. Students making use of another researcher’s data must also ensure that they have permission or a license to share it with the GenAI systems used. The Copilot application available through PAWS is endorsed by the University of Saskatchewan, but should be used in accordance with the University of Saskatchewan Data Classification, without using restricted data.

Contravening these guidelines will constitute academic misconduct.

[1] As of July 1, 2024, all graduate students have access to Copilot through PAWS.

 

PUBLISHING:

In our discipline, students disseminate their scholarly work through theses, publications in peer-reviewed journals, book chapters, books, peer-reviewed conference papers, conference presentations, and public presentations.

Most journals and publishers now have rules regarding the use of GenAI when publishing in their venues. These rules can vary substantially and should be consulted before submitting a paper. For example, Nature will not allow the use of GenAI images, but Nature and all Springer journals allow the use of GenAI text provided its use is documented in the methodology or acknowledgements sections. Elsevier indicates: “Elsevier’s AI author policy states that authors are allowed to use generative AI and AI-assisted technologies in the writing process before submission, but only to improve the language and readability of their paper and with the appropriate disclosure.” Students and supervisors are responsible for keeping up to date with the editorial/publisher policies of the venues where they intend to publish, to ensure compliance with those policies. GenAI tools may not be listed as co-authors on published works, which is consistent with most publishers’ GenAI policies.

Failing to follow editorial rules with regard to publishing or peer review will constitute a violation of the responsible conduct of research policy.

 

REVIEWING: 

Students may be engaged in reviewing for journals. Journals and granting agencies provide regulations on whether generative AI tools can be used in reviewing work. For example, NSERC (and the Tri-Council agencies) have draft guidance that indicates: “inputting application information into generative AI tools outside of a protected granting agency domain could result in breaches of privacy and in the loss of custody of intellectual property. This would place a reviewer in breach of the Conflict of Interest and Confidentiality Agreement for Review Committee Members, External Reviewers and Observers.” Similarly, Elsevier indicates: “Reviewers should not upload a submitted manuscript or any part of it into a generative AI tool as this may violate the authors’ confidentiality and proprietary rights and, where the paper contains personally identifiable information, may breach data privacy rights.” Further: “Generative AI or AI-assisted technologies should not be used by reviewers to assist in the scientific review of a paper as the critical thinking and original assessment needed for peer review is outside of the scope of this technology and there is a risk that the technology will generate incorrect, incomplete or biased conclusions about the manuscript.”

 

TRAINING:

Supervisors and students are encouraged to engage in workshops and training to develop AI literacy, focusing on ethical use, academic integrity, and critical evaluation of AI tools.

Library learning module (in GPS960) – Academic Integrity Module: Understanding Generative AI

TLE https://teaching.usask.ca/learning-technology/gen-ai/overview.php#top

GenAI DEFINITION: “Generative artificial intelligence (generative AI, GenAI, or GAI) is artificial intelligence capable of generating text, images, videos, or other data using generative models, often in response to prompts. Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics.” (Wikipedia)

 

SOURCES: