
AI Course Policy Guidance


Rapid developments in the sophistication and availability of generative Artificial Intelligence (AI) tools have had a dramatic impact on the academic landscape. The following guidance is provided to support Pacific University faculty in navigating this new landscape, deciding on and communicating their course policies, and supporting students as they come to grips with the capabilities and limitations of these tools.

Table of contents
Considerations as you develop your approach to AI
Communication and dialog with your students
Teaching students the strengths and weaknesses of AI
Approaches to constraining AI use
Encouraging responsible attitudes to AI with assignment design
Responding to misuse of AI tools
Crafting your AI use policy
Example of a low use policy
Example of a policy that accepts AI use for specific assignments
Example of a policy that encourages the use of AI tools
Beneficial uses of AI for learning

Considerations as you develop your approach to AI
Communication and dialog with your students

The current generation of AI tools is sophisticated enough that it is often not possible to detect their use in assignments with confidence. While it may be pedagogically important that students not rely on them for some tasks, it is often not realistic to enforce course policies with proof of misuse. Instead, educators must treat adherence to their policies as voluntary on the part of students. To foster this kind of cooperative culture, it can be most effective to focus on creating an atmosphere of trust and open communication with your students around learning goals and applications of these technologies.

A common approach is to open the semester with a discussion of the AI landscape, the considerations that students should be aware of in navigating that landscape, your AI use policy for the class, and the rationale for that policy. Discussing the knowledge and skills that students will gain from your class, with and without the use of AI, can encourage them to become willing collaborators ready to work within the parameters you set. Having this kind of discussion can also help students feel comfortable coming to you with questions if they aren't sure whether a particular use is acceptable.

Teaching students the strengths and weaknesses of AI

One of our responsibilities as educators is to give students the tools they need to navigate the contemporary professional and intellectual landscape, and understanding the role that AI plays in our respective fields will be part of that work. Seriously consider the ways in which AI might assist work in your field in the coming years, and how you might integrate those uses into your course design and policies to teach students to get the most out of these tools. For example, AI has been shown to be a powerful tool for role-playing patient interactions, generating practice case studies, and creating sets of revision questions based on chunks of input text or a specified web page. (See below for a list of initial suggestions for classroom use of AI.) Also, while this may be a concern for some educators, AI is particularly effective at correcting grammar and even refining high-level stylistic choices, which can be especially valuable for students writing in a second language.

Conversely, we need to help students to be conscious of some of the challenges and ethical issues posed by AI.

  • Much publicity has been given to AI hallucinations (fabrications confidently presented as fact in the text generated by an AI tool). Any information given by an AI tool needs to be carefully fact-checked by consulting more reliable sources, and students may need to learn, for example, specific skills in finding, identifying and working with peer-reviewed sources in order to do this.
  • AI tends to reproduce any biases present in the data it is trained on, and even to exacerbate them, a problem compounded by the impression of impartial authority its responses convey.
  • Serious concerns have been raised about the use of intellectual property of human creators as training data for AI tools.
  • Similarly, the ways in which data are used by large AI companies are not always clear, and data input into AI tools should not be considered private. Students should not generally be required to make an account with an AI provider or to use an AI tool. If you are including work with AI tools in your class, consider offering alternative methods or asking students to work in pairs.
  • It can be tempting to take direction from AI tools, which can produce polished, convincing-seeming outputs, but by their nature they tend toward the generic, and relying on them can stifle creativity.

It is a skill in itself to work with an AI tool in a way that elicits useful responses, and there are now abundant web resources dedicated to ‘prompt engineering’—that is, crafting prompts to give the best chance of successful outcomes. General recommendations include expecting to follow up any given prompt with clarifications and adjustments to shape the responses you are getting, and specifying the materials that you want the tool to draw upon in answering your questions.
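As a concrete (and purely hypothetical) illustration of both recommendations, an initial prompt might read: “Acting as a patient attending a first physical therapy consultation, role-play the conversation with me. Base your responses only on the case notes I paste below.” If the responses run too long or too formal, a follow-up such as “Keep your answers to two or three sentences, and speak in a more hesitant, informal voice” steers the output in the desired direction. The persona, materials, and constraints here are invented examples; substitute ones suited to your own field.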

Approaches to constraining AI use

A difficult question to answer in the current context is this: if an educator does want to limit students’ use of AI in a particular setting, how can they enforce that limit? It can be tempting to treat AI use in a similar way to ordinary plagiarism, but many of the tools and methods that have been effective against plagiarism do not work in this new context. Instead, by focusing on the learning that we want to achieve through a particular task, we can both design assignments that are less susceptible to AI completion and give students the motivation to be willing collaborators in that work.

At the time of writing (summer 2024), evidence suggests that the many software tools designed to detect AI use are not reliable enough to serve as evidence of misuse or plagiarism. Careful editing of AI-produced text, whether by a person or a secondary tool, can easily remove its hallmarks and cause it to escape detection. Far more importantly, these detectors are known to produce false positives (that is, flagging a student’s work as AI-produced when it was not), and an accusation made on the basis of a false positive could have serious consequences for student trust and well-being. As a result, AI detectors are not a reliable tool for enforcing AI use policies.

Other solutions do exist for artificially constraining the use of AI in assignments. Educators may ask students to complete particular assignments in class, or in a Google Doc whose edit history can be reviewed to see when chunks of text were pasted into the document. For exams, lockdown software exists that can restrict a student’s device while the exam is in progress.

Encouraging responsible attitudes to AI with assignment design

Another approach is to carefully redesign assignments to deemphasize tasks that AI can complete, combined with transparency that increases students’ motivation and involvement in their own learning.

A good first step is to give serious consideration to the exact nature of the learning that you want students to achieve through a particular task, use this to shape the task, and clearly communicate those objectives to students. In some cases, it may be possible to develop an assignment that requires a degree of creative thinking or reflection on individual experience that only a human mind can provide. In other cases, explaining to students what the specific learning objectives are can create willing engagement with the task. To test their learning, you can develop collaborative activities, such as asking students to explain a concept to a partner and work together to correct any misapprehensions.

Some examples of assignment designs that discourage misuse of AI are as follows:

  • scaffolded assignments that include regular check-ins and emphasize process above polished product
  • projects that include in-class work in the planning and execution
  • assignments that prompt students to contribute their own reflections on the material and on their own learning
  • assignments that span different modalities, and in which students are expected to be able to discuss their progress and ideas

In addition, educators can demonstrate to students what responsible and effective use of AI tools can look like. Some examples of specific tasks where AI use can be helpful to students are as follows:

  • role-playing patient interactions, particularly when asking the AI interlocutor to inhabit a specific persona or attitude
  • generating practice case studies to develop analytical skills
  • creating sets of questions for students to test their own knowledge of course content, based on chunks of input text or a specified web page
  • redrafting text according to stylistic adjustments, or giving feedback on a textual input
  • testing their own knowledge by first writing out their own understanding of a topic, then asking the AI for its summary, and comparing the two

In each of these cases, it can be helpful for an educator to demonstrate to students how to use the tool in this way, including exploring which initial prompts to use and how to respond with follow-up questions.
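For instance, to model the self-testing activity in the third bullet above, you might demonstrate a starting prompt along the lines of: “Using only the text I paste below, write five short-answer questions that test understanding of its main concepts. After I answer each one, tell me what I got right and what I missed.” This wording is only a suggested starting point; refining it live in front of students is itself a useful demonstration of how follow-up prompts shape the results.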

Responding to misuse of AI tools

When unsanctioned student use of AI does occur, it can also be effective to think in terms of root causes and to try to address them. For example, students sometimes use AI tools to generate textual answers to questions when they feel insecure about their knowledge of grammar. You can anticipate these worries and address them in your assignment and grading design: for example, you can tell students exactly how many points are awarded for correct grammar and spelling (a share that students often overestimate) relative to other aspects such as conceptual synthesis and reflective ability.

If you are concerned about a student’s possible use of AI, it can be effective to have a conversation with them about their engagement with your class and to try to find out why they are seeking outside help.

Crafting your AI use policy

Your course AI policy should be built specifically to reflect the learning objectives of the class and the nature of your field. There may be existing guidance for AI-related policies at your program or department level. As with all course policies, a constructive tone promotes confidence, understanding, and motivation among students more effectively than a punitive tone, which can imply a pre-existing relationship of mistrust.

Below are some suggested elements to include in your course policy.

  • A clear definition of AI in your context, describing the different types that students may encounter and naming particular tools as examples.
  • A clear and succinct appropriate use statement, laying out which uses are accepted and which are not.
  • A rationale, explaining why particular uses are and are not accepted. In particular, where AI use is discouraged it can be very helpful to explain which aspects of their own learning a student would lose by using AI tools, and why they are important.
  • Brief examples of appropriate and inappropriate use in your context.
  • Where AI use is accepted, a clear set of guidelines as to how to cite or otherwise acknowledge that usage in the text of an assignment, with examples.

It is worth acknowledging that, at this time, nearly everyone uses some form of AI when they write, since a broad range of commonplace tools like grammar checkers and predictive text are technically AI tools. For this reason, while there may be good reason to be cautious about AI use, it is probably not realistic, or perhaps even desirable, to have an absolute ‘no use’ policy.

It is worth reiterating the relevant policy in assignment descriptions as well as in the syllabus to ensure that students are fully conscious of the requirements when they tackle the assignment. It can also be reassuring to students to acknowledge that this technology is new, that many of them may not yet be sure what appropriate usage is, and that they will never be penalized in any way for asking you whether a particular use is acceptable.

Example of a low use policy

In [discipline/profession], we rely on [skills/competencies], which need to be strengthened and developed through practice. An objective of this class is to teach you [idea generation; analytical competency; critical thinking; fluent writing; etc.], and you will be given the opportunity to practice these skills without the use of AI through [list example assignments or activities as appropriate]. Using generative AI tools to do this work for you at any stage of the assignments in this course would inhibit your learning, and as a result is not permitted in this class. For example, using AI to [draft sections of an essay; draft an essay outline; generate examples; create imagery; etc.] would not be appropriate use for this class, since you would miss the opportunity to learn [conceptual synthesis; argumentation; etc.].

If you have questions about any part of this policy or whether something counts as use of AI, please reach out to me.

Example of a policy that accepts AI use for specific assignments

Generative AI is a powerful tool for [idea generation/text generation/image creation] but must be used in a critical and thoughtful manner. As a result, you will have the opportunity to practice responsible use of AI in [specific assignments], with appropriate attribution and reflection. For example, you may be asked to use AI to [examples of specific tasks]. In these cases, AI-generated text or images should make up no more than [percentage] of the work submitted, and you must clearly specify which parts are AI-generated.

For other assignments such as [examples], the objective is to practice [specific skills], and so the use of AI would inhibit your learning and is not permitted. The assignment description will clearly state whether AI use is accepted and how it may be used, and may specify which tools should be used.

Where AI use is permitted, an acknowledgement such as the following should be included in the text of the assignment:

“I acknowledge the use of generative AI in the completion of this assignment, namely the use of [AI tool]. I used this tool to [generate ideas; revise text; generate images; etc., naming specific instances], using [list the prompts used with the AI tool]. I worked critically with this input by [fact-checking; revising; analysing; etc.]. I take full responsibility for the accuracy of the materials submitted.”

If you have questions about any part of this policy or whether something counts as use of AI, please reach out to me.

Example of a policy that encourages the use of AI tools

In this course, students are encouraged to explore the capabilities and limitations of generative AI for [idea generation/text generation/image creation]. You will have the opportunity to practice responsible use of AI in any assignment and in any way you choose, with the requirement that you fully acknowledge and describe your use of AI in the text of the assignment, and that you work with it in a responsible and effective manner.

Where AI use is permitted, an acknowledgement such as the following should be included in the text of the assignment:

“I acknowledge the use of generative AI in the completion of this assignment, namely the use of [AI tool]. I used this tool to [generate ideas; revise text; generate images; etc., naming specific instances], using [list the prompts used with the AI tool]. I worked critically with this input by [fact-checking; revising; analysing; etc.]. I take full responsibility for the accuracy of the materials submitted.”

When working with an AI collaborator, the responsibility rests with you to ensure accuracy and mitigate bias in the end result.

In many cases it will be appropriate to document and reflect on your engagement with the AI, for example by showing the materials you were working with, documenting your revisions, and reflecting on the process. See the assignment description for any specific requirements.

If you have questions about any part of this policy or whether something counts as use of AI, please reach out to me.

Beneficial uses of AI for learning

Below is a growing, non-exhaustive list of activities for which generative AI can be a particularly useful tool for students. We are always looking for more examples, so if you have used AI in an interesting way with your students, or have heard of good applications by your students or by other educators, please use our Uses of AI in Teaching form to tell us about it!

  • The AI tool is used as an interlocutor in role-playing patient interactions. In particular it can be interesting to ask the AI to inhabit a specific persona or attitude, and perhaps work through a couple of different possibilities to see how they might play out.
  • Generative AI is used to create practice case studies for students to develop their analytical skills, and to give feedback on their responses. A great benefit of AI is that it can generate as many case studies as needed.
  • A student directs an AI tool to work from a pasted chunk of text or from a specified web page, and to create a set of questions they can answer to test their own knowledge of the material.
  • The AI tool is used to generate example text in a particular style and then to redraft it under the student’s direction, so the student can quickly see the effect of each change; alternatively, the student pastes text that they have written and asks the AI to redraft it with a particular stylistic adjustment, or to give feedback on some aspect of the text.
  • A student tests their own knowledge of a topic by first writing a summary of their own understanding, and then either asking the AI directly for feedback, or asking the AI to write its own summary and comparing the two.
