Generative AI Faculty Resources

Welcome to our resource hub for faculty exploring the evolving role of generative AI in education. Here, you’ll find tools, strategies, and guidelines to help you integrate AI thoughtfully and responsibly into your teaching. From understanding AI's potential and limitations to addressing concerns around academic integrity, we aim to provide insights that support both faculty and students. Be sure to also check out the companion page for students, designed to foster responsible and informed AI usage in their academic work.

Leveraging Artificial Intelligence

Since the release of ChatGPT in late 2022, there has been a flurry of articles written on the pros and cons of using generative artificial intelligence (GenAI) in the classroom. While many embrace the changes that GenAI has the potential to bring to our understanding of authorship, plagiarism, and assessment (e.g., Compton, n.d.), the International Centre for Academic Integrity’s Statement on Academic Integrity and Artificial Intelligence (PDF) highlights many of the concerns raised in the literature. The use of text-generating tools in the classroom generally, or for writing activities more specifically, is an ongoing debate worth considering.

When used with caution and discretion, however, introducing students to appropriate uses of GenAI tools may help mitigate academic integrity breaches and teach core critical thinking and literacy skills. Consider how you might use GenAI to help students:
  • Brainstorm topics
  • Identify keywords
  • Create or critique an outline
  • Identify sources (e.g., organizations, individuals)
  • Summarize
  • Revise
  • Fact-check
  • Give or receive feedback
  • Develop metacognition and self-regulation (e.g., the Brett D. Jones and David Hicks example)

Strategies for Success
  • Discuss or demonstrate both the strengths and limitations of using GenAI.
  • Demonstrate and guide students in the permitted uses of GenAI for the assignment, if any (e.g., provide example prompts for brainstorming or identifying keywords).
  • In your assignment instructions, clearly reference the assignment's learning outcomes when explaining why certain uses of GenAI are inappropriate.

Harm Considerations of Generative AI Tools

Generative AI tools can indeed make life and work a lot easier! Like any tool, however, there are many things to consider before using generative AI. You are responsible for educating yourself on the ethics of any AI you use and understanding the risks when you are permitted to use it.

While AI offers exciting opportunities to enhance accessibility and inclusivity, it’s crucial to recognize that it can also inadvertently exclude many individuals, particularly those who already face accessibility challenges. As we explore GenAI tools, it's essential to prioritize equitable access for all users. For example, although many generative AI tools are currently free, a growing number are starting to charge fees for access, which can create barriers for those who may not be able to afford it.

At the same time, GenAI tools can also function as accessibility aids for learners.

Examples:
  • Live Captioning Tools: These tools provide real-time transcription of spoken content, making audio and video content accessible to individuals who are deaf or hard of hearing.
  • Screen Readers: AI-powered screen readers convert text displayed on a screen into speech or Braille, enabling visually impaired users to access digital content.
  • Text-to-Speech (TTS) Software: TTS tools transform written text into spoken words, assisting users with reading difficulties or those who prefer auditory learning.
  • AI-Powered Sign Language Interpreters: These tools use AI to translate spoken language into sign language in real-time, facilitating communication for deaf and hard-of-hearing individuals.

One may think that technology is objective and neutral. Generative AI, however, is trained on real-world data and information, such as images and text scraped from the internet. This information is rife with human biases. Human biases may be embedded in the AI model during its creation, and biases in the datasets used for training can influence how it generates content. Additionally, AI can develop its own biases based on how it interprets the data, and user input may inadvertently guide it toward biased responses.

AI Bias: "also referred to as machine learning bias or algorithm bias, refers to AI systems that produce biased results that reflect and perpetuate human biases within a society" (IBM Data and AI Team, 2023). Some common biases include gender stereotypes and racial discrimination.

Recognizing these factors is essential for critically evaluating AI-generated content. By understanding potential biases in the data, model design, and user input, you can better assess the credibility and accuracy of the AI's output.

Examples:
  • Healthcare: Underrepresented data of women or minority groups can skew predictive AI algorithms. For example, computer-aided diagnosis (CAD) systems have been found to return lower accuracy results for Black patients than for white patients.
  • Applicant tracking systems: Issues with natural language processing algorithms can produce biased results within applicant tracking systems. For example, Amazon stopped using a hiring algorithm after finding it favored applicants based on words like “executed” or “captured,” which were more commonly found on men’s resumes.
  • Online advertising: Biases in search engine ad algorithms can reinforce job role gender bias. Independent research at Carnegie Mellon University in Pittsburgh revealed that Google’s online advertising system displayed high-paying positions to men more often than to women.
  • Image generation: Academic research found bias in the generative AI art generation application Midjourney. When asked to create images of people in specialized professions, it showed both younger and older people, but the older people were always men, reinforcing gendered bias of the role of women in the workplace.
  • Predictive policing tools: AI-powered predictive policing tools used by some organizations in the criminal justice system are supposed to identify areas where crime is likely to occur. However, they often rely on historical arrest data, which can reinforce existing patterns of racial profiling and disproportionate targeting of minority communities.

The development, training, and use of GenAI models require systems that consume a significant amount of energy and contribute to carbon emissions. They also consume vast amounts of water for cooling and generate electronic waste from the hardware used in AI systems. While developers continue to seek more sustainable ways of operating, it is important to consider whether your use of AI is worth the environmental impact and to look for ways to use GenAI tools more efficiently when possible.

As the reliance on AI tools grows, it's crucial to prioritize sustainable practices and consider the ecological footprint of these technologies to mitigate their impact on the environment.

Since GenAI is trained on real-world data, text, and media from the internet, the content it provides to users may be misleading, factually inaccurate, or outright misinformation (deepfakes, for example). Because it’s unknown exactly where the data used to train AI originates, and AI cannot specify its sources, its output may not be credible or reliable for academic use. The information provided may be implicitly or explicitly biased, outdated, or a “hallucination.”

AI hallucination: GenAI fabricating information or sources even though it is meant to be trained on real-world data. IBM (n.d.) examines the various causes of AI hallucinations, indicating that common factors include “overfitting, training data bias/inaccuracy and high model complexity.”

To avoid using or spreading misinformation, verify the accuracy of AI-generated content using reliable sources before including it in your work.

Examples:
  • Incorrect predictions: An AI model may predict that an event will occur when it is unlikely to happen. For example, an AI model that is used to predict the weather may predict that it will rain tomorrow when there is no rain in the forecast.
  • False positives: When working with an AI model, it may identify something as being a threat when it is not. For example, an AI model that is used to detect fraud may flag a transaction as fraudulent when it is not.
  • False negatives: An AI model may fail to identify something as being a threat when it is. For example, an AI model that is used to detect cancer may fail to identify a cancerous tumor.

Like other digital tools, generative AI tools collect and store data about users. Signing up to use generative AI tools allows companies to collect data about you. This data can be used to make changes to tools to keep you engaged.

User data may also be sold or given to third parties for marketing or surveillance purposes. When interacting with AI tools, you should be cautious about supplying sensitive information, including personal, confidential, or proprietary information or data.

Examples:
  • User Inputs: Text or prompts you enter into the tool may be collected to train the AI model, which means any sensitive or personal information included in your prompts could also be stored.
  • Metadata: Information about your session, such as IP addresses, device details, and usage patterns, may be logged.
  • Interactions: Feedback you provide (e.g., upvotes, corrections) may be used to refine the model's responses.
  • Uploads: Any files or documents uploaded to AI tools may be processed and, in some cases, retained.
  • Account Information: For users with accounts, personal details like names, emails, and user preferences might be stored.

Note: Always review the terms and conditions of any application you use, and be aware that these agreements often include a clause allowing the company to modify the terms at any time.

Academic Integrity & Copyright when using GenAI

As you guide students in navigating the world of generative AI, it’s essential to emphasize the importance of upholding academic integrity. Begin by reviewing your institution’s policies regarding AI tool usage in assignments, as these guidelines can vary. Familiarize yourself with your college’s academic integrity and generative AI standards to ensure your teaching aligns with institutional expectations. Encourage students to experiment with AI tools to enhance their learning while stressing the importance of critically evaluating AI-generated content for credibility and potential biases. Finally, remind students to properly disclose or cite AI-generated outputs in their work, promoting transparency and respect for intellectual property.

Things to Consider

Generative AI tools rely on information from the internet, which includes both accurate and reliable sources as well as outdated, biased, or incorrect information. These tools can sometimes provide inaccurate explanations or misrepresent sources, so it’s essential to critically evaluate their outputs—just as you would with any information source. Learn more about evaluating generative AI outputs on the AI Literacy & Evaluation Criteria page.

Students should be aware that using generative AI tools typically requires creating an account. When they create accounts or interact with AI platforms, their personal information, inputs, and even behavioural data can be collected, which may be stored or shared with third parties.

Additionally, there’s a risk that sensitive information shared with AI could be used to further train the model, potentially leading to unintended data exposure. Encourage students to be mindful of the information they share during account creation, to review the privacy policies and terms of use of any generative AI tool to understand how their data will be handled, and to be cautious about sharing any personal or sensitive information.

When using generative AI tools, it’s essential to consider the purpose and intent behind your use to maintain academic integrity and ethical standards. Ask yourself: What am I using this software to create? For instance, if you’re using an image generator, are you trying to replicate the style of a specific living artist? This raises ethical concerns, as it may infringe upon that artist’s intellectual property. Similarly, consider whether you’re using AI to complete assignments or produce content without informing your instructor. Using generative AI tools in a class setting without your instructor’s knowledge can lead to misunderstandings and potential breaches of academic integrity policies. Always be transparent about how you’re incorporating AI into your work, and ensure that your use aligns with your institution’s policies.

Generative AI can enhance various tasks, but it cannot replace essential skills like research, critical thinking, and evaluating information. While AI tools can quickly provide information, the ability to conduct thorough research—formulating effective questions, identifying credible sources, and synthesizing information—is vital. Critical thinking enables you to analyze AI-generated content for validity and discern bias or inaccuracies. Instead of replacing these skills, generative AI should be viewed as a supportive tool that enhances your research capabilities, allowing you to focus on deeper analysis and critical engagement with the material for more informed outcomes.

When you use generative AI tools to create content, it’s essential to disclose that it’s AI-generated. In academic work, be sure to check your college's policies and let your instructor know how you’re using AI, as transparency is part of academic integrity.

Disclosure Statements

When using generative AI tools, it's important to include clear disclosure statements to inform your audience about the AI’s role in content creation. Unlike citations, which specifically credit the source of information, disclosure statements provide transparency about how AI has been used—whether it's for generating ideas, drafting content, or providing assistance. This distinction helps maintain ethical standards and fosters trust, ensuring that your audience understands when and how AI has contributed to the material.

There are many different ways to present disclosure statements, depending on how AI was used and the requirements of your institution. Whether brief or detailed, the important thing is that they clearly explain how AI contributed to your work, ensuring transparency and academic integrity. When in doubt, always consult your instructor about their expectations.

Recommended Format

tool, purpose, prompt

Example:

  • Microsoft's Copilot was used to generate ideas for this assignment using the following prompt: "[prompt]".

Remember to always ask your instructor about what is allowed in terms of AI use for your assignments and tests. Using AI tools without your instructor's permission can be considered an academic offence. Check your college's academic integrity policies to learn more.

Generative AI tools often use material sourced from the internet, typically without explicit permission from original authors. Some argue that this practice can resemble "copyright laundering" by creating derivative works based on existing material without directly infringing on copyright laws.

Things to Consider

The nature of generative AI (GenAI) content and the increasing popularity of GenAI tools have led to important copyright questions, including defining copyright ownership of generative AI works and the rights of creators whose works have been used to train AI models. The following provides an overview of copyright considerations to keep in mind when using GenAI tools. Due to the rapidly changing GenAI copyright landscape, the information presented below will change over time.

GenAI Input & Output

The two main areas of copyright concern revolve around generative AI input and output.

  • Input: Generative AI relies on large amounts of data to build its models and generate outputs. Some GenAI tools may have trained their systems by copying content without permission from the creators or copyright holders. This has raised copyright infringement concerns and has been the focus of several lawsuits.
  • Output: In Canada, copyright ownership of generative AI content is still uncertain, which means that a creator may not be able to claim copyright protection for works they create using GenAI tools. At the same time, it’s challenging to provide attribution to the original authors whose works were used to train GenAI models.

Copyright Best Practices When Using GenAI Tools
  • Understand Your Rights and Responsibilities: GenAI tools have terms of use that outline ownership and appropriate use of the tool. It’s important to review and understand the terms so that you are aware of your responsibilities and rights as a user. For example, some tools may claim ownership of the output, or make users responsible for copyright infringement that may result from using the tool.
  • Respect Copyright and Licensing Terms: Always ensure that you have permission from the copyright holder or have verified that you can use a piece of text, image, or other content as input in GenAI tools. It’s also helpful to learn more about the data used to train the GenAI tool to identify potential copyright infringement issues.
  • Consider Using Public Domain and Openly Licensed Content: Try to use prompts and inputs that are in the public domain or released under an open licence, since these materials can be used more freely. For materials with an open licence, such as Creative Commons, make sure to review the licence terms to understand permitted use of the material. For more information on using Creative Commons content in GenAI tools, see Understanding CC Licenses and Generative AI. Similarly, consider using GenAI tools whose training data are from the public domain and/or licensed by the tool's developers.
  • Use Original Prompts and Inputs: Consider using original prompts and inputs instead of copying or adapting another creator’s work. Before using your original work, it’s important to review the tool’s terms of use to understand how inputs are handled within the tool.

Check your college's copyright guides for more information about generative AI and copyright.

College Resources

Explore these faculty resources tailored to support your college community in effectively incorporating generative AI into teaching and learning.

Note: If your college isn't listed, there are no faculty-specific resources available at this time. For more student-centered information about GenAI, try our Generative AI module.

Tell-Tale Signs of AI Generated Text

  • Generic Style: Repetitive writing style that lacks a unique voice or perspective, and lacks specific details.
  • Missing Context: Lacks detailed information or a nuanced understanding of your specific topic.
  • Missing or Fake Sources: Citations are missing, or the AI tool may fabricate (“hallucinate”) citations.
  • Overuse of Jargon: Overly reliant on certain words that are not commonly used in everyday language.
  • Inconsistencies: Statements may contradict one another or may be completely unrelated to the topic.
  • Outdated Information: Information is not always up to date and might contain inaccuracies.

AI Tools for Faculty

Generative AI offers a wide range of possibilities for enhancing teaching and streamlining instructional tasks. From drafting course materials to designing assessments, AI tools can save time and inspire creativity in the classroom. To help you explore these options, we've compiled a list of AI tools tailored for faculty needs.

For even more options, check out Futurepedia, a comprehensive directory of AI tools that is frequently updated to include the latest innovations in AI technology. With these resources, you can discover new ways to support your teaching and engage your students.

Explore a variety of AI tools beyond Copilot and ChatGPT to enhance classroom learning and teaching practices.

  • AI Teaching Assistant Pro
    A personal teaching and instructional design assistant developed by Contact North to support you with any topic or subject, offering assistance with tasks such as drafting multiple-choice tests, creating course descriptions, and generating ideas for outcomes and syllabi.
  • AudioPen
    A voice-to-text tool that helps with writing blogs, lesson plans, and emails by converting spoken words into text.
  • Curipod
    Enables teachers to create interactive lessons quickly, generating customized lessons with text, images, and activities such as polls and word clouds.
  • Eduaide.AI
    Offers a wide range of resource types for creating instructional materials, including lesson plans, discussion prompts, and assessments.

Helpful Resources

Attributions

Information on this page was adapted, with permission, from "What Students Should Know About AI & ChatGPT" by Centennial College Libraries, "The Generative AI Toolkit" by Conestoga College, the "Generative AI @ Fleming" guide by Fleming College, the "Generative AI Guide" and "AI & Copyright" by Seneca Polytechnic Libraries, and "Designing Research Assignments that Enhance Student Research Skills" by Kathleen Oakey, a faculty librarian at Sheridan College.

