TRU Centre for Excellence in Learning and Teaching

Month: July 2025

CELT Summer PD Series – Post #2: Ethical AI use

For this second installment of our 2025 Summer Asynchronous Professional Development, we offer two activities to help you reflect on how to navigate ethical considerations when using artificial intelligence (AI) for teaching and learning. While the tools continue to evolve, the ethical considerations associated with AI aren’t changing as quickly. Learning how your AI use – and your students’ AI use – intersects with privacy, data security, fairness, and other concerns increases your capacity to make responsible choices.


Activity 1 – Approaching Generative Artificial Intelligence Ethically
by Carolyn Ives, Diane Janes, and Brett McCollum

Whether it’s discussed in class or not, generative artificial intelligence (AI) has already transformed how many students work. If we want to make sure that transformation is a positive one, we can help learners think about ethical uses of AI, considering issues such as bias, fairness, privacy, data security, transparency, accountability, and the human dimension.

The World Economic Forum presents seven guiding principles for the use of AI in educational institutions:

  • Purpose: Explicitly connect the use of AI to educational goals
  • Compliance: Affirm adherence to existing policies
  • Knowledge: Promote AI Literacy
  • Balance: Realize the benefits of AI and address the risks (and they’ve included a helpful graphic outlining both the benefits and the risks)
  • Integrity: Advance academic integrity
  • Agency: Maintain human decision-making
  • Evaluation: Continuously assess the impact of AI. (WEF, 2024)

 

The link above expands on each of these principles, often with links to additional resources.

After reading the webpage linked above, choose one of the seven principles and journal your thoughts related to that topic. We encourage you to capture a few notes about your current understanding of the principle, your concerns connected to it, and some optimistic thoughts on how that principle can improve student learning in your classrooms. Next, add an event to your calendar reminding you to look back at your notes in exactly one year. In your own role as a learner, it will be interesting to see how your understanding, perceptions, or predictions change between now and next year.

If you are looking for more information on ethical engagement with artificial intelligence, there are several TRU resources available as well, such as TRU’s libguide Artificial Intelligence: A Guide for Students, which includes sections about appropriate and inappropriate uses of AI: https://libguides.tru.ca/artificialintelligence. The AI in Education website developed by the Learning Technology & Innovation team also includes a Critical AI Framework with a section on ethical considerations: https://aieducation.trubox.ca/critical-ai-framework/. This website continues to be updated.

You can also check out these links on AI ethics from other institutions:

Center for Teaching Innovation. (2025). Ethical AI for teaching and learning. Cornell University. https://teaching.cornell.edu/generative-artificial-intelligence/ethical-ai-teaching-and-learning

James Madison University Libraries. (2025, May 2). Artificial intelligence (AI) in education: AI and ethics [libguide]. https://guides.lib.jmu.edu/AI-in-education/ethics

Moquin, S. (2024, November 26). Ethical considerations for AI use in education [blog post]. https://www.enrollify.org/blog/ethical-considerations-for-ai-use-in-education

 


Activity 2 – Approaching Indigenous data sovereignty with respect when using AI
by Diane Janes and Carolyn Ives

AI use in Indigenous education, and in reconciliation work across all educational experiences, involves additional considerations for ethical and respectful practice. Through thoughtful design, learning activities are expected to address Indigenous data sovereignty, cultural protocols, and respectful partnerships. This includes prioritizing Indigenous-led initiatives, nurturing collaboration, and ensuring that AI development and implementation are guided by Indigenous knowledge systems and values. The links below offer a short starting list from the conversations now taking place between Indigenous and allied educators about how to engage with AI respectfully, honouring Indigenous ways of knowing as curriculum is designed and created using AI.

For this activity, we invite you to listen to the Radical AI Podcast interview with Jason Edward Lewis. Lewis is a Hawaiian and Samoan digital media theorist, Professor of Computation Arts at Concordia University, and University Research Chair in Computational Media and the Indigenous Future Imaginary. The podcast is longer than the material we often choose for this PD series (it’s about an hour long), but we think you will find it worth your time.

Additional resources on this topic include the following:

Bhattacharjee, R. (2024). Indigenous data stewardship stands against extractivist AI. https://www.arts.ubc.ca/news/indigenous-data-stewardship-stands-against-extractivist-ai/

Government of Canada. (2025). Indigenous-led AI: How Indigenous knowledge systems could push AI to be more inclusive. Research stories: New Frontiers in Research Fund. https://sshrc-crsh.canada.ca/funding-financement/nfrf-fnfr/stories-histoires/2023/inclusive_artificial_intelligence-intelligence_artificielle_inclusive-eng.aspx

Cardona-Rivera, R. E., Alladin, J. K., Litts, B. K., & Tehee, M. (2024). Indigenous futures in generative artificial intelligence: The paradox of participation. In B. Buyserie & T. N. Thurston (Eds.), Teaching and generative AI (Chapter 14). https://uen.pressbooks.pub/teachingandgenerativeai/

Indigenous data sovereignty protocols, and risks of harm. (2024). Section 1C in Principles and guidelines for generative artificial intelligences (GenAI) in teaching and learning. University of British Columbia. https://it-genai-2023.sites.olt.ubc.ca/files/2024/08/Guidelines-GenAI_TL.pdf

Symposium: S4-194. (2024). Open Science, Indigenous Data Sovereignty and the Decolonization of Knowledge Systems. Audio from the Panel discussion. https://sciencepolicy.ca/posts/s4-192/

TheGovLab. (2021). AI Ethics Course: Indigenous Data Sovereignty by Maui Hudson. YouTube video. https://www.youtube.com/watch?v=g8qeZihLf1Q

CELT Summer PD Series – Post #1: AI Foundations

For this first week of Summer Asynchronous Professional Development, we offer two activities to help get you started with using artificial intelligence (AI) for teaching and learning. After all, learning about the possibilities and the limitations of AI helps you respond to the changes you are likely experiencing in your classroom.


Activity 1

by Dr. Wei Yan (CELT Coordinator)

AI has been around for a long time: the term “Artificial Intelligence” was coined in 1956. Generative AI entered the mainstream in November 2022, when OpenAI launched ChatGPT based on GPT-3.5. It went viral almost overnight, largely because of its easy-to-use chat interface, attracting 1 million users in just 5 days, which at the time was the fastest adoption of any consumer software. Two and a half years later, we see many different types of Generative AI tools and their growing impact on our lives and work.

Unless you are a tech nerd, you may not be familiar with how Generative AI works. Unlike traditional AI, Generative AI can generate new content, such as text, images, or music, because it is powered by machine learning models such as large language models. Generative AI is trained on massive amounts of data and learns patterns from it. During training, the model learns how to make predictions and how to refine them. After training, the model constructs new content based on the patterns and probabilities it has learned from those massive data sets.
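If it helps to picture “predicting the next word from learned patterns,” here is a toy sketch in Python. The probability table and words are entirely made up for illustration; this is not how any real model is built or trained, but it shows the core idea of generating text one predicted word at a time.

import random

# A toy "model": for each word, the probabilities of the words that might follow.
# These numbers are invented for illustration; a real large language model learns
# billions of parameters from massive training data instead of a small table.
toy_patterns = {
    "the": {"students": 0.5, "course": 0.3, "model": 0.2},
    "students": {"learn": 0.7, "write": 0.3},
    "learn": {"quickly": 0.6, "together": 0.4},
}

def next_word(word):
    """Pick the next word according to the (hand-written) probabilities."""
    options = toy_patterns.get(word)
    if not options:
        return None  # the toy model has no pattern for this word
    return random.choices(list(options.keys()), weights=list(options.values()))[0]

# Generate a short "sentence" one predicted word at a time.
text = ["the"]
while (word := next_word(text[-1])) is not None:
    text.append(word)
print(" ".join(text))  # e.g. "the students learn quickly"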

Generative AI does not understand language the way we humans do, but it is very good at predicting patterns. In fact, it is so good at language patterns that journal editors and manuscript reviewers correctly identified AI-generated (versus human) writing only 38.9% of the time (Casal & Kessler, 2023). Perhaps this is the biggest challenge we are facing in teaching and learning these days. Of course, there are larger social, legal, cultural, and environmental implications beyond our classrooms. Perhaps, more than ever, this is also a moment for us to reflect on what makes us unique as human beings.

For this activity, I would like to encourage you to take a Moodle course to fill your knowledge gaps regarding Generative AI. The GenAI Quickstart: Foundations for Faculty course was adapted from Concordia University by my colleague Jamie Drozda and the Learning Technology & Innovation team to provide you with the essential understanding to confidently navigate the GenAI landscape. This thoughtfully structured series of nine micro-modules allows you to learn at your own pace, in any sequence that suits your needs, with the flexibility to revisit content whenever you need a refresher. You can access the course using the password quickstart.

You will receive a certificate of completion once all nine modules have been completed. I have completed this course and earned my certificate, and I highly recommend it!

In addition to the above, you may also be interested in attending an upcoming workshop with Dr. Ajay Dhruv on using AI in the classroom:

Chalkboards to Chatbots: Future-proofing teaching with AI. Register here: https://tru.libcal.com/event/3912275

Reference:

Casal, J. E., & Kessler, M. (2023). Can linguists distinguish between ChatGPT/AI and human writing?: A study of research ethics and academic publishing. Research Methods in Applied Linguistics, 2(3), Article 100068. https://doi.org/10.1016/j.rmal.2023.100068


Activity 2

by Dr. Brett McCollum (CELT Director) and Matt Norton (AVP Digital Strategies and CIO)

Copilot is a Generative AI tool that TRU has reviewed for privacy impact. So long as you are signed into your TRU account when using Copilot, your data is not being ‘scooped up’ by the AI model. This is important when we think about intellectual property and student privacy. Also, your prompts and the AI responses are private to you.

So, where do you go to use TRU’s Copilot license on your phone? The easiest option is to download the M365 Copilot app onto your phone from the Apple App Store (for iPhones) or the Google Play Store (for Android phones). QR codes for these options are shown here:


[QR code: M365 Copilot app – Apple App Store (iPhones)]

[QR code: M365 Copilot app – Google Play Store (Android phones)]

After you install the app on your smartphone, sign in with your TRU Microsoft account (i.e., your TRU email and password). You may be prompted to complete two-factor authentication.

Now that you’ve signed in, you can begin a conversation with the large language model (a type of AI).

Here are some suggested starting points:

  • How can I discuss the topic of ethical issues of AI use in my course on (financial operations control in tourism; legal research; innovation and entrepreneurship; social work practice, etc.)?
  • I want you to provide me with a redesigned process for students to write a term paper that incorporates the use of Copilot and also reduces the chance that students will circumvent the intended learning outcomes associated with the assessment. Include a summary of tasks by week, a brief description of student and faculty roles, and how Copilot can be used where it is permitted for a task.
  • Please create a Sankey diagram showing interprovincial migration in Canada between 2012 and 2022. Use real data from the Stats Canada website. Provide a link to the data source that you use.

As you read over the output from Copilot, think of a follow-up question to dig deeper into the topic with the tool.

Take note of what the AI does well, but also where it performs poorly. For example, in our practice, Copilot gave a good description of what the Sankey diagram should look like and it provided a link to the Stats Canada website. However, the image it generated had only labelled axes but no data!
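For comparison, here is a minimal sketch of what a data-backed Sankey diagram involves, using the plotly Python library with invented placeholder numbers (not real Statistics Canada figures). Seeing the pieces a genuine chart needs, namely the nodes, the flows, and the values, makes it easier to spot when an AI-generated image has skipped the data.

import plotly.graph_objects as go

# Placeholder numbers only; a genuine chart would use interprovincial migration
# figures downloaded from Statistics Canada.
provinces = ["BC", "AB", "ON"]  # a small subset, for illustration
fig = go.Figure(go.Sankey(
    node=dict(label=provinces, pad=20, thickness=15),
    link=dict(
        source=[0, 1, 2, 2],                # index into `provinces`: origin of each flow
        target=[1, 0, 0, 1],                # destination of each flow
        value=[12000, 15000, 9000, 20000],  # number of migrants (invented)
    ),
))
fig.update_layout(title_text="Interprovincial migration (placeholder data)")
fig.show()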

Continue practicing with Copilot on your phone or computer. The more detailed your questions are in terms of disciplinary terminology, the better the large language model can draw upon relevant information and potentially produce high-quality outputs. If you are unsure how to do a deeper dive with your prompting, simply ask Copilot how to do that and ask for illustrative examples for your topic – use the tool to learn the tool. Remember to be logged into your TRU account so that your data privacy is maintained.
