designing an ai policy for my class

posted on Jan 23, 2025
danny caballero

ai in my classes

Over the last several semesters, I have taught courses in physics, computing, statistics, and data science. In each of these courses, students have used different forms of AI to complete their assignments. I have noticed students using AI in a variety of ways, from asking it for syntactical help, to brainstorming ideas for a project, to completing an assignment entirely.

In the past, I talked briefly with students about the use of AI in their work, and I encouraged them to reflect on their use. My policy was largely hands-off, with the expectation that students would use AI ethically and responsibly. When I encountered students using AI in ways that I thought were unethical or irresponsible, I would talk with them about their use and encourage them to reflect on their choices. I would allow them to complete the assignment again, and would score their work on the second attempt, but with a lower total possible score.

While this approach nominally worked, I found that increasing access to these tools, and their increasing sophistication, made it ever more attractive to use them in more ways – some of which seemed truly problematic.

why should i care?

AI is an exploitative, extractive, and oppressive technology. It is also a technology with great promise. But the billionaire tech class and their enablers are trying to inject AI into every aspect of our lives. Devices and tools that were once simple and required little thought are now infused with AI. And not even useful AI, but AI that is mostly designed to collect data, interactions, and behaviors.

And to what end? To sell us more stuff. To control us. To manipulate us. To extract more value from us. To make us into consumers, not citizens. To make us into serfs to their whims.

Now, you might say, “Danny, you’re being dramatic.” And I would say, “What’s new?”

But, look around your home, your work, your community, and so on. How many devices promote themselves as ‘smart’ or ‘intelligent’? How many advertisements do you see, hear, or interact with that are powered by AI? How many of your interactions with the internet are mediated by AI?

The answer is: a lot. And increasingly more and more.

The president of the United States has indicated that AI is a national priority. He recently announced a 100,000,000,000 USD investment in AI, which has now become a 500,000,000,000 USD investment in the most recent reporting.

I truly hope this does not come to pass.

Let me be crystal clear.

Investing this much money in AI is a huge mistake.

  1. If there’s 500,000,000,000 USD to spend – hell, if there’s 100,000,000,000 USD to spend – we should be spending it on healthcare, education, housing, and food security. We should be spending it on the people who need it most, not on the billionaires who control these companies. There are tangible and immediate needs that could be addressed with this money.
  2. This investment is a clear signal that we will continue to destroy the planet, exploit workers, and extract value from some of the most vulnerable in our society. This is what we are communicating as our values when we invest in AI irresponsibly. There is simply not enough energy production in the United States for that kind of build out – in fact, many of these hyper-scale data centers are powered by the dirtiest energy sources available – with companies lobbying for exemptions from environmental regulations and building in places where energy is cheap and dirty, and suing citizens who try to stop them.
  3. This investment is a decision to continue to allow the billionaire tech class to control us, to manipulate us, and to extract value from us. This is a decision to make us consumers without agency, not citizens with power.

how does this relate to my class?

I have been an educator for nearly two decades. I have taught high school, undergraduate, and graduate courses. I have developed and designed professional development for teachers, and I have worked with students and teachers to develop curriculum and assessments. My entire career has been about teaching and learning.

My students are the most important part of that work. They inherit the world that we prepare them for. They reflect on the values we express in our teaching. They are critical partners in their education. And we must treat them as such.

I cannot in good conscience present AI to my students as a neutral tool. It is not. It is a tool that is designed to exploit, extract, and oppress. It is a tool that will demand more energy, more water, and more resources. It is a tool that will be used to control, manipulate, and extract value from them.

At the same time, AI is not going away; it has great potential to help students learn, to provide new forms of scaffolding and support, and to offer accessible pathways to understanding. However, AI development has quickly outpaced our ability to respond to it, leading to reporting that we are, at a minimum, unprepared for its use in higher education – and likely completely clueless about its use and unable to respond to it.

co-designing a policy

I am teaching Classical Mechanics, a course I taught as a postdoc at CU-Boulder for a couple years, and one that I taught last year at MSU. This year, I made an intentional choice to co-design a policy with my students for the use of AI in our course.

There were several reasons why I made this choice:

  1. A policy that I develop alone is inconsistent with my values of democracy and equity and with my practice of offering students agency in their learning. A classroom policy is an agreement between myself and my students, and it should reflect our shared values and goals.
  2. Many of my students have not had space to think critically about their use of AI, and I wanted to provide that space. They are barraged with messages about the utility and necessity of AI, and I also wanted to provide a counter-narrative that complicated that message.
  3. My students have always shown me incredible insights into their own learning and the learning of their peers, and I wanted to leverage that expertise. If we don’t listen to their expertise, we are not only doing them a disservice, but we are also missing out on the opportunity to learn from them.
  4. I wanted to model for my students a different way of thinking about AI, one that is critical, reflective, and intentional. I wanted to show them that we can make choices about our use of AI, and that we can make those choices together.
  5. I hope to seed the idea that AI domination does not have to be our future. We can develop policies, practices, and technologies that are equitable, democratic, and just. We can develop uses of AI that are ethical, responsible, and sustainable.

where did we start?

I provided students with a brief overview of these competing views of AI, and pointed out the energy consumption of these technologies with a very rough back-of-the-envelope calculation. The slides I presented are online if you are interested. But the key slides appear below.

slide 1

slide 2

slide 3
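For the curious, an estimate like the one in the slides can be sketched in a few lines. Every number below is an illustrative assumption of mine (per-query energy estimates in public reporting range widely, roughly 0.3–3 Wh), not a figure taken from the slides:

```python
# Rough back-of-the-envelope estimate of chatbot query energy use.
# All inputs are illustrative assumptions, not measured figures.
wh_per_query = 1.0       # assumed energy per chatbot query (Wh)
queries_per_day = 1e9    # assumed global daily queries
days_per_year = 365

kwh_per_year = wh_per_query * queries_per_day * days_per_year / 1000
us_home_kwh_per_year = 10_500  # rough average annual US household electricity use

homes_equivalent = kwh_per_year / us_home_kwh_per_year
print(f"~{kwh_per_year:.2e} kWh/year, about {homes_equivalent:,.0f} US homes")
```

Even with conservative assumptions, the point stands: the aggregate energy cost is far from negligible, and it scales with every new use we normalize.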

After some discussion and conversation, I set my students to work on answering the following questions on their own:

  1. What are ways that you think that AI can be used productively in our classroom?
  2. What are ways that you think that AI can be used unproductively in our classroom?
  3. What do you think are acceptable uses of AI in our classroom?
  4. What do you think are unacceptable uses of AI in our classroom?
  5. How should we document the use of AI in our classroom?
  6. Once we define a policy, how should we collectively enforce it?

Then students worked in small groups to discuss their answers to these questions. They were asked to come to a consensus on their answers, and to reply to a survey to share their answers with me. Their responses were thoughtful, insightful, and critical. They are summarized below.

acceptable use cases proposed by my class:

  • All uses are OK
  • Brainstorming, getting ideas, finding information
  • Asking for help, clarifying concepts, elaborating on ideas
  • Outlining, structuring, and editing writing
  • Fixing errors, debugging code, checking solutions

unacceptable use cases proposed by my class:

  • No use is OK
  • Asking directly for answers and solutions
  • Using AI to complete the entire assignment
  • Using AI to write papers or reports
  • Turning in work that is not your own

ways of documenting AI use proposed by my class:

  • Summarizing the use of AI and how it helped
  • Documenting the use of AI in the assignment
  • Providing prompts, responses, and outcomes
  • Detailed documentation including screenshots and date/time of use

ways of collectively enforcing our policy:

  • It is not possible.
  • Honor system; hold your friends accountable
  • Collective policy helps us all; encourage honesty and integrity
  • Report violations to Danny
  • Fail the assignment if you violate the policy
  • Fail the course if you violate the policy

I used their ideas to develop four proposals for our policy. They are voting on these proposals by ranking them in order of preference.

ranked choice vote on our ai policy

  • Proposal 1: We adopt a policy that does not allow AI use at all.
    • Violation results in a failing grade on assignment.
    • Repeated violations result in failing the course.
  • Proposal 2: We adopt a policy that allows AI use for brainstorming, help, and editing.
    • AI cannot be used for direct answers or completion of assignments.
    • We expect documentation of AI use, but it can be informal.
    • Violations are discussed with Danny; the first violation requires a redo of the assignment, and repeated violations result in a failing grade.
  • Proposal 3: We adopt a policy that allows AI for use in nearly any way.
    • We require detailed documentation of use; this means screenshots, prompts, responses, and outcomes.
    • Violations are discussed with Danny; the first violation requires a redo of the assignment, and repeated violations result in a failing grade.
  • Proposal 4: We adopt a policy that allows AI for use in any way with no documentation required.
    • Violations of the policy are limited to sharing answers or solutions with others.

I’m not sure which policy will be selected; they have until Monday to vote. But having them co-design this policy has been a useful initial exercise.
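Ranked-choice votes like this one are commonly tallied by instant runoff: repeatedly eliminate the proposal with the fewest first-choice votes until one proposal holds a majority. A minimal sketch (the ballots and the `P1`–`P3` labels below are made up for illustration, not our actual votes):

```python
from collections import Counter

def instant_runoff(ballots):
    """Tally ranked ballots by instant runoff. Each ballot is a list of
    proposals ordered most- to least-preferred. Repeatedly eliminate the
    proposal with the fewest first-choice votes until one has a majority."""
    ballots = [list(b) for b in ballots]
    while True:
        counts = Counter(b[0] for b in ballots if b)
        total = sum(counts.values())
        leader, votes = counts.most_common(1)[0]
        if votes * 2 > total:          # strict majority of remaining ballots
            return leader
        # eliminate the weakest proposal; break ties alphabetically
        loser = min(counts, key=lambda c: (counts[c], c))
        ballots = [[c for c in b if c != loser] for b in ballots]

ballots = [
    ["P2", "P3", "P1"],
    ["P2", "P1", "P3"],
    ["P3", "P2", "P1"],
    ["P1", "P3", "P2"],
    ["P3", "P2", "P1"],
]
print(instant_runoff(ballots))  # P1 is eliminated first; P3 then wins 3-2
```

One nice property for a classroom vote: nobody is punished for ranking a long-shot proposal first, since their ballot transfers to their next choice if that proposal is eliminated.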
