r/Professors 1d ago

Administration Enabling AI Cheating

So, my provost just announced that the "AI Taskforce" had concluded, and a "highlight" of their report involved:

Microsoft Copilot Chat, featuring Enterprise Data Protection, is an AI service that is now available to all students, faculty, and staff at UWM. https://copilot.cloud.microsoft

Cool. So the University is now paying Microsoft to enable students to better cheat with AI?

WTF?

31 Upvotes

54 comments

18

u/liznin 1d ago

The sad part is I doubt any of the admin will see consequences for this. They'll just jump ship in 3-5 years and tout their improved student retention and four-year graduation rates when looking for another job.

10

u/Huck68finn 1d ago

I wouldn't allow cheating. No way. We don't sell our integrity when we take a job. 

Also, just because students have access to Copilot doesn't mean they're allowed to use it in lieu of doing the work themselves. Your admins are tacitly approving AI use because they're too lazy and cowardly to address the issue, but they don't have the nerve to come right out and state it, so I would deliberately "misunderstand" the implications of the committee statement.

Can you put this on your Dept meeting agenda? You can't be the only one who's concerned.

7

u/TaliesinMerlin 1d ago

I would double down on the rhetorical features of writing.

OK, so now you can't ask how someone generated their text. Does it sound generic? Are the sources fabricated? Is the main claim or analysis really shallow? Does it fit a five-paragraph form where the ideas don't really connect? Does it fail to engage an audience? Grade down more heavily for that than ever before. Then find ways to reward rhetorically interesting texts where, yes, they make mistakes, but it sounds genuinely in their voice and it's clear they're trying things with their ideas.

Yes, it's always possible that someone will use GenAI in a way that generates a genuinely interesting essay. But most of the time the GenAI work has at least two of the above problems I named. So I'd raise the bar.

3

u/tvilgiate 1d ago

I agree with this… If anyone used AI on the last essay I graded (US history class) I’m pretty sure that it didn’t really help them. It defaults to vague writing with indirect framings and generic arguments. At least I’m pretty sure if I teach drug history at some point, I can think of questions that can’t be answered accurately by an LLM/where the response by the LLM will contradict what I will say in lecture.

2

u/magicianguy131 Assistant, Theatre, Small Public, (USA) 16h ago

Correct. I have my students respond to chapters. The ones who use AI turn in vague recaps of the chapter or play, which I explicitly call out as not what I want.

I tell them that AI summarizes, but only you can respond. It's obvious.

2

u/billyions 2h ago

Exactly. We need to raise the bar.

Let tools handle some basics - and encourage them to build human-powered outcomes beyond what we could have asked before. This is the key.

5

u/norbertus 1d ago

I'm at an R1 state school and had a conversation with the director of the first-year program to the effect of "what do we do about this?"

One thing we discussed -- on a less pedagogical level, and more in terms of "what is this doing to us" -- is that these AI programs are going to turn us into curators more than creators. So, one thing left for us is to teach editing.

But I totally feel you and the frustration with all of this, and how tone-deaf the administration is in handling it; they don't have a clue what it's like to waste our time evaluating literal mindless machine output.

Sometimes it's really easy to tell if something is AI -- like, I once got a paper about how a dance performance by Yvonne Rainer in the 1970s was a cyberpunk novel by Bruce Sterling.

A step more sophisticated than that is when I see a paper where every paragraph is very evenly measured, same words per sentence, same number of sentences per paragraph, with flawless grammar and no detail. These are so formulaic they can often be detected by intuition and confirmed by running my own prompt through the AI.
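
That evenness can even be counted. As a rough sketch (purely illustrative, mine, and not a reliable detector), one could measure how uniform the sentence lengths are:

```python
import re
import statistics

def uniformity_score(text: str) -> float:
    """Coefficient of variation of words-per-sentence.
    Lower = more evenly measured sentences, one weak signal of
    machine-generated prose. Illustrative only, not a detector."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return float("inf")
    return statistics.stdev(lengths) / statistics.mean(lengths)

# A uniform passage scores lower (0.0 here) than a varied one.
uniform = "The cat sat on the mat. The dog ran in the park. The bird flew over the tree."
varied = "Stop. The dog, startled by thunder, bolted across the muddy yard and would not come back. Why?"
```

Intuition formalized as a number, nothing more; real detection is much harder than this.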

But in a few years, there are going to be tools readily available that can be customized to introduce errors, or mimic a student's writing style from things they did in high school.

1

u/JohnHammond7 23h ago

But in a few years, there are going to be tools readily available that can be customized to introduce errors, or mimic a student's writing style from things they did in high school.

It's already here. Go ahead, try it yourself with ChatGPT. You can upload samples of your writing and tell it to mimic your style. Or you can do exactly what you described: instruct it to add some errors to look more human. I can almost guarantee you've already read dozens of AI-generated papers and had absolutely no idea. This notion that there is a meaningful, detectable difference between the two is completely outdated.

1

u/ElderSmackJack 16h ago

I can tell pretty quickly by reading it. There’s an uncanny valley element to it that makes it obvious to me. It’s just not human. I can’t describe it, but there’s no voice. Usually there will be other tells, like fabricated sources, “in this essay, I/we will,” and of course, the checking software.

All of the above tend to align once I get my first “this isn’t human” thought.

1

u/JohnHammond7 14h ago

Sure, some are more obvious than others, but the point is you don't know when one gets past your detection skills. You can't know.

1

u/ElderSmackJack 7h ago

Not yet, but it’s super easy to forget just how young this technology is. Give it time to settle, and I fully expect the checkers to work in tandem with the programs. I figure education will adapt and work it into certain places and keep it out of others.

Either that, or we’ll all just become diploma mills where no student does anything.

0

u/norbertus 2h ago

There’s an uncanny valley element to it that makes it obvious to me. It’s just not human

I can see it too, but when I need to justify an F, things get more complicated...

1

u/JohnHammond7 2h ago

I can see it too

How can you say this so confidently? You sound like a border patrol agent who proudly proclaims, "no drugs get through my checkpoint." How would you know about all the ones that you've missed?

2

u/sventful 15h ago

Sounds like you should learn about ungrading.

1

u/JohnHammond7 2h ago

Yes, I think this is going to become a lot more popular in the next few years, even necessary in some cases. There will be essentially no way to assess student learning, so they're going to have to assess themselves for some things.

2

u/billyions 3h ago

Now that we have tools that can pass the Turing test and generate text that is hard to differentiate from humans, we may not be teaching writing much longer.

The question is: what is the next level of skill we need that only humans can do? It's not an easy question, but that's where we need to head.

Students who submit AI-generated content are not doing enough to earn a living wage. We get that for free - it's of little value.

Those who can leverage AI tools to do uniquely valuable things will have value. Good grades don't bring opportunities - only skills do. Students need to learn this and take it to heart. It may not be writing from scratch anymore - we need to think bigger.

1

u/trickstercreature 1d ago

Same hat? Mine wants to reduce the composition course to a series of prompts that will "eventually" become the students' own work.

1

u/uttamattamakin Lecturer, Physics, R2 1d ago

Alright then. You may as well create a rubric and train a chatbot to use it, providing examples at each level. Then, feed the writings into the AI. If it’s acceptable for students to cheat in this manner, why should we invest our valuable time reading automatically generated content that lacks substance?

I suggest assigning students the task of writing a one-paragraph prompt for a language model. Then, evaluate both the quality of the prompt and the output generated by it. I anticipate that half of them would still find a way to cheat, even on that assignment.
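
A minimal sketch of what that rubric-chatbot setup could look like (the rubric levels and wording here are invented placeholders; the assembled prompt could be sent to any chat model):

```python
# Hypothetical 4-point rubric encoded as data. Real rubrics would also
# include example responses at each level, as suggested above.
RUBRIC = {
    4: "Specific thesis, original analysis, sources verified and engaged.",
    3: "Clear thesis, mostly concrete support, minor gaps.",
    2: "Generic claims, thin evidence, five-paragraph filler.",
    1: "Vague recap with no argument; possibly fabricated sources.",
}

def build_grading_prompt(essay: str) -> str:
    """Assemble a grading prompt from the rubric plus the student text."""
    lines = ["You are a grader. Score the essay 1-4 using this rubric:"]
    for level in sorted(RUBRIC, reverse=True):
        lines.append(f"{level}: {RUBRIC[level]}")
    lines += ["Essay:", essay,
              "Reply with the score and one sentence of justification."]
    return "\n".join(lines)
```

Whether we *should* close the loop this way is exactly the question, but the mechanics are this trivial.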

I wrote a draft of this post, then, to help my writing process, used these Grammarly AI prompts: "Improve it" and "Make it sound academic."

9

u/DrMaybe74 Writing Instructor. CC, US. Ai sucks. 1d ago

I think we may work for the same diploma mill college.

5

u/solresol 1d ago

It's OK -- they have chosen the worst possible AI service. It's impractical and mostly useless. They probably didn't pay very much for it.

3

u/uttamattamakin Lecturer, Physics, R2 1d ago

It likely came free with Microsoft 365. A subscription to ChatGPT would be more useful.

2

u/ay1mao 1d ago

This is so disappointing.

2

u/Practical-Charge-701 19h ago

Say goodbye to civilization.

1

u/billyions 2h ago

The real world uses tools. It serves no purpose to handicap our students.

They must learn skills valuable enough to earn a living with the free and low cost tools available to everyone.

1

u/banjovi68419 9h ago

Administrators are idiots. So is everyone else though. We're cutting our own throats and higher ed won't exist soon. Why would citizens pay money to 1) CLEARLY not learn and 2) waste time when they could just do everything with AI in like... 8th grade.

1

u/billyions 2h ago

Higher Ed will continue.

But it's very true that certain courses - those that teach skills that businesses and organizations can get for free or at very low cost - will not be viable.

We need to figure out what value our students (using these tools) can offer and teach that. We will adapt and evolve.

1

u/MaleficentGold9745 3h ago

Yep. I tried to bring up AI cheating with an administrator this week trying to explain that my grades have been inflated, and the response was, "See, you're just an excellent instructor!" LOL

2

u/billyions 2h ago

Content that reads like a free tool wrote it can be assessed accordingly. No one needs it.

Remind students that if they want to replace themselves with a free tool, so will everyone else.

2

u/MaleficentGold9745 2h ago

Right?! That's the selling point that our administrators are trying to push. This is the future! Teach them how to use the tool, or they will be out of a job! The whole thing is just such a shitshow I don't blame my peers for retiring early.

1

u/billyions 2h ago

We can tell when it's just cut and paste from the AI. It's those students who are losing out.

There's a way to use these tools without just replacing ourselves.

We can do more, more efficiently than ever before. We need to set our sights higher.

What can we ask of them that was not possible before?

1

u/MaleficentGold9745 2h ago

None of that is possible if students can't read or write, or don't possess any critical thinking skills. That's not what's happening, and I think that is the disconnect between the faculty and administration.

Edit, sorry talk to text shenanigans!

2

u/billyions 1h ago

True. Literacy is critical if we don't want to be taken advantage of. Reading - and knowing what you want to write - is key.

I wonder if students could be tasked with figuring out how we recognize AI-generated content? Can they back it up with details?

Can they map the patterns? Count the emojis? Compare a living wage to $20 a month?

Propose the most valuable human skills over the next 3 generations?

Address the environmental impact of our key industries?

Compare pros and cons of isolationism vs international collaborations?

Evaluate mortgage decisions?

Logistics of global food supply?

Track the numbers of pollinators geographically and predict the impacts over the next two generations?

Estimate ice mass and sea levels in New York City?

Develop production and distribution means for the next communicable disease?

Evaluate the costs of hiring four employees vs two ai-enabled employees?

Review regulations that help / hurt the availability of affordable housing? ( Tax advantages for primary residences vs vacation or rental properties)?

Chart literacy and life expectancy rates across nations? Propose explanations?

Analyze the language, themes, characters, or other elements that make Shakespeare so compelling? Quantify them.

Propose a business, develop a comprehensive plan, website, branding, and social media or marketing plan?

It's not like the world is short of challenges.

We just need to engage our students in the various and many ways they are needed and truly valuable.

They want to be worth much more than $20/month - and we / the world needs them.

1

u/billyions 3h ago

It's a disruptor. Skills that can be done by a machine will increasingly be offloaded to machines.

What can we ask of our students - in any discipline - that was too much to ask before these tools became available?

It's changing work, it's changing lives, it's changing education.

Before, we might have asked students to prepare a budget. Now we can ask for a comprehensive business plan including a budget. We might have asked for a creative logo design - now we ask for a full set of branded artifacts. Outcomes where students actively push back and drive the process will be much better than passive delegation.

Those who excel will still win. Those who can be replaced, will be replaced. It's as true for our students as it is for us. How can we leverage tools to move our field forward?

0

u/uttamattamakin Lecturer, Physics, R2 1d ago

Writing needs to evolve similarly to mathematics in the past. Before calculators, mental math ... remembering your times tables and division tables to 12x12 was essential for understanding the subject. Now there is a lot more emphasis on problem solving and understanding numbers more deeply. (The "new math" that some think is useless).

Language models (LLMs) function like writing calculators, so we should implement a writing exam where students compose one to two-page essays using pen and paper. This should be paired with lessons on the strengths and weaknesses of LLMs, teaching students to view them as tools, not replacements for their own thinking. It's important to show that LLMs recognize their own limitations.

To help my writing process, I used these Grammarly AI prompts: "Improve it" and "Shorten it."

"Improve" as in I wrote the post, then let an LLM clean it up. IMHO that shouldn't count as having cheated... students should have to "show their work" in the form of the raw prompt version.

6

u/Practical-Charge-701 19h ago

LLMs are not like calculators, and that's a dangerous myth. Calculators are accurate. In fact, they'll give you the exact same answer to the exact same problem. They don't just make up answers to paper over ignorance.

2

u/norbertus 2h ago

That's a really good point.

Additionally, nobody gets confused as to whether or not a calculator has a mind.

LLMs pose an ontological problem that calculators don't, namely: deception is part of their design goal.

LLMs are designed such that a reasonable person might mistake their output for the result of a person with a mind.

1

u/uttamattamakin Lecturer, Physics, R2 17h ago

In the sense that they offload some of the thinking to a device, they are like calculators. As for calculators always being accurate: they weren't always like that.

0

u/Front-Possession-555 14h ago

Accurate to the extent that the user knows what the right answer is. The analogy is apt because I, a complete math dummy, might be able to get some result with a calculator but I couldn’t tell you an integer from an axis. It had numbers in it so it must be right if the calculator says so??

And that’s where I think there’s a lot of room for improvement at the assessment level. Writing profs know style, syntax, and grammar, then get mad that AI produces accurate output based on what’s being assessed. What I hear when I hear complaints about “grading a bunch of AI” is the assumption that students know the right answer and just choose to be “lazy” about it. I doubt a math prof—when faced with a bunch of students using calculators and getting wrong answers—would ban calculators. In fact, the math profs I know require additional proof of the completed work because they know assessing output alone is meaningless.

3

u/v_ult 22h ago

LLMs are not capable of “recognizing” things

-3

u/uttamattamakin Lecturer, Physics, R2 20h ago

You know what I meant. They respond like they understand that they have limitations. They know that they don't have eyes and that they're basically just language models.

1

u/v_ult 20h ago

It’s just important to push back against tech bro speak like LLMs know or understand or realize things

1

u/uttamattamakin Lecturer, Physics, R2 17h ago

"Tech bro speak"? It's just a way of colloquially describing it. The real inside "tech bro" way of talking about this is to call it a hallucination. People accept that word due to a negative connotation.

-1

u/v_ult 17h ago

I said what I meant

1

u/DionysiusRedivivus FT, HUM, CC, FL USA 6h ago

How is doing math with or without a calculator or abacus for that matter analogous to formulating a sentence in conversation? Because do that several times and record it in squiggly lines and voila - you have writing. There’s a big difference between doing math and being able to communicate verbally.

2

u/uttamattamakin Lecturer, Physics, R2 6h ago

Beyond just calculating with numbers, math is a form of communication. Once you're talking about geometry, algebra, and the math beyond them, you're talking about a language - and at those levels, a language that will be universal.

E = mc²

You could probably show that to a space-faring alien squid, and because they would know the same basic relationships, they would know what E stood for and have some idea of what mc² was.

Indeed, for a species very different from us but as intelligent as we are, math would be the very first thing we would communicate with.

0

u/DionysiusRedivivus FT, HUM, CC, FL USA 6h ago

How’s that compare to a child saying “pass the juice, please.” Or “get out of the road, there is a car coming?” Articulating a physics formula to warn me of an impending collision is neither functional nor intuitive (within a cultural group).
Mathematical and scientific models are specialized and descriptive subsets of communication, but by no stretch are they skills developed as toddlers which simply need to be transposed from noise to equivalent symbols and syntax by children (granting shapes and numbers are names imposed on recognized patterns and not functioning as formulas).

3

u/uttamattamakin Lecturer, Physics, R2 5h ago edited 5h ago

Both the mathematical equation and the sentences that you uttered communicate an idea. Granted, using a vector equation to represent passing juice would be overkill, but one could do it.

You'd sound like the Conehead aliens from the old Saturday Night Live sketch doing it.

🍹(0, 0) ==> 🍹(10cm x, 20cm y)

See? A mathematical vector equation that accomplishes asking someone to pass the juice. Maybe for fun, I could try writing a differential equation to describe the motion.

Math is just another language, but I could show that equation to someone from China, and they'd have some idea what it meant.

1

u/billyions 2h ago

We write Alexa "skills".

Human languages follow rules that can be (and have been) programmed and codified.

When we ask the weather, it's AI that provides the report.

These tools are passing the Turing Test - it's hard to tell whether we're interacting with a human or machine.

The best will use tools to master skillful communication. The least will become obsolete.

Communicators are needed as much now as ever. We can teach masterful skills, directed at real world challenges.

1

u/billyions 2h ago

Exactly. They type much faster than I can.

When a person is driving the process, it is possible to generate a much better outcome, faster, by using AI. It is also equally possible to get completely off track and waste hours.

The ability to generate novel, useful artifacts remains in the hands of humans. We need to incorporate tools and challenge ourselves to do more, of more value, or at a faster pace than before.

No one teaches pony express delivery services anymore. As always, whole industries/areas will adapt, dwindle, or thrive.

-1

u/banjovi68419 9h ago

Comparing AI to calculators is bafflingly wrong. Like I don't feel like the same species as you right now.

2

u/billyions 2h ago

They are, though. Statistical calculators that process input language, parse it, and suggest a recommended response. It's math. Lots of math, trained on a large corpus of data.
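
As a toy illustration of that "statistical calculator" idea, here's a bigram counter: vastly simpler than a real LLM (which uses neural networks over far longer contexts), but the same principle of predicting the likely next word from a corpus:

```python
from collections import Counter, defaultdict

# Tiny stand-in "corpus"; a real model trains on billions of words.
corpus = "the cat sat on the mat and the cat ran to the door".split()

# Count which word follows which.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word`."""
    return counts[word].most_common(1)[0][0]

# predict_next("the") returns "cat", since "the cat" occurs most often.
```

No mind anywhere in that loop; just counting and arithmetic, scaled up enormously.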

They write code to generate spreadsheets.

They write code to generate code.

They generate human-sounding text. Overly symmetrical machine-like text, but legible.

When competent humans leverage tools, they will typically outperform both the tools by themselves and humans who don't use tools.

The game will be won by those who leverage tools to do more than they ever thought possible.

1

u/uttamattamakin Lecturer, Physics, R2 6h ago

Yes, I am like one of the species of the past that was unafraid of fire while you were afraid of it. Which one of those species was the better one?

1

u/JohnHammond7 2h ago

It's okay. These people can continue living in the dark. It will just mean more jobs for us in the future while they're still screaming with pitchforks.