
Small-Scale Question Sunday for June 29, 2025

Do you have a dumb question that you're kind of embarrassed to ask in the main thread? Is there something you're just not sure about?

This is your opportunity to ask questions. No question too simple or too silly.

Culture war topics are accepted, and proposals for a better intro post are appreciated.


IMO this comment is way too uncharitable. The problem is maybe 80% solvable, but you're right that solving it requires work, and actually a decent amount of work, so I'd hesitate to call it laziness. I think a lot of people underestimate the typical teacher workload. Many teachers would probably do much better, and especially more efficient, work if you increased pay by 30%, increased staffing by 30%, and reduced class sizes by 20%. (Part of this could be offset by slashing administrative/pseudo-support staffing by 60% or more, but it would still likely require a net investment.) That would give them much more time to plan lessons (instead of rolling out the greatest hits over and over without adapting to the times) and, importantly, to create and assign tests and homework that are AI-resistant, if not AI-proof. Those kinds of assignments and assessments are simply much more time-consuming to create and grade, and, as I mentioned, the need to custom-tailor them to your class and curriculum means they require constant tweaking (which, again, most teachers don't have the time budget to do properly).

With that said, there are certainly some school districts and even some teachers who are scared to grade work honestly, but IMO most of the resistance comes from administrators or even parents rather than the teachers themselves. A lot of teachers would probably prefer to hand out bad grades more often, not less, the current philosophy alleging that this is somehow psychologically damaging notwithstanding.

IMO this comment is way too uncharitable...I'd hesitate to call it laziness.

To clear this up, I didn't call it laziness; I just listed that as a possible pragmatic blocker. My point is that it's trivially solvable in a technical sense. It's really, really easy to think of ways to evaluate students, or have them practice learning, in scenarios where AI cheating could be mitigated. It's not remotely unsolvable in that sense. But there are, to your point, structural and individual reasons that make implementing such a solution harder.

I have sympathy for these defenses, but not infinite sympathy. If it's something any homeschool parent could solve without any innovation, then the school system needs to be able to react to it in order to remain a legitimate concept. We can't just 'oh well...' cheating at scale. If it's really this widespread, it needs to be treated as existential to schooling.

There is no legitimate way an institution of learning can remain remotely earnest about its mission as a concept and still allow graded, asynchronously written reports.

Now of course, many of the blockers to reacting to this are an outgrowth of challenges schools have faced for decades: the conflicted, in-tension-with-itself mission of schooling in general, as described in the excellent book Someone Has to Fail. Schools simultaneously trying to be a system of equality and of meritocracy will fail at both.

But AI has stopped the buck-passing; like so many other things, AI is a forcing function of exponential scale. I think that if the can gets kicked any further, every single semester, every single assignment, the entire idea of schooling massively delegitimizes itself.

I think you could honestly do it much more easily than that: for example, you could keep all of your existing assignments, but simply tell students that you will ask some number of random students a question about their essay/assignment/whatever at the start of the class in which you return their assignments. There's been a recent study which shows a lot of people do not retain a lot of information when they use AI to write essays for them. This would catch a good chunk of AI-submitted assignments with very minimal work.

If they "cheat" and use AI anyways, but memorize enough of their assignment to answer a question? Mission accomplished; the nominal goal is to teach students the information, so we shouldn't actually care about how they learn it.

There's been a recent study which shows a lot of people do not retain a lot of information when they use AI to write essays for them.

A teacher I know says that the kids (except the really smart ones) use ChatGPT for everything and don't give the impression that they even read the output beyond a cursory look to make sure it's in the general ballpark of answering the question, so this shouldn't be too hard.

I used ChatGPT once to do a required writing task that I thought was useless and didn't want to do. I did edit it for some semblance of accuracy, but I didn't exactly read it, nor do I remember what it said. If I'd thought it mattered or was a useful thing to do, I would have written it myself; I like writing essays, including college essays.