This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.
Everyone Is Cheating Their Way Through College (NYMag)
Article describing what was predictably coming to college campuses since GPT-3 got released. The narration follows some particularly annoying Korean-American student trying to make a quick buck from LLM-cheating start-ups and a rather dumb girl who can't follow basic reasoning, which makes the read a bit aggravating and amusing, but overall the arc is not surprising. Recommended for a quick read. Basically all the grunt work of writing essays, and the intro-level classes with lots of rote assignments, seem to be totally destroyed by cheap and easy high-quality LLM output.
Some interesting highlights for me:
I saw this article and was saving it to write an effort post, and now you beat me to it. A shame, but I guess I should put the outline to use anyway.
My intent would have been to use this article to highlight my concern about the AI revolution, and share my perspective on a topic I've never really gone into.
I am on record as being a skeptic / doubter on AI singularity fears (or hopes). I broadly think the 'winner of AI is the winner of all' thesis is overstated, due to the other dynamics required for such a monopoly of power/influence to occur. I think other technology dynamics matter more in different ways- for example, I think the drone revolution matters more than the AI revolution for shaping geopolitical contexts in the decades to come. I think that AI technologies under human control are more likely to do something irrevocably stupid than AI-controlled technologies are to decide to paperclip everything while somehow having the unique ability to compel all other AI to align with that.
I do think it's fine to characterize AI as a significant disruptive technology, even if I think the inherent limits of LLMs are more relevant to certain fields (especially anything novel/emerging without substantial successful learning material) than is commonly appreciated. Something doesn't have to be world-ending to be a major disruption. I just think it's one of many, many major disruptions in the decade to come, and not even necessarily the worst. (Though disruptions do compound.)
What scares me isn't the AI singularity, but the AI-educated youth.
Specifically, I fear for- and fear from- people who might otherwise have learned critical thinking skills: how to not only search for answers, but organize and retain answers to things they didn't know at the start. The examples in the article covered people using AI not only in lieu of finding a solution, but in lieu of even knowing what the solution was. (The students who didn't know their own essay's response.) I don't think AI is bad for students because the answers AI provides are bad, necessarily. Getting an answer from AI isn't that different from getting an answer from the first few pages of a google search. (Even before they were the same thing.) It's more that if you don't even know how to do a good tailored search, or you don't know where other alternative answers are, you can't even compare that result. And if you're not retaining the solution- if you don't understand 'why' the solution is correct- the student is missing the opportunity. What's the point of passing a test if you, the student, haven't learned?
And I think the process of learning is important. In fact, I think learning the process of learning is among the most important things to learn at all. How to find an answer you don't know. How to distinguish good answers from bad answers. How to detect and distinguish bias from error from manipulation. How to generate a new solution to a complex problem when there isn't a proven solution at hand, or if the old solutions aren't accessible because [reasons]. And finally, how to both organize and communicate that in a way that other people can use. 'Knowing' a lot is not enough. 'Communicating' it can be just as important. All of these are skills that have to be practiced to be developed.
AI can compromise critical thinking and skill development. AI can compromise learning how to look for answers. AI can compromise how to retain the answers. AI can compromise the ability of people to respond to unclear situations with incomplete information or no baselines. AI can compromise the ability of people to convey their ideas to other people.
I had a great big screed on how I think AI is ruining youth... and then I looked back to that first mention of google, and asked myself 'what is so different?'
I grew up in an era where the pre-AI internet promised unparalleled information access. An era where seemingly infinite libraries of fiction (fan or otherwise) were open to anyone with an internet connection, with more to read than a lifetime of book purchases. Access to other people's opinions would break people out of their small-minded closed worlds. The truth was out there, and the internet would help you reach it. In one of the earlier versions of Civilization, the Internet was a world wonder, and would give the civilization that developed it first (eventual) access to any technology that at least two other states knew.
But I also grew up in an era where people bemoaned that google was ruining the ability of people to find anything not on the internet. Documents that were never digitized, people who never wrote down their thoughts, the subtext that comes from investigating things in person rather than from a distance. You can think you know how hilly a hike is from reading about it, but a picture of it is worth a thousand words, and actually hiking it yourself in the heat and humidity while carrying dozens of pounds of equipment is something else. It's hard to capture the sublime beauty of nature, and thus understand why people would value nature preservation for its own sake, if you don't go out to it.
(Then again, I did go into it. I also didn't like it. My sympathies were never exactly with anti-industrial environmentalism after that.)
And it's not like the pre-AI google-internet wasn't directly facilitating cheating. Who here was ever introduced to SparkNotes? The best friend of anyone who didn't want to actually do the required reading, but still needed a talking point or essay about a famous book. It advertises itself as a 'study guide' site these days. It condensed hundreds of pages into a few small pages of summary, and that was Good Enough.
Similar points could be made about cheating. I remember when facebook was not only young, but mostly a college student thing. And I remember how schools wrestled with students sharing answer sheets to quizzes, past essays, and so on. Even if I didn't partake, I know people did. Were they getting substantially more critical thinking skills than the modern AI exploiter just because their cheating methods were a bit more taxing on time or effort?
Maybe. But then, what's so different between the pre-AI/post-internet student cheating, and the pre-internet student cheating?
Were cheating circles any less of a thing in eras where colleges had notorious stories of famous historical figures basically fooling around until last-minute cramming? Were those cramming sessions really imparting the value of critical thinking not only to the Great Figures of History, but their less memorable peers?
Or take information. If you're getting all your politics from AI, that's pretty dumb. But then, I remember when it was (and still is) a common expression of contempt to dismiss people who watched [bad political TV station], or read [biased partisan newspaper], or listened to [objectionable radio figure] rather than the other alternatives.
But were the people tuning into [good political TV station] exercising any more critical thinking by listening to the 'correct' opinion shows? Or was it just 'my noble voters know I speak truth through their own critical thinking, yours are misled by propaganda that critical thinking would negate'? Were radio listeners decades prior any less mono-tuned for having even fewer alternative stations to listen to? Were regional or municipal newspapers any less partisan when there was less competition outside the influence of political machines? Were their readers any more objective critical thinkers when there were fewer easy alternative options?
Has there ever been a golden age of critical thinkers, schooled to think well, untainted by the technology of its era, or the character of its students?
Or has critical thinking been consistent across history, with most students of any era doing the least possible to get through any required courses, and missing the point along the way?
And- by implication- some minority of critical thinkers existing and emerging regardless of the excuses of the era? And often out-competing their contemporaries by the advantages that come with critical thinking?
The more I think of it, the more convinced I am of the latter. Most people in history wouldn't have been great critical thinkers if only they had access to more or even better information. They'd still have taken the easiest way to meet the immediate social pressure. Similarly, I doubt that the Great Critical Thinkers of History would have been ruined by AI. Not as a class, at least. They already had their alternative off-ramps, and didn't take them.
Critical thinking can always be encouraged, but never forced. The people who pursue it are the sort of people who are naturally inclined to question, to think, or to recognize the value of critical thinking in a competitive or personal sense. The people who actually do so... they were always a minority. They will probably always be a minority.
So on reflection, my fear about bad students isn't really warranted by AI. There has always been [something degrading critical thinking] that the learners of the era could defer to, or cheat with. If I'd been born generations earlier, I'd have had an equivalent instinct 'warranted' by something else. My fear is/was more about the idea of 'losing' something- an expectation of the critical thinking of others- that probably never existed.
Realizing that made me fear the effects of AI a bit less. As silly as it sounds to put my updated prior this way, and the silliness is the point here, there was no golden age of critical thinking and enlightened education that just so happened to coincide with my own maturing. Just as [current year] wasn't the first time in human history that moralistic college students felt ideal social morality was obviously achievable, a downgrade in critical thinking didn't start after I left college either.
So when I read that article about the South Korean kid who viewed the Ivy League not as a chance to learn in an environment of unparalleled access to quality minds and material, but as a chance to meet his wife and the co-founder of some company, I shouldn't- and don't- despair. Instead, I shrug. As it was before, so it shall be again.
Two centuries ago, his mindset would have been right at home in his home country. He would probably only have cared about the material the nominally-meritocratic gwageo civil service exams assessed (including classical literature) to the degree it let him out-compete other would-be competitors and join the yangban, a relatively comfortable aristocratic social class. If he had the ability to cheat at the civil service exam and get away with it, I imagine he would have.
I doubt the social sanctity of meritocratic exams would have bothered him any more than the espoused value of critical thinking in a progressive academic institution does.
What is so different?
Google never just gives you the answers; instead, it helps you find human-written (at least on the old google) articles and resources that may be useful. And depending on the quality of the official course materials, the googled sites may be surprisingly useless.
Yes, cheating has always existed, and if you really want to cheat, instead of using google to find the answers, you can also find the older kid who saved all the homework from last year for you to copy from.
So I think in general Google is not cheating unless explicitly banned, and very different from ChatGPT. ChatGPT just does the homework and spits out the answer.
But that's an issue of trust - idk how old you are, but there were conversations like this in the early oughts about search engines dumbing everyone down and removing the need to think for yourself; then a few years after that it was Wikipedia. Each time the same objections were made: it reduces the need to think for yourself, it reduces your ability to find information for yourself, and it leads to people stating inaccurate and frankly idiotic claims as fact.
But eventually people realised they couldn't trust google or Wikipedia entirely, and we developed epistemic hygiene around them. The same will happen with AI, and I know it will, because my mum - who is by no means tech savvy, or even especially research savvy - gushes about AI, but her gushes are 'I love how it gives me all the opinions up front and doesn't hide the ones the establishment doesn't like' (paraphrased) and 'it's no doctor, but it's a godsend when I need a sanity check' (paraphrased). If my mum has developed epistemic hygiene around AI, so can students, and they will.