Culture War Roundup for the week of March 18, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Slow news day? Guess I'll ramble for a bit.

Scientists shamelessly copy and paste ChatGPT output into a peer-reviewed journal article, like seriously they're not even subtle about it:

Introduction

Certainly, here is a possible introduction for your topic:Lithium-metal batteries are promising candidates for high-energy-density rechargeable batteries due to their low electrode potentials and high theoretical capacities [1], [2]. However, during the cycle, dendrites forming on the lithium metal anode can cause a short circuit, which can affect the safety and life of the battery [3], [4], [5], [6], [7], [8], [9].

This is far from an isolated incident - a simple search of Google Scholar for the string "certainly, here is" returns many results. And that certainly isn't going to catch all the papers that have been LLM'd.
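The search described above is easy to replicate locally against any batch of abstracts. A minimal sketch (the phrase list is my own illustration, not an exhaustive set of LLM tells):

```python
# Minimal sketch: flag text containing telltale LLM boilerplate phrases.
# The phrase list is illustrative only; real detection is much harder.
LLM_TELLS = [
    "certainly, here is",
    "as an ai language model",
    "i hope this helps",
    "regenerate response",
]

def find_llm_tells(text: str) -> list[str]:
    """Return the boilerplate phrases found in `text` (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in LLM_TELLS if phrase in lowered]

abstract = ("Certainly, here is a possible introduction for your topic: "
            "Lithium-metal batteries are promising candidates...")
print(find_llm_tells(abstract))  # → ['certainly, here is']
```

Of course, this only catches authors careless enough to leave the chatbot preamble in; anyone who trims it is invisible to a string match, which is the point of the "isn't going to catch all the papers" caveat.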

This raises the obvious question as to why I would bother reading your paper in the first place if non-trivial sections of it were written by an LLM. How can I trust that the rest of it wasn't written by an LLM? Why don't I cut out the middle man and just ask ChatGPT directly what it thinks about lithium-metal batteries and three-dimensional porous mesh structures?

All this comes hot on the heels of YouTube announcing that creators must now flag AI-generated content in cases where omitting the label could be viewed as deceptive, because "it would be a shame if we (YouTube) weren't in compliance with the new EU AI regulations", to which the collective response on Hacker News was "lmao okay, fair point. It would be a shame if we just lied about it."

It would be very boring to talk about how this represents a terminal decline in standards and the fall of the West, and how back in my day things were better and people actually took pride in their work, and how this is probably all part of the same vast conspiracy that's causing DEI and worse service at restaurants and $30 burgers at Five Guys. Well, of course people are going to be lazy and incompetent if you give them the opportunity. I'm lazy and incompetent too. I know what it feels like from the inside.

A more interesting theoretical question would be: are people always lazy and incompetent at the same rate, across all times and places? Or is it possible to organize society and culture in such a way that people are less likely to reach for the lazy option of copy-pasting ChatGPT output into their peer-reviewed journal articles, either because structural incentives are no longer aligned that way, or because it offends their own internal sense of moral decency?

You're always going to have a large swath of people below the true Platonic ideal of a 100 IQ individual, barring large-scale genetic engineering. That's just how it goes. Laziness I'm not so sure about - it seems like it might be easier to find historical examples of it varying drastically across cultures. Like, the whole idea of the American Revolution is always something that blew my mind. Was it really all about taxes? That sounds like the very-slightly-true sort of myth they teach you in elementary school that turns out to be not-actually-true-at-all. Do we have any historians who can comment? Because if it was all about taxes, then isn't that really wild? Imagine having such a stick up your ass about tax hikes that you start a whole damn revolution over it. Those were not "lazy" men, that's for sure. That seems like the sort of thing that could only be explained if the population had vastly different genetics compared to contemporary America, or a vastly different culture; unless there are "material conditions" that I'm simply not appreciating here.

Speaking of material conditions, Brian Leiter recently posted this:

"Sociological idealism" was Charles Mills's term for one kind of critique of ideology in Marx, namely, a critique of views that, incorrectly, treat ideas as the primary cause of events in the socio-economic world. Marx's target was the Left Young Hegelians (whose heirs in the legal academy were the flash-in-the-pan Critical Legal Studies), but the critique extends much more widely: every day, the newspapers and social media are full of pontificating that assumes that ideas are driving events. Marx was always interested in the question why ideologists make the mistakes they do.

Marx's view, as far as I can tell, was that ideas (including cultural values and moral guidelines) should be viewed as causally inert epiphenomena of the physical, material, and economic processes that were actually the driving forces behind social change. I don't know where he actually argues for this in his vast corpus, and I've never heard a Marxist articulate a convincing argument for it - it seems like they might just be assuming it (but if anyone does have a citation for it in Marx, I would appreciate it!).

If Marx is right then the project of trying to reshape culture so as to make people less likely to copy and paste ChatGPT output into their peer-reviewed journal articles (I keep repeating the whole phrase to really drive it home) would flounder, because we would be improperly ascribing the cause of the behavior to abstract ideas when we should be ascribing it to material conditions. Which then raises the question of what material conditions make people accepting of AI output in the first place, and how those conditions might be different.

Since all scientific papers are published in English despite 90% of the world’s scientists not speaking English as a native language (and even many of the 10% aren’t great writers) we should assume pretty much all ESL written work in English will be heavily LLM-generated from now on. That they forgot to delete the intro is bad, but it’s not really the same thing as, say, an author generating a book by LLM because the value - if there is any - will be in the data, not the abstract per se.

Given the already high rates of data fabrication inside but especially outside the West, I’d assign very little weight to any data from a paper where the authors, reviewers, and editors don’t even check for howlers like the ones quoted.

More broadly, speaking from the sausage factory floor, I can say that the trend in high-level publishing in the humanities increasingly seems to be towards special issues/special series where all papers are by invitation or commissioned. This creates some problems (harder for outsiders to break in, easier for ideologue editors to maintain a party line), but in general seems like an acceptable stopgap measure for wordcel fields to cover the next 5-10 year interregnum where LLM outputs are good enough to make open submission impossible, but not quite good enough to replace the best human scholars.

The thing is, those fields were already close to pure BS. That they can put off the transition from human-generated BS to machine-generated BS for a few years doesn't really matter to anyone outside the field.