Culture War Roundup for the week of March 18, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Slow news day? Guess I'll ramble for a bit.

Scientists shamelessly copy and paste ChatGPT output into a peer-reviewed journal article, like seriously they're not even subtle about it:

Introduction

Certainly, here is a possible introduction for your topic:Lithium-metal batteries are promising candidates for high-energy-density rechargeable batteries due to their low electrode potentials and high theoretical capacities [1], [2]. However, during the cycle, dendrites forming on the lithium metal anode can cause a short circuit, which can affect the safety and life of the battery [3], [4], [5], [6], [7], [8], [9].

This is far from an isolated incident - a simple search of Google Scholar for the string "certainly, here is" returns many results. And that certainly isn't going to catch all the papers that have been LLM'd.

This raises the obvious question as to why I would bother reading your paper in the first place if non-trivial sections of it were written by an LLM. How can I trust that the rest of it wasn't written by an LLM? Why don't I cut out the middle man and just ask ChatGPT directly what it thinks about lithium-metal batteries and three-dimensional porous mesh structures?

All this comes fresh on the heels of YouTube announcing that creators must now flag AI-generated content in cases where omitting the label could be viewed as deceptive, because "it would be a shame if we (YouTube) weren't in compliance with the new EU AI regulations", to which the collective response on Hacker News was "lmao okay, fair point. It would be a shame if we just lied about it."

It would be very boring to talk about how this represents a terminal decline in standards and the fall of the West, and how back in my day things were better and people actually took pride in their work, and how this is probably all part of the same vast conspiracy that's causing DEI and worse service at restaurants and $30 burgers at Five Guys. Well, of course people are going to be lazy and incompetent if you give them the opportunity. I'm lazy and incompetent too. I know what it feels like from the inside.

A more interesting theoretical question would be: are people always lazy and incompetent at the same rate, across all times and places? Or is it possible to organize society and culture in such a way that people are less likely to reach for the lazy option of copying and pasting ChatGPT output into their peer-reviewed journal articles, either because structural incentives are no longer aligned that way, or because it offends their own internal sense of moral decency?

You're always going to have a large swath of people below the true Platonic ideal of a 100 IQ individual, save large-scale genetic engineering. That's just how it goes. Laziness I'm not so sure about - it seems like it might be easier to find historical examples of it varying drastically across cultures. Like, the whole idea of the American Revolution is always something that blew my mind. Was it really all about taxes? That sounds like the very-slightly-true sort of myth they teach you in elementary school that turns out to be not-actually-true-at-all. Do we have any historians who can comment? Because if it was all about taxes, then isn't that really wild? Imagine having such a stick up your ass about tax hikes that you start a whole damn revolution over it. Those were not "lazy" men, that's for sure. That seems like the sort of thing that could only be explained if the population had vastly different genetics compared to contemporary America, or a vastly different culture; unless there are "material conditions" that I'm simply not appreciating here.

Speaking of material conditions, Brian Leiter recently posted this:

"Sociological idealism" was Charles Mills's term for one kind of critique of ideology in Marx, namely, a critique of views that, incorrectly, treat ideas as the primary cause of events in the socio-economic world. Marx's target was the Left Young Hegelians (whose heirs in the legal academy were the flash-in-the-pan Critical Legal Studies), but the critique extends much more widely: every day, the newspapers and social media are full of pontificating that assumes that ideas are driving events. Marx was always interested in the question why ideologists make the mistakes they do.

Marx's view, as far as I can tell, was that ideas (including cultural values and moral guidelines) should be viewed as causally inert epiphenomena of the material and economic processes that were actually the driving forces behind social change. I don't know where he actually argues for this in his vast corpus, and I've never heard a Marxist articulate a convincing argument for it - it seems like they might just be assuming it (but if anyone does have a citation for it in Marx, I would appreciate it!).

If Marx is right, then the project of trying to reshape culture so as to make people less likely to copy and paste ChatGPT output into their peer-reviewed journal articles (I keep repeating the whole phrase to really drive it home) would founder, because we would be improperly ascribing the cause of the behavior to abstract ideas when we should be ascribing it to material conditions. Which then raises the question of what material conditions make people accepting of AI output in the first place, and how those conditions might be different.

Weirdly, I think ChatGPT could make papers better.

Let's be honest: prior to ChatGPT, most papers were still total garbage. Even if they had useful things to say (which most didn't), the need to write in some sort of garbled academic-ese made them a chore to read at best.

There's a comic where one person says "Wow, with ChatGPT I can turn a list of bullet points into a whole email," and the person on the other end says "Wow, with ChatGPT I can turn a whole email into a list of bullet points."

If academics were serious about spreading knowledge, papers would either be presented in a few short pages, or in the style of a textbook, trying to explain complicated material to a reader with incomplete background knowledge. In the past, many papers were actually quite short. Nowadays, no one is enough of a Chad to submit a short paper. They have to pad it out with a bunch of bullshit nobody reads. If they deliberately make simple concepts sound complicated, all the better.

Why not have ChatGPT do all that, and then the reader can use ChatGPT to know the correct parts to ignore?

Maybe I just suck at introspection, but I honestly don't think my papers are any more complicated than they have to be. I'll cop to some pro forma filler, but the introductory filler actually would be useful to someone who has some general domain knowledge but isn't well versed in the specific area. The discussions suck and probably are a waste of time (more study is required indeed). Nothing is deliberately confusing, though, and I don't think the introduction, methods, or results would be unintelligible to a layman with a passing understanding of the field.

You're one of the good ones!

If you're trying to be understood, a good rule of thumb is this: Dumb it down further than you think you need to. Pretty much everyone overestimates the intelligence/patience/contextual awareness of their reader.

Or as we say in computer land, it's easier to write code than to read it.

I can understand, however, that this can go against the need of the academic to sound intelligent. But it seems like you aren't motivated by that. Anecdotally, I think your writing on themotte is very clear.

I think a lot of this is variance across disciplines. I was an immunologist, and my impression was that the field wasn't generally overrun with bad writing, or at least not the kind of bad writing that I associate with obscurantism. I just went back and tried to take a fresh-eyed look at my most cited paper (which is now old enough that it is almost fresh to read it again), and the thing that would probably be worst for someone outside the field is the alphabet-soup nature of cytokine nomenclature. I don't think there's anything to be done about that, though; there really just are a lot of cytokines that have conflicting roles in different contexts, differential regulation that's tricky to understand, and names that all kind of sound the same if they're not your old pals.

Other fields trend to either side of this. If I go pick up a physics paper, I'm in over my head pretty quickly (although not if I go to the Nature Physics website where I'm met with titles like Racial equity in physics education research and Towards meaningful diversity, equity and inclusion in physics learning environments on the home page). This isn't because of the authors putting on a show though, their material really is complex and requires a fair bit of background knowledge to avoid getting swamped pretty quickly. In contrast, the Journal of Sociology is silly, resulting in a more performative approach to the work, such as it is.

I'm sure someone has already done it, but something I've been bouncing around a bit is the idea of irreducible complexity in different thought domains. Some things are complex simply because they really are complex; there just isn't any simple way to understand them that doesn't become lossy. Other things really aren't all that complex, but the people in the profession both benefit from complexity and personally enjoy adding it on (much of law seems this way to me when I look at arguments). This shouldn't be read as saying that people in these fields are stupid - unfortunately, it's quite the opposite: they're clever enough to add many layers of complexity to something that should be intelligible to anyone who's interested.

people in the profession both benefit from complexity and personally enjoy adding it

This is an accurate description of software development for the past 10 years.