
Culture War Roundup for the week of March 18, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


Slow news day? Guess I'll ramble for a bit.

Scientists shamelessly copy and paste ChatGPT output into a peer-reviewed journal article, like seriously they're not even subtle about it:

Introduction

Certainly, here is a possible introduction for your topic:Lithium-metal batteries are promising candidates for high-energy-density rechargeable batteries due to their low electrode potentials and high theoretical capacities [1], [2]. However, during the cycle, dendrites forming on the lithium metal anode can cause a short circuit, which can affect the safety and life of the battery [3], [4], [5], [6], [7], [8], [9].

This is far from an isolated incident - a simple search of Google Scholar for the string "certainly, here is" returns many results. And that certainly isn't going to catch all the papers that have been LLM'd.
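The search described above amounts to simple substring matching. As a rough sketch of what such a screen looks like, here is a minimal checker; the phrase list is illustrative only (any real screening effort would need a much broader set plus manual review, since these strings can occasionally appear in honestly written prose):

```python
# Boilerplate phrases that LLM chat interfaces often prepend to answers.
# Illustrative list, not exhaustive.
LLM_TELLS = [
    "certainly, here is",
    "as an ai language model",
    "as of my last knowledge update",
]

def flag_llm_boilerplate(text: str) -> list[str]:
    """Return the telltale phrases found in `text` (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in LLM_TELLS if phrase in lowered]

abstract = ("Certainly, here is a possible introduction for your topic: "
            "Lithium-metal batteries are promising candidates...")
print(flag_llm_boilerplate(abstract))  # ['certainly, here is']
```

Of course, this only catches the laziest cases, which is exactly the point: even the laziest possible check turns up hits.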

This raises the obvious question as to why I would bother reading your paper in the first place if non-trivial sections of it were written by an LLM. How can I trust that the rest of it wasn't written by an LLM? Why don't I cut out the middle man and just ask ChatGPT directly what it thinks about lithium-metal batteries and three-dimensional porous mesh structures?

All this comes fresh on the heels of YouTube announcing that creators must now flag AI-generated content in cases where omitting the label could be viewed as deceptive, because "it would be a shame if we (YouTube) weren't in compliance with the new EU AI regulations", to which the collective response on Hacker News was "lmao okay, fair point. It would be a shame if we just lied about it."

It would be very boring to talk about how this represents a terminal decline in standards and the fall of the West, and how back in my day things were better and people actually took pride in their work, and how this is probably all part of the same vast conspiracy that's causing DEI and worse service at restaurants and $30 burgers at Five Guys. Well of course people are going to be lazy and incompetent if you give them the opportunity. I'm lazy and incompetent too. I know what it feels like from the inside.

A more interesting theoretical question would be: are people always lazy and incompetent at the same rate, across all times and places? Or is it possible to organize society and culture in such a way that people are less likely to reach for the lazy option of copying and pasting ChatGPT output into their peer-reviewed journal articles, either because structural incentives are no longer aligned that way, or because it offends their own internal sense of moral decency?

You're always going to have a large swath of people below the true Platonic ideal of a 100 IQ individual, barring large-scale genetic engineering. That's just how it goes. Laziness I'm not so sure about - it seems like it might be easier to find historical examples of it varying drastically across cultures. Like, the whole idea of the American Revolution is always something that blew my mind. Was it really all about taxes? That sounds like the very-slightly-true sort of myth they teach you in elementary school that turns out to be not-actually-true-at-all. Do we have any historians who can comment? Because if it was all about taxes, then isn't that really wild? Imagine having such a stick up your ass about tax hikes that you start a whole damn revolution over it. Those were not "lazy" men, that's for sure. That seems like the sort of thing that could only be explained if the population had vastly different genetics compared to contemporary America, or a vastly different culture; unless there are "material conditions" that I'm simply not appreciating here.

Speaking of material conditions, Brian Leiter recently posted this:

"Sociological idealism" was Charles Mills's term for one kind of critique of ideology in Marx, namely, a critique of views that, incorrectly, treat ideas as the primary cause of events in the socio-economic world. Marx's target was the Left Young Hegelians (whose heirs in the legal academy were the flash-in-the-pan Critical Legal Studies), but the critique extends much more widely: every day, the newspapers and social media are full of pontificating that assumes that ideas are driving events. Marx was always interested in the question why ideologists make the mistakes they do.

Marx's view, as far as I can tell, was that ideas (including cultural values and moral guidelines) should be viewed as causally inert epiphenomena of the physical material and economic processes that were actually the driving forces behind social change. I don't know where he actually argues for this in his vast corpus, and I've never heard a Marxist articulate a convincing argument for it - it seems like they might just be assuming it (but if anyone does have a citation for it in Marx I would appreciate it!).

If Marx is right, then the project of trying to reshape culture so as to make people less likely to copy and paste ChatGPT output into their peer-reviewed journal articles (I keep repeating the whole phrase to really drive it home) would founder, because we would be improperly ascribing the cause of the behavior to abstract ideas when we should be ascribing it to material conditions. Which then raises the question of what material conditions make people accepting of AI output in the first place, and how those conditions might be different.

Weirdly, I think ChatGPT could make papers better.

Let's be honest, prior to ChatGPT, most papers were still total garbage. Even if they had useful things to say (which most didn't), the need to write in some sort of garbled academic-ese made them a chore to read at best.

There's a comic where a person says "Wow, with ChatGPT I can turn a list of bullet points into a whole email". And the person on the other end says "Wow, with ChatGPT I can turn a whole email into a list of bullet points".

If academics were serious about spreading knowledge, papers would either be presented in a few short pages, or in the style of a textbook, trying to explain complicated information to a reader with imperfect background knowledge. In the past, many papers were actually quite short. Nowadays, no one is enough of a Chad to submit a short paper. They have to fill it out with a bunch of bullshit nobody reads. If they deliberately make simple concepts sound complicated, all the better.

Why not have ChatGPT do all that, and then the reader can use ChatGPT to know the correct parts to ignore?

Maybe I just suck at introspection, but I honestly don't think my papers are any more complicated than they have to be. I'll cop to some pro forma filler, but the introductory filler actually would be useful to someone who has some general domain knowledge but isn't well versed in the specific area. The discussions suck and probably actually are a waste of time (more study is required indeed). Nothing is deliberately confusing though, and I don't think the introduction, methods, or results would be unintelligible to a layman with a passing understanding of the field.

You're one of the good ones!

If you're trying to be understood, a good rule of thumb is this: Dumb it down further than you think you need to. Pretty much everyone overestimates the intelligence/patience/contextual awareness of their reader.

Or as we say in computer land, it's easier to write code than to read it.

I can understand, however, that this can go against the need of the academic to sound intelligent. But it seems like you aren't motivated by that. Anecdotally, I think your writing on themotte is very clear.

Do you have any specific examples in mind of academic writing that you think is needlessly complicated? Most accusations of intentional obfuscation are overblown, I think.

It’s normal for specialized fields to develop their own jargon. I’m in a few niche (non-academic) hobbies and newcomers often accuse us of intentional obfuscation. But to the experienced regulars our words are perfectly clear.

Pretty much everyone overestimates the intelligence of the reader

Academics write for fellow academics, people much like themselves with a similar educational background and usually a similar intelligence level. So they have a pretty good idea of what their readers will find clear and what they won’t.

There’s a problem today where some sub-sub-fields are so specialized that the audience of fellow specialists who are actually capable of understanding the work becomes very small, but I think that’s ultimately a separate issue.

the need of the academic to sound intelligent

This might be foreign to some people, but, using big words is fun. Reading and understanding a complex piece with lots of big words and dense references is also fun (if it’s well written to begin with of course). It’s not always a nefarious plot to bolster one’s social status. Some people just really enjoy reading and writing large amounts of complex text, and unsurprisingly those tend to be the kind of people who go into careers in academia where they get paid to do just that. So I absolutely don’t fault someone for not squeezing all his content into the smallest number of words possible. As the popular saying goes: let him cook.

Intentional obfuscation - sometimes. Far more often I observe obfuscated language caused by the authors being sloppy and/or avoiding speaking plainly when they didn't understand something.

Most common: enamored with big words yet trying to meet the journal word count limit, the authors use a big word in a way that makes the meaning of the sentence imprecise. Sometimes they have obtained a minor result, but big words are used to make it sound more important than it is. (Others will misunderstand and take the big words at face value.)

Sometimes the authors are sloppy to the extent that they understand the meaning of some concept differently than others and never bother to make it explicit. Often the difference in understanding is a genuine difference in scientific opinion, but sometimes (especially in a run-of-the-mill study) it is because the authors failed to understand something. Sometimes the authors have followed "best practices" but do not understand the arguments for the best practices, producing a slightly nonsensical approach. Sometimes authors claim to have found a $thing when they actually found $anotherthing. A mistake or misunderstanding is seldom admitted.

Sometimes the authors are sloppy in reading or understanding the previous literature: when I see a paper cited in support of a simplistic one-liner statement, these days I am never certain the cited reference supports the statement as clearly as implied. ("It is known that the system of soothing provides excellent results, thus we followed the approach of Tarr and Fether (1845)" -> go read Tarr and Fether, and there is no single coherent system of soothing described, but three, and if you ignore the discussion and look only at the results, the implications are unclear. Sometimes I suspect malice; more often I suspect laziness -- they never read Tarr and Fether, but they read something else that claimed to use the method of Tarr and Fether and misunderstood it.)

I see a lot of Science by Obfuscation. It's frustrating, because when I'm asked to review one of these papers, I don't know up front whether it's garbage or is genuinely using interesting and esoteric techniques from another area of the literature that I'm just not familiar with. The latter is a real possibility that I have to spend a lot of time ruling out. Thankfully, I've only very rarely had to throw up my hands and tell the editors that I personally can't figure out what they're on about, and that maybe someone else would be a better reviewer. Unfortunately, the vast majority of my other experience is that once I can cut through their language to figure out what they're actually doing, I realize that it's really just dumb simple under the hood, and usually they don't really have any "contribution" over what has come before.