
Culture War Roundup for the week of March 18, 2024

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


This raises the obvious question as to why I would bother reading your paper in the first place if non-trivial sections of it were written by an LLM. How can I trust that the rest of it wasn't written by an LLM?

Presumably because the paper includes an experiment and experimental results that are presented accurately, and allow you to learn something new about the field.

I mean, seriously. It's idiotic that a scientific career is gated behind having to write formulaic papers like this. A sane world would have the people who are good at devising and running novel and useful experiments do that, not spend half their time writing summaries (to say nothing of grants and lectures).

The numbers are the useful thing in the paper; if the experimental method and results are presented accurately, then who cares whether the intro was written by an LLM? This is one of the few cases where tech like that could solve an actual problem we have, of scientific careers being gated behind being a competent and prolific writer.

Of course, the fact that prompt-related text was left in may signal a level of incompetence or rushing that casts doubt on the quality of the actual science, and that's a fair worry. But if that's not the case, then great, don't waste scientists' time on writing.

Why would I believe that a paper that starts with a generated introduction had a real experiment behind it, and that the results section was not also generated by an LLM?

The only thing keeping science honest is the replication of experiments. If it is very cheap to describe and publish experiments that never happened, but costly to run a real experiment to verify them, why would anyone try to replicate any random experiment they read about?

Unless someone comes up with a way to reorganize Science (or the eschaton is immanentized), I think the medium-term equilibrium is going to involve even more weight given to academic credence-maintaining networks of reputation, and less weight given to traditional science (publishing results and judging publications on the merits of their results).

Of course, the fact that prompt-related text was left in may signal a level of incompetence or rushing that casts doubt on the quality of the actual science, and that's a fair worry.

There are few jobs that don't have some amount of admin work associated with them. Generally you have to communicate about what you're doing with other people, and communication requires words.

Officially my job is to write code, but over half of an average day for me is spent writing emails and summaries and talking on the phone with people, because I need to talk about what I've been doing, what still needs to be done, and what the best way to get it done is.

Science can't progress if people just churn out experiments in silence and dump out big tables of numbers. If you want to, say, argue that the balance of available evidence points towards dark matter theories instead of MOND, or if you want to argue that string theory is no longer a viable research program, then you need to use words. There's no way around it. Even if you do have someone who's siloed away from the administrative processes as much as possible, they still need to communicate using words at some point.

Could you just put the content of your paper/argument in bullet point format and feed it into an LLM to clean it up and make it sound nice? That wouldn't be the worst thing in the world, but it would depend heavily on the specifics of each individual case. Almost all of the actual content would have to already be present in the input you give to the LLM, which means you're still going to be writing a lot of words yourself. If the LLM does a non-trivial amount of thinking for you, then it raises questions of plagiarism and academic dishonesty.
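For what it's worth, the mechanical part of that workflow is already trivial. Here's a minimal sketch of what it might look like, assuming the OpenAI Python SDK; the model name, prompt, and bullet points are purely illustrative, and all of the actual content still has to come from the author:

```python
# Illustrative sketch only: polish author-supplied bullet points into prose with an LLM,
# using the OpenAI Python SDK (openai>=1.0). The bullets below are placeholders;
# every substantive claim must already be in them, the model only rewords.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

bullets = """
- ran experiment X under conditions A and B, three repetitions each
- effect observed under A but not under B
- effect size and uncertainty reported in Table 1
"""

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the author's bullet points as a clear, formal paragraph. "
                "Do not add any claims that are not in the bullets."
            ),
        },
        {"role": "user", "content": bullets},
    ],
)

print(response.choices[0].message.content)
```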

an actual problem we have, of scientific careers being gated behind being a competent and prolific writer.

I can't see how this is an actual problem.

It's hard to imagine a competent scientist who is somehow so bad at writing that he can't clear the bar for your typical academic science journal, because verbal ability is highly correlated with IQ in general.

As for "prolific", that seems like even less of an issue, because the limiting factor in how many journal articles a scientist can publish is definitely not the amount of time it takes to write the words.

Presumably because the paper includes an experiment and experimental results that are presented accurately, and allow you to learn something new about the field.

There are a whole lot of important and insightful scientific papers (in the hard sciences) that don't deal with experiments at all. E.g. this seminal paper that forms the basis of the entire field of digital signal processing and of all modern long-distance (and many not-so-long-distance) communications.

When I've had to read papers for my studies or career (some hundreds of them), only a small minority have dealt with experiments, and almost none with experiments that would have been feasible to reproduce without a major investment of time and/or resources.

Consider also the vast majority of theoretical papers that were published but that you never read. Why do people read the seminal papers while the vast majority of other published papers lie forgotten? Usually the papers that become seminal have something special that makes them useful and applicable in practice, and that applicability is discovered by testing against reality. In the experimental sciences, the testing against reality comes from running and reporting formal experiments, or sometimes from explaining past observations and experiments. In engineering, people might not bother reporting experiments, but they integrate the useful results and principles into their products (which usually must function in physical reality). In pure theory land, mathematical proofs take the place of experiments (they are very difficult to come up with, and often difficult to verify).