
Culture War Roundup for the week of September 4, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I Want To Believe (in Marx's Labor Theory of Value)

Content warning: this post contains MARXISM. If seeing Marx's massive beard or even hearing his name is too traumatic for you, stop reading now.

...

Recently I found an interesting article: not interesting in itself, but in how it illustrates the arguments about the psychological necessity of faith and belief that are frequently discussed here.

Yes, it is a Marxist article written by a professional Marxist in a Marxist journal. This is your last chance to avert your eyes from the forbidden lore.

...

Yes, it is very obscure, but if a post about a civil war in the furry community can pass here, this might too.

If you are interested in how I got there, the route was: Anatoly Karlin -> devcroix -> a journal article by a distinguished academic historian -> an academic journal dedicated to Marxist theory.

Was Stalin a Marxist? And If He Was, What Does This Mean for Marxism?

(tl;dr: yes he was, it means lots of things for Marxism, none of them nice)

This is not the article I wanted to share.

This is the article.

Unfree Labour and Value Productivity: Challenges for the Marxian Labour Theory of Value, by another academic, not yet distinguished enough to have his own Wiki page.

So what is it all about?

The labor theory of value (LTV) is the cornerstone of Marxist thought. If the LTV fails, the whole of Marxism crashes to the ground.

Narrator voice: it failed; it was debunked many times, starting in the 1890s. Somehow, this had no effect on the world-historical events of the 20th century.

So, what exactly is this article about?

This paper explores the question: does unfree labour produce value?

According to Big Beard Man's theory, it does not. (Practical Marxists later strongly disagreed, but that is not the topic of this article.)

Since the direct purpose and the actual product of capitalist production is surplus value, only such labour is productive [...] as directly produces surplus value.

But why is that (other than that Marx said so)? What is the distinction between wage labor and slave labor, between slave labor and animal labor, between animal labor and machine labor?

The author examines these distinctions and finds them rather arbitrary.

No need to read 40 pages of Marxspeak (I haven't read it all either); this table summarizes the argument and the dilemma.

there is no theory-internal logical barrier to believing that wage labourers do produce value but unfree human labourers do not, that human slaves produce value but animal slaves do not, or that animals produce value but machines do not. All of these options lie within the space of open possibilities.

So, the Marxist author of this article deboonks the cornerstone of Marxist philosophy and watches the whole edifice tumble down into its own footprint like the towers on 9/11.

This has been done many times before, and it is not what makes this article important. The importance lies in its last sentences.

At times, Marx is adamant that wage-labour is an absolute sine qua non for the creation of surplus-value, and I have a hunch that this is the view he should stick with

(long Marx quote)

But I do not know how to affirm this tenet except as an article of faith.

It is not about materialism and science; it is about faith.

The author still has faith, still needs to believe, still wants to "stick with Marx", still wants to "affirm" the tenet he has just destroyed, still considers himself a Marxist, and desperately begs his fellow professional Marxists to help him (they are the only people who would ever read this journal; I am possibly the first non-Marxist to stumble on this article).

This is completely natural human behavior. The rationalist credo "That which can be destroyed by the truth should be" is deeply abnormal for human beings.

Are you laughing at him? This is exactly the same thing as all the people who wistfully wish "if only I had faith in God" or "if only I could belong to the Church".

The rationalist credo "That which can be destroyed by the truth should be" is deeply abnormal for human beings.

I am a very weird human being. When I first read this stuff on LessWrong as a teenager I remember being very annoyed by how smug they seemed about "hey, breaking news, you should believe true things and not false things." As an adult, yeah, fine, if anything they were understating it. I'm not pretending to be some kind of rational agent, but I don't explicitly come out and try to believe false things, what the fuck?

But I still don't understand how people can do this, and it frightens me that it's not even uncommon. They still know, on the inside, that it's not true, right?? There's no Men-in-Black neuralyzer that comes along if you pretend you don't know it for long enough... right?? Why does he want to believe something that isn't true?

I am a very weird human being. When I first read this stuff on LessWrong as a teenager I remember being very annoyed by how smug they seemed about "hey, breaking news, you should believe true things and not false things."

I have a similar peeve, but for exactly the opposite reason: this whole credo is obvious bullshit. Even rationalists like Yudkowsky do not really practice it; take, for example, his annoyance at the Roko's Basilisk idea leaking, or his secrecy around the methods by which he talks his way out of the box while pretending to be an AI. Why do that? Just set the truth free. If it destroys countries or even the whole of humanity, then it should be destroyed, right? The cold truth is defined as the highest value, so what is the problem?

Anyway, there are many ways one can save a "belief in untruth". One way is to defer to an authority: I cannot evaluate whether the Many Worlds or the Spontaneous Collapse interpretation of quantum mechanics is true, but I think expert A is trustworthy, so I take his word for it. In a way, though, this is a belief about the expert and not a belief about the thing itself, so it is cheating a bit. Another way is epistemic humility: Scott Alexander himself once remarked on how effectively he could argue untrue points to people with less knowledge, and his takeaway was to be much more skeptical of his own views, since he could have been misled the same way. Ironically, rationalists themselves accept this premise; their whole shtick is how an AI can lie to reach its goals. A similar idea is expressed in yet another rationalist quip, "it all adds up to normality", which basically urges you to be skeptical of conclusions that are too "weird" and to stick with your intuition a bit, even if the evidence seems strong.

Now, given the utilitarianism of rationalists, I do not trust them at all; there is nothing preventing them from lying to me to reach their goals of maximizing utils or whatever. In fact, they are quite upfront about this. The third way is right there in the rationalist idea of Bayesian thinking. All it takes for me to defend any belief is to set my prior to a very low value, so that it is incredibly hard for it to be flipped within my lifetime. And I can still signal my sophistication: my credence in idea X shifted a bit in light of new circumstances and I have recalculated my posterior, but I still find it unlikely that X is true. That is, unless Scott Alexander or Yudkowsky or the other gurus of the rationalist faith say otherwise; then my posterior will shift dramatically.
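To make the prior trick concrete, here is a minimal sketch in Python (my own illustration; every number is made up). Bayes' rule in odds form says the posterior odds are the prior odds times the likelihood ratio raised to the number of observations, so with a conveniently extreme prior, even a long run of strong evidence leaves the posterior low enough to keep saying "still unlikely":

```python
# Minimal sketch (illustrative numbers only): Bayes' rule in odds form,
# posterior_odds = prior_odds * likelihood_ratio ** n_observations.

def odds_to_prob(odds: float) -> float:
    """Convert odds to a probability."""
    return odds / (1.0 + odds)

prior_odds = 1e-9          # "one in a billion": the conveniently extreme prior
likelihood_ratio = 10.0    # each observation is 10x likelier if X is true
n_observations = 8         # eight independent pieces of strong evidence

posterior_odds = prior_odds * likelihood_ratio ** n_observations
print(f"posterior P(X) = {odds_to_prob(posterior_odds):.1%}")  # ~9.1%
# Eight strong updates later, I can still report "I find X unlikely":
# the prior did all the work.
```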

Maybe this all sounds too harsh; I do not really mind rationalism that much. One just has to treat it as yet another pretentious internet fad, as infotainment. There are very useful things I learned from it, and for that I am thankful. But I would advise dropping the guru-sounding shit, and the weird stuff like defending the value of insect life or polyamory, and the rest of the awfully convenient overlap between supposedly cold rationalists and the hippie/techbro Silicon Valley culture and ethos. Sometimes I think I am not harsh enough: listening to Yudkowsky lately, I would not be surprised if he founded some Unabomber-style cult set on bombing datacenters to prevent the AI apocalypse, which would be the logical step if they really believed in the Truth of the apocalypse so firmly and unshakenly. So there is that.

Anyway, there are many ways one can save a "belief in untruth". One way is to defer to an authority: I cannot evaluate whether the Many Worlds or the Spontaneous Collapse interpretation of quantum mechanics is true, but I think expert A is trustworthy, so I take his word for it. In a way, though, this is a belief about the expert and not a belief about the thing itself, so it is cheating a bit. Another way is epistemic humility: Scott Alexander himself once remarked on how effectively he could argue untrue points to people with less knowledge, and his takeaway was to be much more skeptical of his own views, since he could have been misled the same way. Ironically, rationalists themselves accept this premise; their whole shtick is how an AI can lie to reach its goals. A similar idea is expressed in yet another rationalist quip, "it all adds up to normality", which basically urges you to be skeptical of conclusions that are too "weird" and to stick with your intuition a bit, even if the evidence seems strong.

I think the much more rational approach is to be clear with yourself about where your beliefs come from, especially if they didn't come from your own investigations. If I'm convinced that Ukraine is doing well on the counteroffensive, that's fine, as long as I'm clear with myself, and with the others I'm trying to convince, that I'm basing this on news reports and not on defense analysis, or talking to people on the ground, or even talking to people in the military. Personally, I think this is almost as important as the idea of putting some sort of certainty quantification on statements.

I believe in “many worlds” at about 60% certainty, but I'll be honest that my understanding comes from listening to science communicators explain it on TV. Putting it that way at least gives you (and me, if I'm willing to be honest about the implications) a baseline for knowing just how seriously to take my claims based on this belief.
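As a toy illustration of that practice (my own sketch, not something proposed in the thread; the 70% figure for the Ukraine example is invented), one could keep the credence and the kind of source glued to the belief itself, so neither gets reported without the other:

```python
# Toy sketch: store a belief's credence together with its evidential
# basis, so "60%, from pop-science TV" is never reported as bare fact.
from dataclasses import dataclass

@dataclass
class Belief:
    claim: str
    credence: float  # subjective probability in [0, 1]
    basis: str       # where the belief actually comes from

    def report(self) -> str:
        return f"{self.claim}: ~{self.credence:.0%} (basis: {self.basis})"

beliefs = [
    Belief("many-worlds interpretation is correct", 0.60, "science communicators on TV"),
    Belief("Ukraine is doing well on the counteroffensive", 0.70, "news reports"),  # 0.70 is a made-up number
]
for b in beliefs:
    print(b.report())
```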

I think all this language, about how one is a rationalist but should also put numerical credences (ideally down to decimals) on one's beliefs, and should be careful about the context and source of one's views, and how in the end it should all feel "normal", is just the usual way rationalists say a simple thing everybody knows in a complicated way. Man, practice some source hygiene, work on your thinking, and trust your intuition a bit. On most beliefs you will end up in line with the majority of informed people.

Paradoxically, it is always on the weird shit that rationalists tout their supposedly revolutionary first-principles approach and espouse mantras like "that which can be destroyed by the truth should be". I am talking about things like saving ants, or taking drugs, or defending some sexual deviancy, or some other weird thing that nerds really want to rationalize. Guess what: my intuition screams "red alert".

In areas where you are not a specialist, your information will be no different from any other well-informed person's. That's sort of the point. A lot of rationalists seem to take their information from blogs and videos and so on, and that's fine. For most purposes, cosmology as explained by popular science communicators is just fine. It becomes a problem when you take that geeky layman's understanding and pretend it's more than it is. It leads to a kind of arrogance where you assume you know all the relevant details without doing a deep enough dive to really know what's going on.

Likewise, while I don't think it's necessary to go down to decimals of certainty, I do think it's important, especially when reasoning about things, to have some idea of just how sure you are about a given conjecture. If you're not pretty darn sure, it shouldn't be the lynchpin of that argument or prediction. If you're pretty sure, fine, give it weight, but I'd never advise making a major decision based on something that you're less than 80% confident in.
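A quick worked example of the lynchpin point (numbers mine, not the commenter's): if a conclusion needs every one of its premises to hold, then, assuming the premises are independent, your confidence in the conclusion is at most the product of your confidences in the premises, so a single shaky premise caps the whole argument:

```python
# Sketch: a conclusion that requires all premises is at most as
# confident as the product of their confidences (assuming independence).
from math import prod

def chained_confidence(premise_confidences: list[float]) -> float:
    """Upper bound on confidence in a conclusion requiring all premises."""
    return prod(premise_confidences)

print(f"{chained_confidence([0.80, 0.80, 0.80]):.0%}")  # 51%: three 'pretty sure' steps compound
print(f"{chained_confidence([0.95, 0.95, 0.60]):.0%}")  # 54%: one weak lynchpin caps the argument
```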