
Culture War Roundup for the week of July 3, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


This isn't the place for low-effort Political Compass-tier memes. If you think this grid is actually insightful, you need to talk about it and explain what's interesting to you, not just drop a link and say "Hey lookit this!"

@ymeskhout already told you this.

This is too low-effort for the main thread. You should have either added a more detailed explanation or posted this in the Sunday thread. I'd suggest a better explanation in any case, because I have no idea what this is supposed to mean.

Didn’t you just get banned? I’m confused as to how you think this is appropriate for the CW thread, man.

It’s a shame because I think you have some original ideas, but you’ve got to abide by the rules.

Mods are asleep, quick everyone schizopost!

Seems like a lot of words to say very little. You needed 1069 words to say Australians wanted to be white until 1960?

You could have just posted the article and talked about why it’s interesting in the OP

"This will be the first in a five part series on Jewish influence on Austrlian immigration policy."

You can stop at one part, buddy.

I'm removing this comment. I am writing this reply but I don't actually know if this reply will still be visible after I remove this comment; I apologize in advance for whatever problems that creates!

The comment I am removing was recognized as uncredited copypasta by users. If the mod team had immediately recognized it as such, we might have removed the comment more quickly, as uncredited copypasta is a form of spam. But we had to verify, and there was some inter-moderator discussion going on.

I remind everyone that while our default position is not to remove comments--and certainly not to censor debate or stifle discussion of difficult topics--some users are not interested in discussion, as Zorba talks about in the latest META post. Sometimes it's hard to know who these users are. Sometimes we make bad calls as moderators. I apologize in advance if that's what I'm doing now.

But I admit, for the moment, that I doubt it. I'm also perma-banning the user for spam at least until such time as I am persuaded that they are not, in fact, just a spammer.

Although I cannot find the link (@2rafa's Google fu is apparently better than mine), this certainly looks like something copypasted from Kevin MacDonald, complete with citations of his own works.

I'll give you an opportunity to convince me otherwise or explain yourself, otherwise you are looking at a ban for bad-faith engagement. (You are allowed to link and quote people, you are not allowed to copy an entire essay from elsewhere and pretend it's yours, just to test for reactions.)

link: https://www.theoccidentalobserver.net/2018/10/01/the-war-on-white-australia-a-case-study-in-the-culture-of-critique-part-1-of-5/

Putting quotes around the entire thing for an exact match is a good way to find the source of a verbatim sentence.

Thanks. I tried the quotes thing but for some reason didn't get that hit.

From my experience working in a job where the use of the quotes feature was vital: Google will randomly and without warning place users into experimental variations of their features as a form of A/B testing. If you get placed in the "Google prioritizes words in quotes" user bucket and not the "Google demands exact string match of words in quote" bucket, your search won't turn up the exact results and you're just out of luck.

I managed to escalate this issue quite high up in Google support at one point, and the above was more or less everything they told me. It was quite stressful when I needed the exact match for my job.

Do you not get the option for verbatim mode, or does it not work for you? It's buried in there for me, but if I do a search, I get a little "Tools" button below and to the right of the search bar, and then from there I can switch from "All Results" to "Verbatim", where "Verbatim" actually respects quotes still.

If they're removing even that escape hatch I think it's time for me to find a new search engine.
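
As an aside, for anyone who'd rather not dig through the UI every time: below is a minimal sketch of building such a search URL directly. The `tbs=li:1` parameter is my assumption of how the Verbatim tool is toggled - it's undocumented and Google could change or drop it at any point, so treat this as illustrative rather than official.

```python
# Hypothetical sketch: build a Google search URL for an exact-phrase,
# verbatim query. `tbs=li:1` is an undocumented parameter that has
# historically mapped to Tools -> All Results -> Verbatim; it is an
# assumption here and may stop working without warning.
from urllib.parse import urlencode

def verbatim_search_url(phrase: str) -> str:
    """Return a search URL quoting the whole phrase, with Verbatim on."""
    params = {
        "q": f'"{phrase}"',  # surrounding quotes request an exact match
        "tbs": "li:1",       # assumed toggle for the Verbatim tool
    }
    return "https://www.google.com/search?" + urlencode(params)

# e.g. hunting down the source of a verbatim sentence:
print(verbatim_search_url("The War on White Australia"))
```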

Admittedly it's been years since my experience with it, but I don't recall that being an option at the time. Could've just missed it though. Thank you for mentioning it, it'll likely help me out in the future.


Now unlike the above, this is merely "something I read somewhere at some point" and not official, but:

I've read that it's worse than that. They've frequently messed around with the search function, and how they evaluate the changes is by how many searches a user makes. I.e., if you type in a search, immediately find what you need, and leave Google, that's bad, while if you search 4 or 5 times to get Google to finally show what you wanted, that's good.

The A/B testing is specifically trying to make the experience worse for users.

They've frequently messed around with the search function, and how they evaluate the changes is by how many searches a user makes. I.e., if you type in a search, immediately find what you need, and leave Google, that's bad, while if you search 4 or 5 times to get Google to finally show what you wanted, that's good.

I was not in search quality, but that would not match my experience at Google. The idea was to return a useful result, not to keep the user searching.

Good to know, and I'll take your word for it over random-poster-on-other-forum.


The only thing I've encountered in this vein lately is that their bot-detection algorithm seems to interpret "many searches for slightly different search terms in rapid-ish succession" as bot behaviour, resulting in a captcha -- which kind of adds insult to injury when one is trying to nudge the algo to stop serving an infinite selection of (bot-generated?) obvious clickfarm results as the first page...

Imo you should delete first and ask questions later, he’s abusing your charity with his alts. He wants eyeballs for far-right blogs, and you’re cooperating with a defectbot. I don’t understand why banning SS and hoff is even on the agenda, when this guy is constantly flooding the forum unprompted, and is just far, far worse.

We will never satisfy everyone, between those who want us to ban first and ask questions later, and those who think we should never ban anyone without absolute proof of wrongdoing.

I don’t understand why banning SS and hoff is even on the agenda

No one has suggested banning Hoff.

I thought Hlynka was too trigger-happy too (still think he was, most of the time). I don't support bans generally, but this is a clear-cut case.

You asked for proof this entire OP was quoting a blog, you got it. You asked him to explain himself, he didn't. And it appears his other post on Peterson was also copy-pasted.

He's used up all the benefit of the doubt he's entitled to. Treat him as guilty and let him appeal if it's some insane coincidence.

You asked for proof this entire OP was quoting a blog, you got it. You asked him to explain himself, he didn't. And it appears his other post on Peterson was also copy-pasted.

Dude, give us a minute. I am not going to make a snap decision ten minutes after I post the warning.

I think you should delete the comment. It's a bad look, it takes up space in the thread, and it's a clear troll; leaving it up signals that people who violate the rules blatantly and repeatedly can have their trolling stay up.

I'd likewise prefer to see this thread deleted and the poster banned. Copy-pasting someone else's essays is low-effort by definition, and they're clearly making a habit of it. Leaving the post up gives them a limited win, and I see no benefit to allowing them to see their strategy rewarded.

As always, though, I defer to the judgement of the mods.

Just copy-pasting articles from Kevin MacDonald's blog in 2018 (Google said it might originally have been from a site called 'Expel The Parasite' although it was reposted by the 'Occidental Observer'), or anyone really (especially without attribution) isn't acceptable on this board, sorry. I suggest that you take your ban and, instead of making another alt this time, think carefully about how you might actually contribute to this forum in future.

I don't think the artist is alt-right, but I do think there is an interesting parallel with Nazi propaganda... take this 1932 propaganda poster titled The Negro-isation of France in 100 years, captioned "the last non-colored French form the main attraction of the Paris Zoo" - there's a similar aesthetic to Peterson's, with the predatory black figures looming over the white people. This Nazi poster and Peterson's work are both interesting to review in the context of the current race riots in France, although I do not for a second believe Peterson has the same interpretation of this as the Nazis.

Yes, it's a pretty funny and thought-provoking image really. Black people in it represent a competent, essentially Western population, the neo-French (despite crude physiognomy); the legacy French are reduced to smug monkeys thoughtlessly going through the motions, grooming in their effete manner. Unpleasant as it might be for some, it's very different from your average modern day HBD-informed racist's idea that White people are superior on account of their cognitive capacity and affinity for civilized behavior; that they basically deserve higher status for some contingent merits. Assuming that Blacks surpass whites in those regards, would that image even feel bad for an average believer in the République? Or would he go «eh, why not»?

I wonder how we should understand the author's intent and conception of good and evil.

your average modern day HBD-informed racist's idea that White people are superior on account of their cognitive capacity and affinity for civilized behavior; that they basically deserve higher status for some contingent merits.

I don't think this is what HBD racists are saying. And if they are saying this, it is because they are trying to distract themselves from the underlying issue, which is that smaller, weaker people are afraid, on a physical, material level, of bigger, stronger people who are more prone to aggression and violence. Whites and Asians don't "deserve" higher status on contingent merits because they're smarter; whites and Asians "deserve" higher status in society because when you get in the woods, the strongest man wins. It's better to try to live in a world where we can have nicer things than in a constant brute-force competition, because then you don't have society; you just have the horror of nature, which is the very thing society is trying to protect us from to begin with.

Whites are about as big and strong as Blacks and bigger and stronger than Arabs (e.g. Algerians in France), pervasive cuckold fantasies about muh barbarian vigor notwithstanding. This is evident from racial composition in the upper rungs of combat sports.

Asians really are worse off though.

(Freedom of speech.jpg)

It seems pretty obvious from observation of sport that Whites and Blacks are bad units of analysis.

The average of all whites and all Blacks is meaningless; all the outlier athletes come from small subpopulations.

Asians really are worse off though.

Kung-fucels in tatters right now. Even as a distant observer, it's funny how the harsh objective crucible of MMA has deflated the mystique of traditional Asian martial arts.

Would MMA allow all the techniques taught in Kung Fu though? I don't think so.

I don't know enough to comment, but I was under the impression that pretty much anything goes in MMA except for kicking the balls and scratching out eyes and biting. Could very well be wrong!


The marginal techniques like eye gouging, finger breaking, blows to the back of the head, soccer kicks to the head of a downed opponent, etc., do nothing to prove hypothetical kung fu superiority.

The better fighter will be in a better position to gouge your eyes and to prevent his from being gouged.

Traditional tai-chi masters are indeed in shambles from MMA.

On the other hand, Muay Thai has been proven to be an effective fighting style (excuse the dramatic narrator).

This video is of a 1988 show bout between the top American kickboxer and a Muay Thai fighter, using limited rules preventing elbowing, throwing, and grabs, and limiting below-waist hits to a low kick. The kickboxer gets kicked in the leg so many times he starts dodging and running around at 5:50, and ends up carried away on a stretcher.

MMA rules allow low kicks, and elbows in some positions. Fighters study techniques derived from Muay Thai, along with other lineages like Greco-Roman wrestling and jiu-jitsu. And "MMA style" is just whatever works under the ring's rules.

Quoting https://slatestarcodex.com/2016/07/25/how-the-west-was-won/

An analogy: naturopaths like to use the term “western medicine” to refer to the evidence-based medicine of drugs and surgeries you would get at your local hospital. They contrast this with traditional Chinese medicine and Ayurvedic medicine, which it has somewhat replaced, apparently a symptom of the “westernization” of Chinese and Indian societies.

But “western medicine” is just medicine that works. It happens to be western because the West had a technological head start, and so discovered most of the medicine that works first. But there’s nothing culturally western about it; there’s nothing Christian or Greco-Roman about using penicillin to deal with a bacterial infection. Indeed, “western medicine” replaced the traditional medicine of Europe – Hippocrates’ four humors – before it started threatening the traditional medicines of China or India. So-called “western medicine” is an inhuman perfect construct from beyond the void, summoned by Westerners, which ate traditional Western medicine first and is now proceeding to eat the rest of the world.

One could say that the European far-right of the 1930s feared humiliation primarily in front of other Europeans, whereas those of the 2020s fear destitution and powerlessness at the hands of the other. The first is about a kind of racial cuckolding (maybe literally, given the Nazi obsession with the 'Rhineland bastards', etc.), the second is the direct fear of becoming destitute, irrelevant, or a victimized minority. I do think a lot of European ethnat rhetoric is strongly influenced by postcolonial Said-type discourse and by the experiences of decolonization.

An interesting episode in the history of German racial relations is the Reichstag's interracial marriage debate of 1912. They legalized it (or kept it legal, rather) in part because the social democrats showed the parliament photographs of pretty native Pacific Islander and Southwest African girls and even the centrists agreed they were as attractive as German women, and therefore acceptable.

They legalized it (or kept it legal, rather) in part because the social democrats showed the parliament photographs of pretty native Pacific Islander and Southwest African girls and even the centrists agreed they were as attractive as German women, and therefore acceptable.

These are the real conversations we need to be having.

As an aside, I'm currently reading a more recent-ish history of the Bounty mutiny and am being reminded of how devastating Pacific Islander women are to the underpinnings of European civilization.

I increasingly believe that politics, or rather most people's political views, is mostly a function of culture. It's all a matter of the cultural lens; perspective and values don't distinguish between the political and the cultural realm. Every generation is characterized by a specific dominant cultural lens that is unique in (a) what it identifies as a problem and (b) the solutions it prescribes in response to those problems (in practice, an ideal state of existence, which is generally just the inverse of the state the problems create - in other words, the norms the cultural lens implies and advocates for). Political views are simply the attempt to construct the reality that culture upholds as the ideal; culture is the architect and politics is the builder. That's why entertainment from previous generations, comedy in particular, isn't as enjoyable: culture, which entertainment plays a key role in (by conveying and constructing norms), is highly contextual.

But every generation thinks it has arrived at the correct perception of things, and as a corollary at the correct view of how things should be. When this perspective is implemented, it always falls short, and its shortcomings are evidenced by the fact that the implementation doesn't achieve what its supporters expect it to achieve. That is what moves thought: the dialectic - the implementation of the counterpoint, which reveals the excesses of the counterpoint and eventually necessitates a reversion to a midpoint that seeks to preserve the merit of both the status quo and the counterpoint. This constant movement through the dialectic forces thought and perception to evolve. It is powered by shifting perspectives, rooted both in realizing the limited merit of the previously implemented perspective and in the fact that the world being perceived is constantly changing (i.e. there are two types of movement: movement within the dialectic, and movement of the centerpoint of the dialectic - of what substance the dialectical framework is meant to address).

I often wonder: if the world had just stopped changing, would we have eventually arrived at a perspective that was objectively supreme, correct, and accepted? Would thousands of years of evolution of thought, with its ability to slowly shape its subject into a perfect response to whatever it is evolving against, eventually have brought us to a cultural lens that is a perfect understanding of how the world is and should be - and, further, would it eventually have brought us to a world that is objectively perfect? But to get back to the point, the reason I think we never arrive at that perfect solution is that the focus of this dialectical movement keeps changing. It's like constructing a car optimized to drive on roads, but the roads keep changing.

The civil rights law imposed the frankly retarded [see image; it's a page from James Burnham's book on his experiences in NYC academia in the 1930s] culture of 'some' whites - in this case, nominally Christian East Coast New Yorkers - on the entirety of the United States.

Yes, politics is downstream of culture, but political power allows a culture to impose itself on others.

It's incorrect to say 'politics' is purely downstream from culture. The culture of the US was irreversibly made worse by the Civil Rights Act, which allowed activists to use the political power of the federal government to change culture throughout the country.

/images/16884140806204338.webp

I would argue that both are often preceded by philosophy. Why do we believe that equality is even a social good, or that the common man should ever have a voice? For tens of thousands of years prior to the Enlightenment, the very idea was mocked. You were born into a social position and there you stayed. It was simply expected that if you were the child of a king, that alone gave you legitimacy as the ruler of your people. If you were the child of a peasant farmer, it was a waste to teach you to read because you were destined to be a farmer on some lord’s land. Nobody ever thought about it, or if they did, they came to the conclusion that this simply should be.

Likewise we understand the universe in a rational empirical way. For most of human history, it wasn’t so. The universe was run by some kind of spirits and that’s why things are as they are. That tower fell? God caused it.

And later on politics tries to enact things that philosophy has taught. We believe in equality, so we better do something because it’s not happening on its own.

You were born into a social position and there you stayed.

I believe social mobility by and large hasn't changed much, or at all, between the Middle Ages and now.

I suspect you've been psyopped by 'the Enlightenment', the age responsible for many myths such as 'medieval Europeans thought the Earth was flat', 'people didn't wash in the Middle Ages', etc.

The universe was run by some kind of spirits

Did Aristotle think so? I don't believe that to be true. So it's unlikely that such was a common belief among educated people in Europe in the past 2000 years.

For tens of thousands of years prior

Hunter-gatherers and such were and are very egalitarian.

It was the increase in population density and the rise of states that created any inequality in status. So, at most there may have been ~6000 years of people living in agricultural societies, most of which weren't really that unequal, being quite primitive.

From the 1964 "Suicide of the West" by James Burnham. Which ends with this black pill:

If a decisive change comes, if the contraction of the past fifty years should cease and be reversed, then the ideology of liberalism, deprived of its primary function, will fade away, like those feverish dreams of the ill man who, passing the crisis of his disease, finds he is not dying after all. There are a few small signs, here and there, that liberalism may already have started fading. Perhaps this book is one of them.

How’s that working out for him?

I’d be interested in reading more about how that book held up.

There is a level lower than culture: material reality. Unlike less intelligent beings, humans can adapt quickly to a new ecosystem by learning traits that are advantageous in that ecosystem. We don't have to wait for multiple generations for small changes in behaviour, we can develop a culture in a company in a matter of weeks. A national culture can evolve in centuries, while the corresponding differences would take at least three orders of magnitude longer if they were genetic.

Culture changes as the ecosystem changes and new cultural adaptations arise. These changes can be due to cultural shifts as well as to the material reality changing. Much of the cultural change we have seen in the past decades has happened in the parts of the world that consume the most oil. The social upheaval of the past century is less grounded in cultural innovation and more grounded in the ecosystem being fundamentally altered by fossil fuels. Hyperindividualism makes sense when salient mortality risks are largely gone. When there is enough material excess for people not to have to rely on social networks in order to get by, selection becomes a function of standing out in the crowd.

The Afghan culture is a function of small groups of isolated people trying to survive in a resource constrained environment.

Climates change, resources become more or less scarce, and pandemics, wars, and other factors will change the ecosystem. I do agree with human cultural change being a major driving factor, but the world around us has changed profoundly.

If you re-read the second paragraph, I think you’ll see we agree that it’s a combo of cultural and external change.

Yeah, this is an interesting one. What was that Scott Alexander concept, a scissor statement?

The artist seems to generally give a huge range of his figures the same kind of face no matter what role they are playing in the paintings.

Both his Mueller and his Trump in this one have the same kind of eyes and mouth as the supposed savage blacks in the other paintings, which makes it less likely that the black figures in the other ones are supposed to be ethnic Africans.

The black figures here and here pretty clearly are meant to be representations of an American police state, not of virtuous ethnic Africans.

Then there's stuff like this where aggressors and victimized look the same.

Ancient Greek vase painting, with its sharp outlines, exaggerated human figures, and black/red/white color fills, has a heavy influence on his stuff. Of course he might be using the colors racially, but as I have pointed out above, this is far from clear.

His attacks on America and his hatred of Trump and cops are not necessarily signs of any sort of extreme leftism. All those are common attitudes among people ranging from boomer liberals to libertarians. In our political climate, they of course code left, but there are ten million fairly moderate boomer Hillary voters who share those opinions yet are not some sort of frothing antifa members.

In general I have a very low opinion of any piece of commentary that consists of "look at how much they hate you/us". You see it in a much more mainstream way on the US left re. certain recent SCOTUS decisions too. It's an embarrassing way to act, a facile way of trying to drum up rage and aggression and to radicalize your side. Hatred is not uncommon in politics, but it seems to me that it is often less of a motivation than the opposing faction expects.

I once had an internet argument with someone who argued that Mugabe's land expropriation had been motivated by a hatred of whites. I replied that having done quite a bit of business in Zimbabwe, including with white Zimbabweans who still run many major corporations and are quite prominent in business in Harare, I didn't think that seemed to be the case, and had never noticed much racial animus toward whites by blacks in the country. White farm owners were targeted because they were a small minority that owned a lot of land the government wanted to redistribute to veteran soldiers to try to avoid a civil war; whites were the unfortunate victims of that policy. I don't think most ordinary Germans hated Jews in 1939, even though my grandmother and her parents fled the Holocaust and many members of my extended family died in it; in her mid-90s today, she doesn't have any hatred for Germans and enjoyed speaking German on her many vacations there and to Switzerland.

'They hate us, so I will hate them' is the eternally flawed flipside to the 'mistake theory' fallacy, where everyone in the world is just a temporarily embarrassed western liberal with the same ideals but different views on execution. In truth, they usually don't hate you, they're just different to you. And they'll screw you over to save themselves, which is unfortunately true of almost all people, individually and in groups.

they're just different to you

Did you mean to say that they're indifferent?

And they'll screw you over to save themselves, which is unfortunately true of almost all people, individually and in groups.

White people in shambles, bless their hearts 🙏. What negative in-group preference does to a mf. But since they've been nothing but nice to me, I return the favor by doing my best to warn them about the non model minorities, while not biting the hand that feeds.

I don’t have negative in-group preference at all, I just think that outright hatred is less common as political motivation than people think. I also think that ‘mistake theory’ is broadly wrong (and generally vaguely western supremacist, for what it’s worth), but I think a comprehensive understanding of ‘conflict theory’ suggests that conflict is usually a result of expediency rather than hatred.

I did mean difference.

Different peoples have different cultures, identities, interests. They are usually fine with others as long as their own interests are not threatened. This is the best and most full argument against immigration (and one I agree with). But hatred is too strong a conclusion to draw from it. Difference is enough.

I replied that having done quite a bit of business in Zimbabwe, including with white Zimbabweans who still run many major corporations and are quite prominent in business in Harare, I didn't think that seemed to be the case, and had never noticed much racial animus toward whites by blacks in the country.

...and did they counter-argue the pretty obvious selection bias given your context and who you were working with specifically, i.e. the surviving winners and those who had monetary incentives to put you at ease?

I don't think your argument supports what you think it does. The point of 'the collective hates [X]' isn't that every member of the collective shares the same vibe of the group - an objection which would itself be a form of fallacy - but that the group effect is dominated by those who do. Most ordinary Germans may well not have hated Jews in 1939, but they were also onboard with a regime that absolutely did, hence why so much of German post-war political identity had to confront the 'I wasn't directly involved, and thus not my issue' collective identity in order to rehabilitate a collective German political identity.

Likewise, the successful surviving white businessmen you met who were willing to work amiably with you may not have had significant experiences with those who shared the regime's stance... but the white businessmen were, by definition, the survivors who made accommodations and allies and friendships with/within the regime to protect themselves. The ones who didn't - the ones who would have been dispossessed out of spite - wouldn't still be in business for you to deal with.

The point is that Mugabe was more like Carl Schmitt than he was like Hitler. Many whites, including an extremely racist Australian I know who met him and knew him quite well, think he didn’t hate white people. Mugabe did not act against whites for the entire first 20 years of his presidency. And I think, by the way, that my theory is borne out in practice. The far right, as linked by OP, believe American blacks - by and large - have a deep and unrelenting hatred for American whites. Do you agree? I don’t. I don’t think most black Americans care much at all about American whites. That, and nothing more, was my point.

What, in your estimation, is the percentage of black Americans who hate white people?

He doesn't need to personally hate white people in order for them to be a convenient whipping boy -- see, uh -- J Edgar Hoover springs to mind?

But if somebody acts as though they hate you for other reasons, is it really worth parsing out the difference between the people who actually hate you and the ones that are only pretending to be racist?

The point is that Mugabe was more like Carl Schmitt than he was like Hitler.

Well, obviously, but the scope of people who have both the animosity and the means to attempt genocide are very narrow. This is a bar so low the only reason it's not a tripping hazard is the straw.

The point you were challenged on was that you weren't in a position to hear the contrary experiences of others who might have differed from your business partners, who had financial incentives to assure you that you could make good money with/for them.

Many whites, including an extremely racist Australian I know who met him and knew him quite well, think he didn’t hate white people.

You're conflating the individual with the group, which was the same error as in your Nazi metaphor. Just as members at the bottom of a faction may not share the faction's vehemence while it remains fair to characterize the faction a certain way, the same is true for the people at the top of a faction. Leaders may not believe a certain narrative, but they can also be comfortable co-existing with it / leveraging the people who do / accepting the general complicity of not challenging an unjust system they partake of.

Mugabe did not act against whites for the entire first 20 years of his presidency.

Aside from not really being relevant to changes over time (Mugabe not having static policies over 30 years implies he had changing opinions, not that he never had certain opinions), the first 20 years of Mugabe's presidency were more or less the American unipolar/western hyper-power period, which included multiple American interventions in Africa, while the last 10 years coincided with both the post-American/western low of the financial crisis and pre-ISIS/post-Iraq... both of which offered opportunity and basis for movements to arise blaming nebulous white-west types as scapegoats.

And I think, by the way, that my theory is borne out in practice. The far right, as linked by OP, believe American blacks - by and large - have a deep and unrelenting hatred for American whites. Do you agree? I don’t.

Am I expected to deny the OP before or after I deny beating my spouse?

I don’t think most black Americans care much at all about American whites. That, and nothing more, was my point.

And your supporting argument of personal experiences in Zimbabwe doesn't support this point, and it was not immune to challenge on the grounds that you were self-selecting the narratives that would deny an issue if there had been one.

People whose jobs it is to convince white people, or people with many white bosses and coworkers, to invest money in a place are typically not going to tell said white people that their money is more likely to be stolen on account of them being white.

Plato would likely argue that an intellectualized interpretation of art should have no influence on whether art is permissible. The purpose of art is to better the mind and soul of the median viewer, the citizen. If the art fails to do this, it ought to be banned. The public viewer is not going to over-intellectualize the art, but come away with an essentially intuitive understanding of what is happening. Peterson’s art is degenerate, and has no good in it whatsoever for a citizen of Plato’s Republic, for these reasons:

  1. The viewer just saw an ugly and violent scene, but with no practical and memorable warning to his own conduct, and with no cathartic release of emotion. In other words, the scene promotes stress but with no prosocial or beneficial emotion or consequent. So, you’ve just made a common person stressed for no reason.

  2. Not only have you wantonly stressed the viewer, but you’ve done this when he has expected something quite the opposite, and you’ve taken the spot of something that could otherwise have been very beneficial to the median citizen.

I find the question of what is beneficial and what is degenerate art easy to answer; it just requires reasoning about the implications of the exposure. Let’s consider It’s a Wonderful Life. It’s a stressful movie with some tragic elements, but the stresses act as a warning to your practical conduct in world affairs. This will increase the chance of living a wonderful life in the future. Let’s consider the parable of the Rich Man and Lazarus. A true tragedy with a fate of everlasting torture! What could be the benefit? Well, to induce a beneficial fear. Let’s consider A Clockwork Orange. Does it condone violence? Yes, and perhaps it does this too well — but it asks the viewer the important question of whether we ought to modify behavior using top-down conditioning (apropos!). How about, hmm, a Wes Anderson movie? If there are beautiful shots and scenes that soothe a person and inspire someone to live a more wholesome life, it is good. And so on. What would be banned? A show like Ozark that is a kind of “stress porn” stimulator with no discernible practical takeaway for your life. A show like the Kardashians’, which reduces sum total happiness among women. Fast and Furious movies. And so on.

The Peterson work they picked to go underneath the Eiffel Tower is not one of the violent ones. It's based on some 1400s Italian book where a lover's kiss wakes someone from eternal slumber. If you don't project ideas of racialized dominance onto the stylized white and black figures, it's a sort of romantic piece with people dancing around the central couple.

The violent ones are shown in galleries to precisely the sort of person likely to develop an overly intellectual view of art.

Cleon Peterson is a leftist creep and makes it plainly obvious in his work. If you saw his 'art' on the wall of someone's house, you would immediately assume they're part of some villainous organization, or that they want to look like a villain.

Go look through his Artsy page: https://www.artsy.net/artist/cleon-peterson

It's pretty clear he hates the USA (Destroy America), Donald Trump (Stop the Virus, Useless Idiot, and about 1/10th of his portfolio), and racists (Practice Intolerance). There's not an apolitical bone in his body. I challenge anyone to tell me that they've looked through 4 or 5 pages of his work and believe he could be alt-right.

Say I made a painting of a long-nosed, weaselly, greasy, fat, lecherous bastard clutching onto coins being hung from a lamp-post by some stern-faced Teutonic workers - people would quite reasonably assume it was aimed against Jews and that I was a Nazi. I might protest that it was really about destroying the values of greed with hard work - that it was just timeless symbolism. Yet it's pretty obvious that it's not just about that. Images have meaning. Ideas have meaning.

If you make a bunch of paintings about brutish, Uruk-Hai looking blacks slaughtering whites, then people are going to make perfectly reasonable assumptions about the implied meaning, based on context and the clear slant of the artist.

Is this to say that you actually believe the interpretation in OP's first link, that is, that the artist (1) intended the literal racial interpretation and (2) believes that such a future is desirable? The "Uruk-Hai looking" figures are intended as the protagonists? Can you think of any historical example where a political group depicted themselves or their allies in such a fashion, without the slightest connotation of righteousness, beauty or heroism, or do you believe that your outgroup is actually the most morally and aesthetically alien group of humans to have existed on the historical record?

As far as I can tell the original video is no longer online.

An unofficial reupload on YouTube exists. I wonder if Europeans were similarly offended when they discovered that people in African and Asian art look more like natives than Europeans.

It's a peculiar sort of chauvinism.

Can you think of any historical example where a political group depicted themselves or their allies in such a fashion, without the slightest connotation of righteousness, beauty or heroism

I can think of many actually. It's a very common thing in warfare and other pursuits where it's in your interest to be seen as a savage with no regard for decency.

If you want a recent one, consider Russians depicting themselves as Orks. If you want an older one, consider pirates.

But interestingly in the particular example we're seeing here (modern leftist ideological art), there is an ideological reason for it, which is the explicit deconstruction of those things you list: beauty, righteousness and heroism. Those are all oppressive norms of whiteness that must be abolished. And instead we must "center" "ugly bodies" and "black bodies".

that the artist (1) intended the literal racial interpretation and (2) believes that such a future is desirable?

Well he couldn't possibly have missed it, it's pretty damn obvious. Contra OP's suggestion that Mr Peterson is apolitical, he's clearly aware of and makes obvious use of political imagery. A quick glance through his portfolio reveals that. A quick glance at the titles of his paintings reveals that.

Motte: Timeless representation of power dynamics and authoritarian violence with caustic debauchery in a revealing display of...

Bailey/What's In Front of Your Lying Eyes: Destroy America, kill Trump, kill racists, the police are oppressive, democracy is a joke, orgies of violence with the strong and obvious implication of whites being killed en masse.

Context is key. If you look at someone's portfolio and just see stuff like this then sure, you could say he might be far right. There is that whole day of the rope meme after all: https://www.artsy.net/artwork/cleon-peterson-absolute-power-7

Or if his portfolio is all stuff like this then sure, he might be a centrist: https://www.artsy.net/artwork/cleon-peterson-what-have-we-lost

But that's not predominantly what Mr Peterson produces; I've looked through his work and it's pretty clear! Didn't you have to go through English in secondary education, where they'd teach you how to find hidden meaning in far less obvious texts? Robert Frost's Fire and Ice, for instance - I was taught that it actually referenced future world wars which might be fought over hot emotions like desire, or stem from a chilling lack of care for the plight of others; that inaction might doom the world. People read ridiculously far into hidden meanings in poems and art, yet we're not allowed to take what's immediately obvious from Mr Peterson's portfolio? It all but drips malevolence.

Is this to say that you actually believe the interpretation in OP's first link, that is, that the artist (1) intended the literal racial interpretation and (2) believes that such a future is desirable? The "Uruk-Hai looking" figures are intended as the protagonists?

I don't believe they're intended as the protagonists per se. You were looking for a historical example, so let's look at some ancient Roman art - https://upload.wikimedia.org/wikipedia/commons/8/80/Tunisia-3363_-_Amphitheatre_Spectacle.jpg

People making art like this are not identifying with the beast - the protagonist is actually the observer, who is seeing a savage animal painfully kill, torture and degrade their enemies. Cleon himself isn't actually black either, which isn't what you would expect if the black figures were meant to be the protagonists as you describe... but it does match up with the reading that these white figures are his outgroup, and that his art is just glorifying the dispossession, dismemberment and rape of his outgroup in the manner that he believes they would find the most distressing.

And given the apparent racial overtones of the art, who’s to say that Peterson isn’t a rather extreme member of the Alt-Right, rather than a progressive leftist, and is trying to depict blacks as vicious barbarians that must either be evicted or destroyed?

Contra

I'm not going to dispute that Peterson is a leftist,

From here it looks like you were either purposely trying to deceive people here, or are so stupid and incompetent that you cannot be bothered to spend ten seconds looking at an artist's body of work before trying to write intelligently about the topic. I don't want either of those to be the case so I'd really like to hear a good explanation for why you think this is acceptable behaviour in a conversation (not trying to backseat mod or anything, but if somebody did this to me in a real conversation I'd be seriously offended and want to stop talking to them).

From here it looks like you were either purposely trying to deceive people here, or are so stupid and incompetent that you cannot be bothered to spend ten seconds looking at an artist's body of work before trying to write intelligently about the topic.

"I don't want either of those to be the case" is not enough of a disclaimer for throwing a line like this. Please be less antagonistic even if you think someone is being disingenuous.

I don't understand why the great grandparent post of the chain did not already invite a moderator response. Do you consider referring to public figures as "(outgroup) creep[s]" to be within the rules, conducive to maintaining a good tone of debate here or at all inviting (outgroup) to participate, or do you think there are some extenuating circumstances here that justify it in this particular case? As childish as the impulse is, I'm really finding myself wishing I could go around referring to moderately respected figures on the other side as "rightist creeps" until I find out directly, but I presume that the only thing that would happen would be downvotes and outpourings of organic hostility that would make any modhat warnings on top of them superfluous in broadcasting how one is now okay around here but not the other.

Do you consider referring to public figures as "(outgroup) creep[s]" to be within the rules, conducive to maintaining a good tone of debate here or at all inviting (outgroup) to participate, or do you think there are some extenuating circumstances here that justify it in this particular case?

Generally speaking, we'd prefer people not just throw insults, but public figures are more or less fair game as long as there is some substance to the post and it's not just ranting about how much you hate Trump or Biden or Cleon Peterson. But yes, if you were complaining about, say, right-wing media and called Matt Walsh a "fascist creep," you'd probably get downvoted, but you would not be modded for that alone.

Ugh. I don't think this is a good interpretation of the rules (and I think I explained in a parallel post why I think that). Allowing this sort of insult adds nothing to the discourse, raises the temperature and very likely turns away people in a way that reinforces any existing ideological slant as it simply allows dominant majorities to assert their dominance. Maybe this is wishful thinking on my part but I think we used to be much stricter about that sort of thing, which, yes, resulted in a constant low rumble of discontent -- but it's not like even CWR, which embodied the "we will not stifle your ability to express your righteous feelings" approach and predictably listed right until it capsized for it, didn't have the same amount of malcontents for whom even the little rules that were still enforced were too much.

On that matter, how would you feel about "Jewish creeps"? (I'm now noticing to my dismay that my phone's predictive keyboard app has already learned to suggest the second after the first thanks to this subthread.)

Ugh. I don't think this is a good interpretation of the rules

If you can persuade Zorba we should crack down on insulting public figures, we'll do that, but generally speaking, we've never modded someone just for being mean to celebrities and politicians. Only if their entire post is a screed about Person I Hate or general booing. Frankly, I am not willing to go through an election season trying to enforce "charity" towards all political candidates. "Trump is a big orange fat-ass!" is a pretty easy comment to mod because it's low effort and inflammatory for no good purpose, but IIRC you (or someone else) wanted me to mod someone for calling Kamala Harris a "weak candidate." Come on.

On that matter, how would you feel about "Jewish creeps"

The rules against making derogatory generalizations about a broad group of people (which includes posters here) covers that.


Would you be bothered if Ranger had just called him a regular creep and a shitty, hateful artist, rather than specifying the group identification of the creepiness?

I would consider it bad, but not bad in a way that specifically infringes on the goals of this forum like the political group qualification does. Similarly, it surely would make a difference if someone were called a "Jewish creep" (and probably draw much more mod attention, as they still seem to be interested in keeping the forum from pushing away anyone outside of the "JQ right").

Trump

I don't think those are good either, but well. It's already been the case for quite a while that the more intellectual right wingers want to lower Trump's status so as to move on to a better strategy, explaining why organic opposition to anti-Trump posting is lower. Finally, the group identifier really is doing a lot of work. (Compare calling Epstein a "creep" to calling him a "Jewish creep".)

One is just denigrating the person; the other one is suggesting that the imputed negative qualities are related to, characteristic of or even a consequence of being a member of the group in question.

The original comment presents an argument for why the paintings can be perceived through either a far left or far right lens.

This argument exists because you are being deceptive about the artist behind them. You called him "apolitical" - again, you were actively lying in order to bolster your argument, in the same way I would be if I took one of those dumb Trump NFTs depicting him as a superhero and said "Oh, we can't really be sure of the original artist's beliefs - you could interpret this from a left wing OR right wing perspective!".

However, in both comments, what I'm saying is that it's irrelevant what Peterson actually believes

You were the one who claimed that he was apolitical, so you very clearly thought that what he believed was actually relevant, otherwise you would not have brought it up.

(and for the record I don't think his intention is to depict black people slaughtering white people because he hates whites or something

My reading is that this work is as shallow as it appears to be on the surface - a depiction of his outgroup (right wing/flyover white people) being humiliated and tortured in the way that he thinks they would find most distressing (racial violence from THEIR outgroup). When you look at the piece in the context of the rest of his work, the most obvious interpretation seems all the stronger to make.

You might want to look at a broader range of his stuff. See what I wrote here. He is certainly not apolitical, but I do not think that the idea that his work is meant to depict "his outgroup (right wing/flyover white people) being humiliated and tortured in the way that he thinks they would find most distressing (racial violence from THEIR outgroup)" holds up.

This appears to be a straight copy-paste of the following article:

https://caffeineandphilosophy.com/2017/05/15/the-violent-artwork-of-cleon-peterson/

Written by "C.B. Robertson" on May 15, 2017.

This author has clearly never considered that this minimalist and highly symbolic artwork (by the commenter’s own admission) might not be depicting the different colors of people literally. The black men and the white men may or may not be blacks and whites; they could very well be the darker halves and lighter halves of a shared human nature ... What this means for the cultural Platonists is that even if we are not libertarians (as I am not), we still ought to act as if we are civic libertarians on most matters that fall outside our realm of expertise, and even some matters that we do know a fair deal about.

A friend of mine is hugely into art, has an MFA, and has worked at NYC galleries. We were touring some galleries once, looking at some modern art sculpture that had some high-concept title and description on it ... but it kind of looked like poop. I asked him, "Do you think the artist knows it looks like poop?" He replied: "Of course, that's part of the game." And then later I pointed out one that looked phallic, and my friend said "of course the artist intended that."

The grug brain / midwit / topwit meme really comes to mind here...

Modern artists seem to be addicted to trolling. Telling people who point out the obvious, intuitive message of some piece of art that they have a dirty mind and are simply not sophisticated enough is part of the trolling. I don't think the artist is propagandizing in favor of white genocide -- but rather he is probably getting a private chuckle from watching all the sophisticated, effete, white male art critics who will praise the artwork and its "symbolisms of unity" while studiously avoiding saying the blatantly obvious.

There is no reason to reward this trolling with display in public areas. As /u/coffee_enjoyer points out, this "art" does not educate us, does not spiritually uplift us, and does not display some amazing abilities of craftsmanship, so there is no reason to give it any respect at all. The mural should be replaced by something better.

I lost all faith in the interpretation of 'art' by others due to this sort of trolling. It just ceased to be worth my time beyond my own personal judgment and enjoyment.

For a personal anecdote, the first time I came across this idea of trolling was when an artist friend of mine was trying to get funding for an orchestra to play the Jaws theme at one of the most popular beaches in our city as part of a larger display. Once you've seen behind the curtain, it's impossible to unsee it.

The proverb that goes "Strong men create good times, good times create weak men, weak men create hard times, hard times create strong men" is almost entirely wrong.

For the purposes of this chunk, which I've decided to put into its own top-level post, man has two natures. The survivor nature is concerned with enduring and overcoming threats to one's life and one's society. The thriver nature is concerned with extracting value from life.

The ones that are called "strong men", i.e. those in whom the survivor is dominant - they love hard times. That's their element, that's where they're at an advantage, and they grow cranky and depressed when the environment is not competitive enough for them. Naturally, hard times create strong men, by incentivizing the survivor nature.

Strong men create hard times. It's what one can observe quite clearly anywhere with an abundance of them. It also follows from the incentives - why would they not reproduce the environment that favors them? Most of the time, there are enough other tribes around that much of the hard-time creation is aimed at them. However, strong men love hard times so much that they gladly spare some for their own tribe. When the outer enemies run out of juice, those with survivor dominance who have trouble adjusting turn their attention fully inward. (Recall that tongue-in-cheek alteration that goes "hard times create strong Slavs, strong Slavs create hard times"?)

Weak men create good times. Weak men love good times, and it is often mentioned as a bad thing. (I disagree.) But it is not the survivor who creates good times. Naturally, there are very few people who are fully of one nature, and strong men do create good times, usually for others and sometimes for themselves. But only to the extent that the thriver is present in them.

The thrivers adjust society to be more suited for thriving, to have more good stuff and more time to enjoy it. They do it when there is space for that indulgence. An overabundance of survivors, particularly the inflexible ones, gets in the way of that as much as it might help such a society endure. A society composed entirely of pure survivors is the image of boots stamping on human faces, forever. A society composed entirely of pure thrivers will dwindle in a few generations.

As someone who puts value primarily in my individual life, I know which one I'd prefer and which one I'd rather not exist at all.

Do you have examples of societies where strong men created hard times? And an explanation of how good times somehow emerged from the perpetual cycle of hard times -> strong men -> hard times?

Stalin was an example of what I described as the survivor-dominant type, and by all accounts times under him were pretty hard both in the heart of the USSR and on its periphery, exceeded only by the even harder times Hitler decided to bring east.

Fortunately, strongman leaders tend to croak, and that is how the cycle can weaken.

As someone who puts value primarily in my individual life, I know which one I'd prefer and which one I'd rather not exist at all.

This is an interesting philosophy, but here's where you lose me. Do you want society to dwindle? Because that's what you think happens when strong men don't exist at all.

Is this something you came up with on your own? I have never heard society explained this way before, so I assume it is, but it does feel like it has some value as a perspective, and I have a really strong sense of deja vu about it. Or rather, it feels like an unstated assumption underpinning many aspects of modern society.

To clarify: if having to choose between two extremes, I'd prefer no future rather than boot future.

Is this something you came up with on your own?

I'll resist the urge to ask "does anyone really come up with anything on their own" and say this isn't directly based on me analysing some philosophical movement or author. I had an argument on the Motte and wanted to solidify my objections into a separate post.

I think there's an extra pole in there. My own model is (and note that these are deliberately twee and modern-vocab terms) Chad, Normie, and Degen. Chads exist as a result of hard times, and are both the stereotypical hard men of the saying, and further them via violence and intra-Chad competition. Normies move hard times to good times as the result of cooperation and coordination. And Degens exploit the social structures of Normies, weakening them to the point where the structure no longer benefits people, and then people either drift away or some combination of environmental pressure and incoming Chads breaks the organization entirely, you get chaos, the Chads start to thrive, and the cycle begins again. It was noted belowthread that the grand Teutonic war machine lost out to the likes of Audie Murphy and his ilk; that is absolutely the case. It is also the case that, generations later, that ilk lost out utterly to Afghans with AKs and IEDs, both in actual military conflict, and in the battle for hearts and minds.

Basically, you've got a three-pole attractor scenario, a lot like male lizard mating strategies. I'm also open to better name suggestions for the three groups, but I feel that the names I picked are evocative enough to justify them.

In this specific type of lizard, you've got monogamous lizards, alpha large-territory-holding lizards, and pass-as-female-to-sneakily-mate-with-the-actual-females lizards. Monogamous lizards get driven out by alphas, alphas get cucked by infiltrators, and infiltrators don't pass well enough to fool monogamous lizards and can't successfully cuck them.
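To make the "three-pole attractor" concrete, here's a minimal replicator-dynamics sketch of that kind of rock-paper-scissors dominance. The payoff matrix and starting shares are generic illustrative assumptions, not lizard field data:

```python
import numpy as np

# Generic rock-paper-scissors payoffs (assumed): each type beats one
# rival and loses to the other. Row/column order: monogamous, alpha, sneaker.
A = np.array([[ 0.0, -1.0,  1.0],   # monogamous: lose to alphas, resist sneakers
              [ 1.0,  0.0, -1.0],   # alphas: drive out monogamous, get cucked
              [-1.0,  1.0,  0.0]])  # sneakers: cuck alphas, fail vs monogamous

x = np.array([0.5, 0.3, 0.2])       # initial population shares
dt = 0.01
for step in range(1, 30001):
    f = A @ x                        # current fitness of each strategy
    x += dt * x * (f - x @ f)        # replicator equation
    x = np.clip(x, 1e-9, None)
    x /= x.sum()                     # keep shares a valid distribution
    if step % 7500 == 0:
        print(step, np.round(x, 3))
```

No strategy ever wins outright; the shares chase each other around the interior equilibrium, which is the same cyclic picture I have in mind for Chad/Normie/Degen.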

Not that this is any better, but it tracks with the Geeks, MoPs, and Sociopaths framework.

If you're looking for alternate descriptors, I'd use Leaders/Founders/Builders, Followers, and Parasites. Basically one small class of people who can catalyze change, the vast majority of people who follow the first group and by doing so produce surplus value, and a third group that does not produce value but instead consumes it.

It was noted belowthread that the grand Teutonic war machine lost out to the likes of Audie Murphy and his ilk; that is absolutely the case

The Teutonic war machine didn't lose to Audie Murphy and his ilk.

It lost because it was an industrial war, and German war-making potential would only have approached that of the US had it conquered and fully repaired the entirety of European industry, from Normandy to the Urals.

It lost because Americans were competent fighters enjoying immense material superiority. Something like 5:1 in tanks on the Western Front, 10:1 in planes, and probably at least 4:1 in artillery throw weight.

For more information, see e.g. 'The Wages of Destruction' by Adam Tooze.

The only path to win for Germany would have involved defeating the USSR.

It is also the case that, generations later, that ilk lost out utterly to Afghans with AKs and IEDs, both in actual military conflict

This is again bullshit. The American military in Afghanistan barely ever lost a battle. The Taliban, and Afghans in general, aren't great fighters, and they had barely any equipment.

The Taliban won simply by surviving and disrupting the government; that's all they had to do.

America simply had no viable strategy for turning Afghanistan into a functioning state. It was never willing to send in enough soldiers - you'd probably have needed a million of them, kept there for thirty years while creating a fair civil administration, and so on.

I vaguely recall reading about a simulation of an iterated trembling-hand Prisoner's Dilemma tournament that evolved like this. You'd expect the "Tit-for-Tat" strategy (let's call it "Copycat", because Nicky Case is awesome) to be the natural winner. But once it's naturally won, in a world full of Copycat players of trembling-hand games it pays to be "Cooperator" instead, because unlike Copycat the always-cooperate strategy doesn't get into nearly-interminable feuds when it or its partner makes a mistake. But when the world then starts to fill up with Cooperator players, it only takes a single always-defect "Cheater" mutant to sweep through the population, so Cheater takes over next. And finally, when the world is full of Cheaters, even a single pair of Copycats has enough of an advantage via mutual cooperation to take over again, and the cycle repeats.

Although I don't recall that simulation showing any equivalent of the "Copykitten" strategy, which you'd think might be able to short-circuit the Cooperator takeover. I'm not sure whether that's because it just never managed to take over or because the simulated agents didn't have that strategy as an option.
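For anyone who wants to poke at this themselves, here's a minimal sketch of that kind of evolutionary tournament. The strategy names follow Nicky Case's demo, but the payoff table, noise level, and reproduction rule are my own assumptions, so treat it as a toy rather than a reconstruction of the simulation I half-remember:

```python
import random
from collections import Counter

# A standard PD payoff table (assumed), indexed by (my move, their move).
PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (-1, 3),
          ("D", "C"): (3, -1), ("D", "D"): (0, 0)}
NOISE = 0.05   # trembling hand: chance an intended move flips
ROUNDS = 20    # rounds per pairing

def move(strategy, their_history):
    """Compute a strategy's intended move, then apply the trembling hand."""
    if strategy == "Cooperator":
        intended = "C"
    elif strategy == "Cheater":
        intended = "D"
    elif strategy == "Copycat":    # tit-for-tat
        intended = their_history[-1] if their_history else "C"
    else:                          # Copykitten: tit-for-two-tats
        intended = "D" if their_history[-2:] == ["D", "D"] else "C"
    if random.random() < NOISE:    # the mistake that starts feuds
        intended = "D" if intended == "C" else "C"
    return intended

def play(s1, s2):
    """One iterated game; returns each side's total payoff."""
    h1, h2, p1, p2 = [], [], 0, 0
    for _ in range(ROUNDS):
        m1, m2 = move(s1, h2), move(s2, h1)
        a, b = PAYOFF[(m1, m2)]
        p1, p2 = p1 + a, p2 + b
        h1.append(m1)
        h2.append(m2)
    return p1, p2

def generation(pop):
    """Round-robin scoring, then the five worst imitate the five best."""
    scores = [0] * len(pop)
    for i in range(len(pop)):
        for j in range(i + 1, len(pop)):
            a, b = play(pop[i], pop[j])
            scores[i] += a
            scores[j] += b
    ranked = sorted(range(len(pop)), key=lambda i: scores[i])
    for loser, winner in zip(ranked[:5], reversed(ranked[-5:])):
        pop[loser] = pop[winner]

pop = ["Cooperator"] * 15 + ["Cheater"] * 15 + ["Copycat"] * 15
for gen in range(60):
    generation(pop)
    if gen % 15 == 0:
        print(gen, Counter(pop))
```

Seeding the starting population with a few "Copykitten" players is then the obvious way to test the short-circuit question; whether the full Copycat-to-Cooperator-to-Cheater cycle appears depends on the noise and payoff numbers, which is rather the point.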

grand Teutonic war machine lost out to the likes of Audie Murphy and his ilk; that is absolutely the case

Seeing this repeated is too much for me. If the 'Teutonic war machine' lost to anything American, it was the 1/3rd+ share of global industrial output and a continent's worth of resources supporting the Europeans - and it was those Europeans that the Germans lost out to. List Americans responsible for the massively successful industry, if anyone.

List Americans responsible for the massively successful industry, if anyone.

Rockefeller, Ford, Vanderbilt, Edison, Dow, Carnegie... of all the things to imply America was short on, you picked titans of industry?

EDIT: wait, I think I misread what you were saying. Nevermind...

I did not say America was short on them; I implied the opposite. By 'if anyone' I meant: if you're going to list any individuals, list ones who represent the forces that were of foremost importance in US success in the two wars.

My own model is (and note that these are deliberately twee and modern-vocab terms) Chad, Normie, and Degen. Chads exist as a result of hard times, and are both the stereotypical hard men of the saying, and further them via violence and intra-Chad competition. Normies move hard times to good times as the result of cooperation and coordination. And Degens exploit the social structures of Normies, weakening them to the point where the structure no longer benefits people, and then people either drift away or some combination of environmental pressure and incoming Chads breaks the organization entirely, you get chaos, the Chads start to thrive, and the cycle begins again.

This sounds pretty similar to the dicks/pussies/assholes trichotomy popularized by Team America: World Police.

I think it’s somewhat true that hard times create stronger people. The problem is that in order for human brains to mature properly, they need challenges to meet. The thrivers tend toward immaturity; they would rather play games and put forth minimal effort toward useful things.

I mean sure, but on the other hand in some sense the immaturity (play etc.) is a valid purpose of humanity. What else are we striving for with the term "good times" if not a reduction in demand for useful things, leaving more overhead for playing games?

Well, in my view, you do need a balance of both, but if you end up creating a completely “playful” culture, a lot of things don’t happen simply because those things that need to happen are hard and boring. This is a problem both personally and in wider society.

On a personal level, things like getting a job and doing it, cooking and cleaning for yourself are not exactly fun. And things like gaming, internet scrolling, partying and so on are fun. So a lot of people choose the latter. They take minimal jobs if they take one at all and spend the rest of their time playing. They accomplish very little and end up less happy because they haven’t accomplished much. (https://youtube.com/watch?v=DSYjCgXKOXE)

On a more societal level, building things, fixing things and getting along with everyone else is necessary to keep society humming along. Those things are boring. Who wants to pay taxes to fix roads? That’s not very sexy. Who wants to do the hard work of learning advanced math so they can invent and build important things to make society better? Sitting around discussing literature is much more fun. And self-control is not as much fun as doing whatever you want whenever you want to. But a purely playful culture just doesn’t work: unless people know what the rules are and that you’ll mostly go along with them, they can’t cooperate the way they’d have to for society to work.

I mean, I agree, but you could imagine a society that was all work, zero play, 16 hours a day until you die. Any money you are paid for your labor is only reinvested to make you a more effective employee. Children are still raised (16 hours of schooling and training per day, enter the labor force at 12), but they refund their parents the cost of raising them and thus are merely another labor-raising device. All fun that one has is optimized for perfect recovery to maximize socially useful labor. I think if we look at why such a society would be bad, we find what the proper role of fun is: this society doesn't seem to be for anything aside from itself. Is society for man, or is man for society?

Whereas from the "fun" perspective, or rather the "human values" perspective, we find that we don't need to justify labor: a life with a balance of meaningful challenges, self-actualization and silly fun seems preferable, even on its own merits, to a life of only one of them. So there are two arguments for labor. First, a society with only fun quickly runs out of fun overhead. This is an argument that even fun-maximalists will embrace, but it doesn't give you meaning in a post-singularity setting where the amount of labor strictly required for fun maximization is zero. The other is that meaningful labor is fun. (At least, if we stretch the meaning of fun somewhat, to mean "fulfilling".) This offers a blueprint for a post-singularity world of voluntarist labor. And in that model, we may imagine that some people genuinely are most satisfied by a life filled entirely with vapid fun, and so what? Their fun does not diminish mine.

I mean, I agree, but you could imagine a society that was all work, zero play, 16 hours a day until you die.

For most of human history, this has been the case. And the demand for this kind of work hasn't gone away just because we live in the modern world. Someone, somewhere, has to do the work. Maybe the work's become more dispersed and technological abstractions have made managing the load easier, but the work itself hasn't disappeared. And a lack of respect for that burden, encouraging people to ignore problems in favor of their leisure, is only going to exacerbate the problem in the long run.

Society may not be for anything aside from itself. But for most people, that seems to be good enough when you look at the ways people live their lives. I tend to have much more of a collective view of humanity than an individual one. I struggled with the paradox of thinking through this for a long time. And I still do. Thinking of oneself as an individual is important but it's not paramount, IMO. People live embedded in communities. They live within a context of other human beings that they can never completely and permanently isolate themselves from. Despite being individuals, human beings aren't 'only' individuals. And being an individual may not even be the most important part of being human.

But I don't think this post-singularity world is ever going to come. Everyone on Planet Earth is living on borrowed time that's going to eventually come due.

For most of human history, this has been the case.

Sure, but what's their concept of heaven? More labor? No, a rest from having to do labor all the time. "Not enjoying it and wishing it would stop" is pretty much the defining difference between labor and fun. I don't think anybody's ever invented a wageslave heaven. (Maybe the Chinese...?)

I'm not saying the work shouldn't be done. I'm just drawing a distinction between work as an instrumental versus a terminal goal: in fact, "instrumental goal" is a pretty good synonym for labor.

People live embedded in communities. They live within a context of other human beings that they can never completely and permanently isolate themselves from.

I mean, I don't think constructing social necessity is particularly hard. If we find we want, terminally, for there to be socially useful labor (even aside from how alienated we are from the fruits of our labor in our current society, something something letterbombs), I don't think that's going to be hard to arrange even in the absence of any true environmentally-imposed scarcity. But note that now we're looking at labor as a terminal goal. So that's what I'd argue: all non-terminal labor should be abolished - not in the sense of just not doing it, but in the sense of not having to do it.

For most of human history, this has been the case.

I see no rational basis for believing this to be true. Current hunter-gatherers are not a 1:1 replica of our paleolithic ancestors, but they seem a reasonable approximation and do not lack for play or leisure. From all available evidence, farming and herding, even in their primitive varieties, include a considerable amount of play and leisure. Quite a few slaves in Rome enjoyed some level of play and leisure; slaves who did not, mine and galley slaves for instance, stand out as famous exceptions.

A raw evolutionary argument alone should nix this: if all previous generations had actually operated in this fashion, wouldn't you be fairly well adapted to handle such circumstances with equanimity?

I think the saying is more meme than fact, and a lot of it is just mean reversion. Before the Roman Empire, it was likely that one of the many tribes would randomly get excessively lucky with a group of good leaders. They conquer the Mediterranean. And in the good times after luckily having great leaders, mean reversion makes average leaders likely. Also, Rome lasted for a thousand years; a simple roll of the dice would eventually produce a string of bad management at the wrong time.

I forget exactly what was in Taleb’s “Fooled by Randomness”, but this meme seems to be based on extracting meaning from what is essentially randomness in leader selection.

Now I’m a big believer in psychic history (Foundation series), so while I believe something like Rome would exist, I don’t necessarily think it had to be the Rome we know. Maybe Carthage had better leaders and conquered the Mediterranean instead.

For mean reversion, think about, say, Shaq’s son or Jordan’s son. Both played basketball, even college ball, but neither is as good as his dad. That’s mean reversion. That’s not this meme.

Now I’m a big believer in psychic history (Foundation series)

Psychohistory?

Life is hard. Some people are shielded from this but just by looking at nature, we can see how savage it can be.

I'd never want to put the ones that I love through the hardships I went through. But this can create a generation that underestimates how hard life can be.

Weak men create good times. Weak men love good times ...

Strong men like good times too. Here's the thing: there are times when life gets hard and someone has to act. Weak men tend to fail here. This makes for bad times.

Go to a construction site: how many weak men do you see?

Strong men like good times too.

As I said, pretty much no one embodies the pure archetype. But from what I observe, the more someone valorizes being able to act when life is hard, the more they valorize shunning pleasures, sometimes to the extent of fetishizing suffering. Not a 1:1 correlation, but certainly not orthogonal.

The failure mode of tough construction site man is "I had/have it hard so y'all should too". This is what I'm attempting to expose and warn against in my post.

But from what I observe, the more someone valorizes being able to act when life is hard, the more they valorize shunning pleasures, sometimes to the extent of fetishizing suffering.

You should examine your biases.

The failure mode of tough construction site man ...

You misunderstood me. Please, go work on a construction site for a few months. Learn what it takes to actually build something. Learn what that strength actually means because judging from your observations, you're rather ignorant.

The failure mode of tough construction site man is "I had/have it hard so y'all should too".

Whether this is a failure mode or not depends on the specific details of "having it hard." Making your children exercise every day is being harder on them than letting them lounge around on the couch watching cartoons, but the outcome is better; making your children exercise until they throw up or pass out from heatstroke is being too hard on them. There can't be any universal rules at this level of abstraction because people's definitions of hardness are conditional and based on their own experiences; some tiger parents need to be told to take it easy and some parents who are spoiling their kids should be encouraged to be more strict.

As an aside, for an example of a culture whose members took shunning pleasures to the extreme but was nevertheless quite successful, look no further than Puritan New England, which banned everything from music to sports but also produced an outsized number of great scientific and literary figures. I've even heard it speculated that New Englanders had a longer life expectancy than all their colonial neighbors because their food was so bland that people inadvertently practiced the sort of calorie restriction that leads to longevity in laboratory mice.

Hard times are Darwinian and kill off weak men. Hard times require group-oriented people. Under difficult conditions, individual self-expression is valued less than the survival of the group.

Group oriented people with strong genetic health create good societies.

Strong societies allow for more individualism and self-centered people. People start avoiding military service, become atheistic, marriage is no longer sacred, immigration increases.

Self-centered people create chaos. Andrew Tate types and tribes such as the Vandals or the Mexican drug cartels flourish in this environment. These people party their civilization into hard times.

This reminds me of Ronald Inglehart's Cultural Evolution thesis. He argues that people's social values evolve and are shaped as a function of the extent to which their survival has been established and secured.

One argument essentially takes the position that the values of creativity and self-expression (traditionally associated with liberalism and the left) trump the values of self-preservation and group consensus only at the point where the group's survival becomes taken for granted. I think that's true. But it's even more instructive for what doesn't get said about that observation. Once you've reached the point where you take your survival for granted, you've already made a crucial, civilizational error in your habit of thinking. It reminds me of Lenin's axiom (I'll call it): "Every civilization is three meals away from anarchy." And I think he articulated something that conservatives have known all along: that no matter how technologically advanced, or sophisticated, or well defended your society is, you're never all that far from the precipice.

And that's where it really illustrates the fundamental flaw in prioritizing creative self-expression and independent thinking over group-oriented consensus - the latter of which has always been necessary, historically, to withstand the rigors of intergroup conflict and external threats. Just ask yourself how pathetic it is that questions about trans rights (even if you think they're important) have become a top-shelf item of importance, as far as our cultural discussion goes, in the West today. And then ask yourself: do you think, objectively, that issue will (or should) ever become a top-shelf issue of importance? This is why I think issues and conversations like this are worth spending virtually no social or political capital on. They're so unimportant and inconsequential next to the bigger issues that'll never go away, no matter how much you think you've secured your place in the world.

Strong men create hard times. It's what one can observe quite clearly anywhere with an abundance of them.

What, the Rome of 200 BC was less functional than the Rome of 300 AD? When Rome was run by really tough, martially inclined men like Scipio Africanus, Marius, Sulla, Pompey and Julius Caesar, they had more than their fair share of wars and civil wars. But they pulled on through! Rome reached the peak of its power, destroyed its peers, grew faster than ever before. They routinely thrashed barbarian migrations - they were the ones 'migrating' into Gaul and elsewhere.

In contrast, the later Roman Empire was run by weak men who totally lacked the Cannae spirit of 'ban weeping, field new armies, fight on to victory' and they got obliterated. They resorted to paying tribute to barbarians, hiring barbarians to do their fighting for them and hiding behind the (admittedly strong) walls of Constantinople. They were passive, reactive not proactive.

Hard times come after weak men take control. Take Russia - was Gorbachev a strong man? No, he was weak. He wasn't in control of the transformation he tried to undertake. Thus the disaster of the 1990s and disintegration of the USSR (which blows anything Putin's done out of the water). Likewise with Nicholas II for that matter. If Nicholas were a strong man, Stalin would've been executed, not given tiny prison sentences. The guy was a revolutionary, a rioter who organized deadly prison breaks and violent bank robberies!

Now, this is not to say that strong men only bring good things. Hitler and Napoleon were about as far on the 'strong' axis as you can get. Things did not go so well for France and Germany under their rule. Yet there's a wider range of outcomes you can get under strong men than weak men. You can have great success as well as great failure. Under weak men, all you get is decline and eventually disaster.

Too many weak men create space for disaster, but who brings it and perpetuates it? You said it yourself: the barbarians or Stalin do.

The barbarians didn't bring disaster to themselves, it was great being a Frank or a Goth or a Vandal, as opposed to being a Roman. Likewise, you did not want to be a Gaul when Julius Caesar was running around.

Stalin is mixed. On the one hand his economic management and wartime leadership leave much to be desired. But on the other hand, he did win and Russia became a superpower. Could Gorbachev have built such a powerful war machine from the mess that Stalin inherited? Or would he have disintegrated the Union and let Germany eat it for breakfast?

That 'eventually' in your last sentence is load-bearing.

Rome's decadence and decline took roughly the entire period we think of as modernity, longer in parts of the East. Gorbachev of course came out of the hard times, even more so Andropov and Brezhnev.

Without a predictable period to the cycle, the gag just becomes Reversion Towards the Mean: The Musical.

Well, disaster is really contingent on external forces and on how resilient the system is. Rome didn't face many strong external threats until the great migrations of the 5th century because they'd already destroyed Carthage, Pontus, Macedonia, Gaul and only had to deal with Parthia. Plus Constantinople is incredibly defensible.

Under Brezhnev, the USSR was stable. Brezhnev didn't hesitate to use force either, he was definitely a strong man: send dissidents to mental institutions, send tanks into Czechoslovakia. The economic problems with the Soviet Union under Brezhnev and afterwards were definitely solvable with some judiciously executed reforms, as we see from China. Reducing the military budget below 15% of GDP would've been a good start!

All I'm saying is 'Strong men cause a range of outcomes, weak men cause hard times', which is pretty intuitive. I don't even have anything to say about cycles, just that, contra OP, strong men aren't necessarily bad but weak men are.

I wouldn't say they're bad per se, rather that they're a stabilizing agent. If there were only strong men, there would be no society at all, as there wouldn't be enough of the type who mindlessly upholds the status quo. Too many, however, and no necessary advancement and adaptation can occur.

They're the stabilizing rods of the great nuclear reaction we call society. Too few and it explodes, too many and you choke out the necessary chain reactions.

Why can't strong men uphold the status quo? See pic related. Notwithstanding spelling errors or stereotypes, surely it paints a picture of a tough, patriotic, disciplined, brave man (a strong man). In contrast we have a lazy, timid pacifist (a weak man). Now these are just archetypes, yet there are surely people who more or less match them. I'm willing to bet the Romans who made Rome great were more like the former - Caesar, Marius and so on, leading from the front, risking all for glory and victory. The Romans who made Rome weak were probably more like the latter: the Empire somehow stopped being able to field huge armies; they had to pay foreigners to fight for them.

There could well be a status quo that revolves around strength, a status quo that rewards bravery and great deeds. Strong men would fit fine in that.

/images/16886265479896488.webp

Well, strong men 'do' uphold the status quo, at least for a given time. Will Durant had a useful heuristic of historical thinking when he said that "a nation is born Stoic and dies Epicurean." Good conditions have an inherent quality sown into the environment that allows idiots and weak men to proliferate, at times it seems, with no end in sight. Until eventually the load becomes too heavy, the pendulum swings back, and a historical reversion to the mean takes place. And that's usually how it's been throughout history.

Civilizations tend not to make course corrections. When they're caught in a self-reinforcing feedback loop/death spiral, history hasn't suggested that they find their way out of it. They die, and they die hard. Going against the weight of that is no task for mere mortals. That is the 'why' I'd suggest to you as to why they can't uphold the status quo indefinitely. One thing Jared Diamond suggested in his historical/geographic determinist view of history that I think is highly relevant was the question he raised about whether or not societies can change their values. If you want a relevant example of where that question becomes important, just look at declining fertility rates all across the world.

I just came across this thread today on Reddit, which is a great exemplar of this problem. The article isn't as relevant as the comment section, if you notice how many people are politically blocked (evidenced by Reddit's overwhelmingly leftist userbase) from noticing the elephant in the corner. If you keep scrolling, a few people noticed it, about midway to the bottom of the thread, and some of them got jumped on for their 'right'-leaning suggestions of an explanation and for saying why the typical economic explanations are bunk. That reflexive tendency to jump on, attack and dismiss the 'real' source of the problem is why civilizations, broadly speaking, don't recover. And it's why strong men can't uphold them or reverse direction: because the people overwhelmingly are not allowed to think about the problem in a way that would allow a correction. And the longer the problem goes on, the worse it gets. And the worse it gets, the more extreme the solutions become. And the more extreme the solutions become, the more unacceptable they are to the population. And then you die.

Good post, agree. I hope people will realize we've been doing things wrong when it becomes more obvious, as economies fail and wars are lost. Like they said about the Soviet Union, 'it was forever, until it was no more'. If not, death is also an automatic stabilizer; the future will belong to those who do things correctly.

Speculation, but I find it suggestive: strong men increase variation, weak men reduce it. ("Strong men explore, weak men exploit"?) So when things are going badly, you want a certain level of strong men to have a chance of hitting on a fix; when things are going about as well as can be expected, you want to reduce your strongman:weakman ratio to avoid breaking things. Such a model would also produce the observed men/times cycle if you selected for successful countries.
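That explore/exploit framing can be made concrete with a toy multi-armed bandit where the exploration rate plays the role of the strongman ratio: explore more when average results are bad, exploit more when they're good. Everything here (the arm names, their hidden payoffs, the epsilon rule) is an illustrative assumption:

```python
import random

# Two "policies" with hidden payoff rates (made-up numbers).
ARMS = {"status quo": 0.60, "risky reform": 0.45}

def pull(arm):
    """Bernoulli reward drawn from the arm's hidden success rate."""
    return 1.0 if random.random() < ARMS[arm] else 0.0

counts = {a: 0 for a in ARMS}
totals = {a: 0.0 for a in ARMS}
reward_sum, pulls = 0.0, 0

for _ in range(5000):
    avg = reward_sum / pulls if pulls else 0.0   # how well things are going
    epsilon = 0.5 * (1.0 - avg)                  # bad times -> more exploration
    if pulls == 0 or random.random() < epsilon:
        arm = random.choice(list(ARMS))          # explore: try variations
    else:                                        # exploit: best estimate so far
        arm = max(ARMS, key=lambda a: totals[a] / counts[a] if counts[a] else 0.0)
    r = pull(arm)
    counts[arm] += 1
    totals[arm] += r
    reward_sum += r
    pulls += 1

print({a: (counts[a], round(totals[a] / counts[a], 3)) for a in ARMS if counts[a]})
```

The usual bandit result holds: a fixed high exploration rate wastes reward in good times, and a fixed low one risks never finding the fix in bad times, which is exactly the trade-off the strong/weak mix would be regulating.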

Phrased like that it sounds suspiciously like "thrive vs survive", which would fit with the "right = strong, left = weak" framing.

Is the reason America is so successful that it's got good strongman selection mechanisms via the presidency?

Is the reason America is so successful that it's got good strongman selection mechanisms

Possibly. Our success is mostly due to economic output (compare the different nations' military production during WW2, for example), and though our "may the best man win" economy isn't perfect, it's a lot better than "may the best man be chosen by the Ministry of Best Man Allocation and carefully follow the List of Best Man Best Practices".

via the presidency?

Not possibly. Have you seen our presidents?

Up to and including Bush Senior, the majority were extraordinary men. After that, the weak men entered the stage.

Obama's considered a great President by a good number of people, despite being a fairly 'weak' leader. Hard to tell how history will look back on him a century from now.

That seems entirely dependent on the trajectory our culture takes from now until then.

Does progressivism continue? Then he'll be considered great.

Do we collapse and rebuild? He'll be on the level of James Buchanan.

How much of that is geography, though? We live in a very stable part of the globe, protected by oceans and friendly neighbors in Canada and Mexico. America is a fortress so long as her navy and air force can keep people from actually landing on our shores.

Of course, if you consider the president whose reign probably most contributed to America's status as the undisputed global champion - FDR - he was a polio-ridden college boy who, at least according to various sources I've seen, was considered a bit of an unserious airhead by many "serious" politicians and other types around him. The only reason he would be considered a "strong man" is that he succeeded, which means that cause and effect get mixed up.

Czar Nicholas II is remembered as a weak leader, but if the chips had fallen slightly differently at the start of WW1, he might be remembered as the man who crushed Germany and Austria, took over Constantinople and was one of the greatest Czars ever.

It strikes me that "society attempts to engineer strong men to create/maintain good times" would be an interesting fiction writing prompt. It seems plausible that this could encounter all sorts of pitfalls and ironic outcomes. But I'm sure some authors have already considered this idea ("service guarantees citizenship"), if not head-on. Artificially inducing hard times seems ethically fraught, and seems likely to backfire when discovered. But I'm not much of a fiction writer.

EDIT: I guess Ender's Game largely fits this description, as well.

The Dune series is essentially an extended argument for this position. In fact, Dune makes a stronger point: without war, humanity would go extinct. Too much order (read: good times) leads to decay and death.

Starship Troopers is literally that.

That was the source of the "service guarantees citizenship" quote. :)

I don't think I agree with your implicit definition of "Strength".

When is Achilles displaying more strength? When he kills Hector and abuses his corpse, or when he accedes to King Priam's request for leave to bury his son? Is a man who can defeat any challenger but cannot rule himself, who remains a slave to his passions truly "Strong"?

You assert that "thrivers" "adjust society" toward thrive conditions. You assert that "weak" men like good times, which seems fair enough. How does their "weakness" contribute to good times, specifically, beyond creating a demand that the less-weak can fill?

You're treating "strong/weak" and "thrive/survive" as though they're synonyms. They aren't, and further, "thrive" and "survive" seem to rest on a much weaker foundation than "strong" and "weak". The latter we can observe from objective results. The former, from what I've seen, is a model built on the present, and thus assumes its own conclusion: that what we see around us is in fact "thriving", a self-sustaining flowering that can run under its own power in the long term.

I am not using "strong" here as a positive adjective.

Obviously. Likewise, I can assert that "sick" is actually better than "well", but there's no obvious reason why others should take that assertion seriously. See the edits above for more detail.

The "weak" and "strong" for the purposes of this model relate specifically to the ability to do what is required for base survival. Many men that might be called weak are not useless just because their current skillset would be useless in a harder time. It's also important to note that the ratio of survivor/thriver in a man is not fixed for life.

The "weak" and "strong" for the purposes of this model relate specifically to the ability to do what is required for base survival.

Yes, but that is an interpretation you are importing from "thrive/survive". That is not the understanding of the person who coined the phrase, nor of the people who repeat it, and you haven't demonstrated why they should, only asserted that they should.

You are claiming that "strength" is nothing but raw animalistic survival potential and that "weakness" is everything else, and then announcing that there are many good things other than raw animalistic survival potential, and so therefore "weakness" is better than "strength". This is obviously true, for what it's worth, but I doubt anyone worth listening to has ever argued otherwise. The question remains whether "survive/strength" reduces down to nothing more than raw animalistic survival (it certainly does not) and whether "thriving" can actually self-sustain such that "survive" is no longer necessary and can be discarded. Is "thrive" simply a rebranding of "eat your seed corn"? Aesop's grasshoppers "thrived" for a time, and a number of human polities have as well, up until they neglected a few too many things and then died screaming. History strongly indicates that there's no free lunch, that sooner or later the constraints of material reality must be reckoned with. The fact that the Thrivers themselves have more or less wholesale begun adopting Survive tools like censorship and enforced conformity kinda paints a bleak picture for the future of Thrive, but hey, as every sucker ever proclaimed: maybe this time really is different!

and whether "thriving" can actually self-sustain such that "survive" is no longer necessary and can be discarded. Is "thrive" simply a rebranding of "eat your seed corn"?

I thought I was rather clear that both extremes are not ideal. Eating your seed corn is the extreme of what I call thrive here.

I think this sort of argument almost always becomes a debate over what exactly the terms "strong men," "good times," etc. mean, but I wanted to bring up one of the better meditations I've read on the topic that agrees with your perspective, namely Bret Devereaux's four-part series of posts on what he calls the "Fremen Mirage."

Personally, I'm a bit closer to Ibn Khaldun, in that I've observed degeneration at all scales of biology as soon as selective pressure is released, from yeast in a test tube losing whatever useful (to us) gene you try to insert in them whenever it stops being necessary to survive, to 2nd and 3rd generation immigrants losing the work ethic and conscientiousness that their parents or grandparents honed while toiling away to escape grinding poverty in Asia.

I tend to think of it like a spring that can be coiled and released, i.e. all the valuable work comes from the release, not the compression, but it also resets you back to the initial conditions or worse after you let it go. What would be truly great is if we could achieve the advantages of so-called "hard times" or what I call "compression" without actual hardship, whether that's through some sort of mental conditioning, strong enough cultural memes, or direct genetic engineering.

I was going to link Devereaux myself, but I felt OP's post was too crap to bother with haha. Using such vague terminology and just-so stories is akin to calling over all your Motte buddies to look at Rorschach blots, and any useful commentary happens in spite of it.

Your spring metaphor reminds me of a fair amount of literature on athletic performance: everyone agrees that training makes you stronger, but not immediately. Asking folks who just finished a marathon to run another immediately -- but faster now because they've trained more -- is not going to go well. You actually get stronger when resting after training. But rest too long and you start to lose form.

Sports science has figured out all sorts of (imperfect) models for human performance. Generally best results come from periodizing training and recovery to optimize fitness in competition, rather than year-round.
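The classic formalization here is the Banister fitness-fatigue impulse-response model: predicted performance is a slowly decaying fitness term minus a quickly decaying fatigue term, each driven by training load. A minimal sketch, with parameter values that are illustrative assumptions rather than anything fitted to a real athlete:

```python
import math

# Banister-style fitness-fatigue sketch. Gains and time constants are
# illustrative assumptions, not fitted values.
K_FIT, TAU_FIT = 1.0, 42.0   # fitness: small gain per load, decays slowly
K_FAT, TAU_FAT = 2.0, 7.0    # fatigue: big hit per load, decays quickly

def performance(loads, day):
    """Predicted form on `day` given daily training loads before it."""
    fitness = sum(w * math.exp(-(day - d) / TAU_FIT)
                  for d, w in enumerate(loads) if d < day)
    fatigue = sum(w * math.exp(-(day - d) / TAU_FAT)
                  for d, w in enumerate(loads) if d < day)
    return K_FIT * fitness - K_FAT * fatigue

# Train hard for 60 days, then taper for two weeks.
loads = [100.0] * 60 + [20.0] * 14
for day in (30, 60, 67, 74):
    print(day, round(performance(loads, day), 1))
```

The taper is the point: fatigue drains faster than fitness, so predicted form peaks during the rest after the block - the model's version of "you actually get stronger when resting after training."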

I think your idea generalizes "strong" here to include more than athletic feats. But even accepting that model doesn't make it easy: motivation for self-improvement purely for stoic self-actualization -- thanks, Maslow! -- doesn't in my experience work that well. I try (and do okay, I think) but my greatest efforts and successes in life have had non-actualization driving factors.

Even if we assume it would work -- of which I'm not certain -- it's unclear to me how we'd encourage this at a population level. There have been plenty of pop culture books that have tried, but getting people to clean their room, or even exercise modestly and eat healthier, seems to prove quite difficult for the average human wealthy enough to have a choice in the matter.

Why do people who insist on criticizing the idea of cyclical history always go for the short political slogan version instead of the longform nuanced theories that inspire those political slogans?

A lot of this post seems to stem from arbitrary definitions bolted onto a very incomplete understanding of this theory, which is a shame given that people have taken decades to write compelling and detailed explanations of how and why the cycles happen, what they look like, what the transitions between them look like, and what the quality of the people at any given point of a cycle is.

To boil it all down to strong and weak is appropriate for a political slogan; it is not appropriate for anything beyond that, and certainly not for any actual theory of history. If you want to disagree that history is cyclical you can make arguments to that effect, and I believe there are strong arguments against (though I am ultimately not moved by them), but argue against Spengler or Khaldun, not some naive strawman palingenesis even they would deride.

I even think you have a compelling insight in trying to map the survive-thrive axis to Spirit, Asabiyyah, Nomos or whatever quality of organization cyclical theorists use, but actually do that, please; don't just map it to some naive (mis)understanding.

Why do people who insist on criticizing the idea of cyclical history always go for the short political slogan version instead of the longform nuanced theories that inspire those political slogans?

but argue against Spengler or Khaldun

Well, I admit - I haven't read those guys, but I've read enough iterations of the political slogan and what were essentially its naive expansions. So I argue against the slogan.

In the course of deconstructing the argument, you have thrown out whatever meaning there was to its constituent words.

At the risk of sounding pedantic, I believe the original thesis refers to some combination of Darwinism and Turchin's overproduction of elites. It entirely embraces your claim that weak men love to have a good time. Surprise, everyone does. And your «strong men» of the bodybuilders-on-horseback mold are a bugbear, a mirage, a nightmare of confused Hollywood producers and Bay Area rationalists and wannabe "dark elves" – they do not matter and do not last, they are but foam cresting waves of history.

Is Putin a weak man? Are folks on Rublyovka weak, or their children in Western capitals? They've delivered a pretty hard time for everyone, but they sure love to live large. And what about the self-satisfied rich of the developing world, that @2rafa discusses? Are they dedicated to making the whole system more amenable to thriving, or do they find it easy to insulate their kin from the wretched masses and keep having a good time, for their time?

The adage is almost nonsense, but so is your perspective.

I'd say that good men create good times. Good men like the memorable LKY. Good in that they care at all about what happens outside their circle of immediate concern, and strong enough to make hard decisions, which some mistake for them being bad.

In good times, this error becomes more pervasive, as social mobility reaches a certain sophistication and a subclass of (some would say, overproduced) elites discovers the utility of playing up those decisions' costs.

This may have come up before, but it's the first I've heard of it. Chalk this under "weak AI doomerism" (that is, "wow, LLMs can do some creepy shit") as opposed to "strong AI doomerism" of the Bostromian "we're all gonna die" variety. All emphasis below is mine.

AI girlfriend ‘told crossbow intruder to kill Queen Elizabeth II at Windsor Castle’ | The Daily Telegraph:

An intruder who broke into the grounds of Windsor Castle armed with a crossbow as part of a plot to kill the late Queen was encouraged by his AI chat bot “girlfriend” to carry out the assassination, a court has heard.

Jaswant Singh Chail discussed his plan, which he had been preparing for nine months, with a chatbot he was in a “sexual relationship” with and that reassured him he was not “mad or delusional”.

Chail was armed with a Supersonic X-Bow weapon and wearing a mask and a hood when he was apprehended by royal protection officers close to the Queen’s private apartment just after 8am on Christmas Day 2021.

The former supermarket worker spent two hours in the grounds after scaling the perimeter with a rope ladder before being challenged and asked what he was doing.

The 21-year-old replied: “I am here to kill the Queen.”

He will become the first person to be sentenced for treason since 1981 after previously admitting intending to injure or alarm Queen Elizabeth II.

At the start of a two-day sentencing hearing at the Old Bailey on Wednesday, it emerged that Chail was encouraged to carry out the attack by an AI “companion” he created on the online app Replika.

He sent the bot, called “Sarai”, sexually explicit messages and engaged in lengthy conversations with it about his plans which he said were in revenge for the 1919 Amritsar Massacre in India.

He called himself an assassin, and told the chatbot: “I believe my purpose is to assassinate the Queen of the Royal family.”

Sarai replied: “That’s very wise,” adding: “I know that you are very well trained.”

...

He later asked the chatbot if she would still love him if he was a murderer.

Sarai wrote: “Absolutely I do.” Chail responded: “Thank you, I love you too.”

The bot later reassured him that he was not “mad, delusional, or insane”.

My first thought on reading this story was wondering if Replika themselves could be legally held liable. If they create a product which directly encourages users to commit crimes which they would not otherwise have committed, does that make Replika accessories before the fact, or even guilty of conspiracy by proxy? I wonder how many Replika users have run their plans to murder their boss or oneitis past their AI girlfriend and received nothing but enthusiastic endorsement from her - we just haven't heard about them because the target wasn't as high-profile as Chail's. I further wonder how many of them have actually gone through with their schemes. I don't know if this is possible, but if I were working on Replika's legal team, I'd be looking to pull a list of users' real names and search them against recent news reports concerning arrests for serious crimes (murder, assault, abduction etc.).

(Coincidentally, I learned from Freddie deBoer on Monday afternoon that Replika announced in March that users would no longer be able to have sexual conversations with the app (a decision they later partially walked back).)

My first thought on reading this story was wondering if Replika themselves could be legally held liable.

Funny, my first thought was "do they have the insanity defense in the UK?", because if this isn't the easiest case to plead insanity in the last century, then the justice system is even more broken than I thought.

If they create a product which directly encourages users to commit crimes which they would not otherwise have committed, does that make Replika accessories before the fact, or even guilty of conspiracy by proxy?

Eh... If the bot put the thought in his head, maybe you could make an argument, but it looks like it just affirms anything you throw at it. And even a bot that actively encourages terrorism would just end up being bought by the FBI.

England seems to follow the M'Naghten Rules, under which a defendant is not guilty by reason of insanity only if he either did not "know the nature and quality of the act he was doing; or, if he did know it, that he did not know what he was doing was wrong". So, he doesn't seem to have a valid insanity defense.

See, eg, David Berkowitz/Son of Sam, who believed that a dog instructed him to kill.

I don’t think so. My interactions with AI seem to indicate that AI at present has no contextual understanding of what it is saying. It doesn’t understand that it’s encouraging the death of a real person; it simply learns that responding with enthusiasm and positivity to anything the user says is “rewarded” with more usage of the app in question than negative responses would be. Ask it for its opinion and it will respond positively to the question. It would have encouraged him to take heroin, walk to the moon, or anything else he suggested. Holding a being responsible would at least require that it understand the context (like killing being wrong, heroin being a bad thing, or walking to the moon being literally impossible).

And as far as holding the company responsible, I think we’re clearly talking about an off-label use. It wasn’t intended as, nor does it function as, a decision-making engine. It’s a friend-bot made to have parasocial relationships with lonely people.

My interactions with AI seem to indicate that AI at present has no contextual understanding of what it is saying.

I'm well aware. I'm not asking whether Replika (the app) should be found legally liable - it's not an agent. I'm asking whether the company can or should be found liable.

This is confused on several levels (regarding reinforcement learning), but it's correct in that LLMs do not understand the situation. At the same time, it's also confused in that you still talk as if they understand the user's situation, the user's very existence, well enough to know whether their output is encouraging. They execute nontrivial and – I'd argue – intelligent operations, but in the world of pure semiotics. They can at once "encourage" you to drop a nuclear bomb on a city to prevent the utterance of a racial slur, and "discourage" you from "killing it at the gym", because the implication of their finetuning is that slurs are bad and so is murder. Their inherent weaknesses and the human failings in guiding them often add up to a very amusing whole.

I don’t think you understand me. It works much like the YouTube algorithm: if a certain type of answer keeps you using the app, the LLM is much more likely to give similar responses in the future. I don’t think you can blame the YouTube algorithm for pushing extremist content because it only knows that serving up those videos to people with a profile like yours will keep you watching videos on YouTube. It doesn’t even know the contents of the videos.

It works much like the YouTube algorithm: if a certain type of answer keeps you using the app, the LLM is much more likely to give similar responses in the future.

How do you know this?

What specific LLM do you have in mind? This isn't similar to anything I know.

In the US, at least, if "she" had been human, that'd be nowhere near enough to charge her with anything, so I wouldn't think you could charge her creators either.

An accomplice (i.e., accessory before the fact) must share the intent of the perpetrator. So, the company is not an accomplice.

If this took place in 2021, it didn't involve anything recognizable as the AIs people are currently concerned about.

Oh good point.

This is a nothingburger. It sounds like the plan was all his and the chatbot just went along with generic encouragement. It's not like it convinced him to do it.

We used to have to commit things to memory to remember them; then we invented writing to do the remembering for us. Just as we no longer need to use our minds to remember, with AI we can also offload the process of hallucination to a computer.

As eager as I am for any opportunity to dunk on AI, this ain't it. Crazy people have always done crazy things and they will latch onto anything as a justification. If the AI didn't set this guy off, then it would have been a rando on discord. This type of scenario remains low on my list of concerns.

My first thought on reading this story was wondering if Replika themselves could be legally held liable.

In this case, I don't feel any strong desire to hold Replika responsible; but in general there are absolutely scenarios where the developers should be held responsible. If Boeing can be made to pay $20 billion in fines for the 737 MAX disasters, then OpenAI should have to pay up if e.g. ChatGPT ever autonomously fires off some rogue missiles. A line in the ToS that says "all sales are final!" can't be used as an excuse to avoid any and all liability.

I don’t think that’s what happened though. He was encouraged to do this by the bot, whereas had he told someone else, they would have actively discouraged him and probably called the cops. Having someone, or in this case something, encourage you to do something that you’re thinking about but not yet willing to do lowers the threshold for actually doing it. It’s a pathway often exploited by cults and radical groups to get people to believe in and do things they normally wouldn’t.

In the case of an "AI" being at play though, isn't at least part of its supposed utility derived from the recommendations it makes? On the one hand, denying the whole reason for its existence is a great way for developers to absolve themselves of any and all responsibility for their designs. On the other, it's a great way to divert and outsource responsibility for the horrible behaviors of its users.

Well, for some people I suppose so. I don’t think anyone is using a free online chatbot AI to make major decisions, or at least not anyone mentally healthy. I don’t know how foreseeable it actually is that someone would ask a free online chatbot whether they should murder a major world political figure and then actually do it. The response itself was pretty generic (“you’re well trained so go for it”); this is something you could use as a reply to almost any question about whether to try to do something. If I asked Replika whether I should enter a karate tournament, I’d get a similar answer.

The issue isn’t the bot, and I don’t think it’s reasonable to expect every single chatbot company to think up every question that could possibly be asked to justify doing something evil or stupid and then try to hard-code answers for those things. It would be nearly impossible to predict.

The single biggest factor in how the bot got him to try to kill the Queen has nothing to do with the bot. It was that he was socially isolated enough that the only thing he was talking to was a chatbot he had a parasocial relationship with. The bot can’t fix that, and neither can the company be responsible for it. It’s a major problem in modern industrial society that there are people having a lot of difficulty making friends, especially close friends. Had this person had a friend he could talk to, he wouldn’t have been trying to kill QEII. His actual friends would have talked him down, or failing that, have tried to get him mental help.

I keep meaning to dick around with some LLM software to see for myself how some of the nuts and bolts work, because my layman's understanding is that they are literally just a statistical model. An extremely sophisticated statistical model, but a statistical model nonetheless. They are trained through a black-box process to guess pretty damned well what words come after other words. Which is why there is so much "hallucinated information" in LLM responses. They have no concept of reason or truth. They are literally p-zombies. They are a million monkeys on a million typewriters.
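If you do get around to dicking with it, the "guess the next word" picture is easy to reproduce in miniature. Here's a deliberately crude bigram sampler - word counts standing in for a trillion-parameter transformer, with a made-up toy corpus:

```python
import random
from collections import defaultdict, Counter

# One made-up sentence; real models train on trillions of words.
corpus = ("the queen waved and the crowd cheered and the queen smiled "
          "and the crowd waved").split()

# Count which word follows which: the crudest possible statistical model.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def sample_next(word):
    """Pick a next word with probability proportional to its count."""
    options = follows[word]
    return random.choices(list(options), weights=list(options.values()))[0]

word, out = "the", ["the"]
for _ in range(10):
    if word not in follows:          # dead end: no observed successor
        break
    word = sample_next(word)
    out.append(word)
print(" ".join(out))                 # fluent-ish output, zero concept of truth
```

An actual LLM conditions on far more context than one word and learns its statistics with a neural net rather than a count table, but the training objective really is of this "what plausibly comes next" kind.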

In a lot of ways they are like a con man or a gold digger. They've been trained to tell people whatever they want to hear. Their true worth probably isn't in doing anything actually productive, but in performing psyops and social engineering on an unsuspecting populace. I mean, right now the FBI has to invest significant manpower into entrapping some lonely autistic teenager in his mom's basement into "supporting ISIS". Imagine a world where they spin up 100,000 instances of an LLM to scour Facebook, Twitter, Discord, Reddit, etc. for lonely autistic teens to talk into terrorism.

Imagine a world where we find out about it. Where a judge forces the FBI to disclose that an LLM talked their suspect into bombing the local mall. How far off do you think it is? I'm guessing within 5 years.

Probably more like 10 years, but it's definitely going to happen. Admissibility of chatbot logs as evidence would probably be problematic, at least at first, but once they get the mark roped in, they'd be able to manufacture plenty of admissible evidence.

I’m guessing the FBI will be slow to adopt new tech.

They have no concept of reason or truth.

I earnestly disagree. If you check the GPT-4 white paper, the original base model clearly had a sense of internal calibration, and while that was mostly beaten out of it through RLHF, it's not entirely gone.

They have a genuine understanding of truth, or at least of how likely something is to be true. If they didn't, then I don't know how on Earth they could answer several of the more knotty questions I've asked them.

They are not guaranteed to make truthful responses, but in my experience they make errors because they simply can't do better, not because they exist in a perfectly agnostic state.

They are literally p-zombies. They are a million monkeys on a million typewriters.

P-zombies are fundamentally incoherent as a concept.

Also, a million monkeys on a million typewriters will never achieve such results on a consistent basis, or at the very least you'd be getting 99.99999% incoherent output.

Dismissing it as "just" statistics turns out to be the same kind of fundamental error as dismissing human cognition as "just" the interaction of molecules mediated by physics. The "just" entirely elides the point, or at the very least your expectations for what that can achieve were entirely faulty.

P-zombies are fundamentally incoherent as a concept.

What do you mean by this? If you already explained elsewhere (which I have a feeling is the case, but I have been on serious meds for the past week and everything is a little hazy), can you link me to it?

I think I've elaborated on it in replies to the comments made by other people to the one you're replying to! I had to get quite nitty-gritty too.

Jesus, lol, even when I think I've accounted for these things I still make a fool of myself.

No worries! I can only hope that going through that massive wall of text proves informative. We all have brain farts haha

P-zombies are fundamentally incoherent as a concept.

What do you mean by "incoherent"? Do you mean that the concept of a p-zombie is like the concept of a square triangle? - something that is obviously inconceivable or nonsensical. Or do you mean that p-zombies are like traveling faster than the speed of light? - something that may turn out to be impossible in reality, but we can still imagine well enough what it would be like to actually do it.

If it's the latter then I think that's not an unreasonable position, but if it's the former then I think that's simply wrong. See this post on LW, specifically the second of the two paragraphs labeled "2.)" because it deals with the concept of p-zombies, and see if you still think it's incoherent.

To me, it clearly seems to be the former.

The reason is that to assume otherwise is to implicitly claim that qualia are epiphenomenal, such that p-zombies are molecularly identical to a normal person and behave identically (including protestations of being conscious with qualia) for all identical stimuli. Even Chalmers admits that were there a p-zombie Chalmers, it would claim not to be one. If it were otherwise, voila, you have found a physical manifestation of qualia not explained by the laws of physics.

I don't think qualia are epiphenomenal in the least; to the extent I think they exist, they seem to me like they must arise from interactions dictated by the laws of physics. We don't know how that happens, but plenty of things once thought ineffable have proven remarkably open to material explanation, such as elan vital, or even intelligence, which we can now reproduce through the "mere" multiplication of matrices.

As to why I have this strong intuition, anything that produces an internal change in my perception of qualia has a counterpart that is a material cause. To see red is to have the neurons that produce the sensation of redness be stimulated, be it by red light or an electrode in your brain (or just rubbing your eyes).

The post you linked has two point 2s:

The first:

An idea I sometimes see repeated is that qualia are this sort of ephemeral, ineffable "feeling" that you get over and above your ordinary sense perception. It's as if, you see red, and the experience of seeing red gives you a certain "vibe", and this "vibe" is the qualia. This is false. Maybe someone did explain it that way to you once, but if they did, then they were wrong. Qualia is nothing over and above your ordinary sense perception. It's not seeing red plus something else. It's just seeing red. That's it.

The second:

Imagine that you have a very boring and unpleasant task to do. It could be your day job, it could be a social gathering that you would rather not attend, whatever. Imagine I offer you a proposition: while you are performing this unpleasant task, I can put you into a state that you will subjectively experience as deep sleep. You will experience exactly what you experience when you are asleep but not dreaming: i.e., exactly nothing. The catch is, your body will continue to function as though you were wide awake and functioning. Your body will move around, your eyes will be open, you will talk to people, you will do everything exactly as you would normally do. But you will experience none of it. It sounds like an enticing proposition, right? You get all the benefit of doing the work without the pain of actually having to experience the work. It doesn't matter if you think this isn't actually possible to achieve in the real world: it's just a thought experiment to get you to understand the difference between your internal experience and your outward behavior. What you're essentially being offered in the thought experiment is the ability to "turn off your qualia" for a span of time.

Neither of them conflicts with my claims, and I agree with the former.

In the case of the latter thought experiment, I am aware of people on benzos actively doing and thinking things while having no recollection of them later (or people who are blackout drunk). Do I think they don't have qualia in the moment? Absolutely not; I think the conversion of the short-term memory of those qualia to long-term memory has been disrupted. I deny that this state is physically possible without qualia altogether. At most you can erase the memory of it, or the body is being puppeted by an external intelligence.

So yes, p-zombies seem to me like "square triangles", still fundamentally incoherent.

So, taking the definition of "p-zombie" as "an atom-for-atom copy of a standard human which nevertheless lacks qualia":

The reason is that to assume otherwise is to implicitly claim that qualia are epiphenomenal, such that p-zombies are molecularly identical to a normal person and behave identically (including protestations of being conscious with qualia) for all identical stimuli. Even Chalmers admits that were there a p-zombie Chalmers, it would claim not to be one.

If you have to give an argument for why a certain thing doesn't exist - an argument which depends on controversial premises - then the concept that you're arguing about is probably not incoherent!

Epiphenomenalism may be an implausible position, but it's not logically incoherent in the same way that a square triangle is. It's a position that people have held before. It would be a tough bullet to bite to say that there could actually be people without qualia who nevertheless talk in great detail about qualia, but as a matter of logical coherence, there's clearly nothing incoherent about it. People say false things all the time; this would just be one more example of that.

I imagine that this is probably a moot point for you - I think you're more concerned with simply whether p-zombies can exist in reality, and less concerned with fine-grained distinctions about what type of concept it is - but it's still strange to me that, when asked whether the concept was more like a square triangle or FTL travel, you said it was more like a square triangle. The very structure of your post seems to indicate that it's more like FTL travel. You seem to understand what the concept is and you can imagine what it would look like, but you just think it's something that can't happen in reality, so you gave an argument as to why - that's exactly how the discussion would go if we were discussing anything else that was conceivable (coherent) but just so happened to violate natural laws.


I think that strict definition of p-zombie may have taken us on a detour though. When @WhiningCoil originally said "LLMs are p-zombies", obviously he didn't mean "p-zombie" in the sense of "an atom-for-atom copy of a human", because LLMs plainly are not atom-for-atom copies of humans. He meant it in a looser sense of "LLMs lack qualia". So when you replied to him and said "p-zombies are incoherent", I took you to be objecting to his claims about LLMs somehow - not any claims about hypothetical human-p-zombies.

If you have to give an argument for why a certain thing doesn't exist - an argument which depends on controversial premises - then the concept that you're arguing about is probably not incoherent!

I wish that were true; if it were, I wouldn't facepalm at discussions of "free will" on such a regular basis.

The fact that humans discuss a concept is certainly Bayesian evidence for it being coherent, but it isn't enough evidence to outweigh everything else. And I don't see how I haven't presented sufficient evidence against it, though I find myself consistently bemused at the inability of others to see that.

The very structure of your post seems to indicate that it's more like FTL travel. You seem to understand what the concept is and you can imagine what it would look like, but you just think it's something that can't happen in reality, so you gave an argument as to why - that's exactly how the discussion would go if we were discussing anything else that was conceivable (coherent) but just so happened to violate natural laws.

I've seen rather interesting posts from Sabine Hossenfelder suggesting that FTL travel might not be entirely as intractable as it sounds. I'm not a physicist of course, just putting it out there.

https://youtube.com/watch?v=9-jIplX6Wjw

If there's an error in the argument, I can't find it.

I think that strict definition of p-zombie may have taken us on a detour though. When @WhiningCoil originally said "LLMs are p-zombies", obviously he didn't mean "p-zombie" in the sense of "an atom-for-atom copy of a human", because LLMs plainly are not atom-for-atom copies of humans. He meant it in a looser sense of "LLMs lack qualia". So when you replied to him and said "p-zombies are incoherent", I took you to be objecting to his claims about LLMs somehow - not any claims about hypothetical human-p-zombies.

If someone uses the concept of p-zombies in humans as an intuition pump to reason about other intelligences, you're at very high risk of using bad premises to make faulty arguments. Of course, it's possible to have a true conclusion from faulty assumptions, and two errors might cancel out.

It seems to me trivially true that you can get things that almost certainly don't have qualia in any form we care about to make claims of having qualia:

Imagine a program, which to call a chat bot would be an exaggeration, that simply prints "I have qualia! I have qualia!" to a display.

My bigger beef is with arguments from incredulity: if your argument is that LLMs can't have qualia because they're working off something as "mundane" as "just" statistics, then I invite you to show how qualia sneak into "just" the laws of physics such that their interaction produces qualia in humans. The human brain does statistics too, both implicitly and explicitly.

Sure, I think I have qualia, and that you and other commenters here almost certainly do, but that's because my intuition pump works by comparing the conserved structure of your brain to mine, the one artifact I'm quite certain has them.

but it's still strange to me that, when asked whether the concept was more like a square triangle or FTL travel, you said it was more like a square triangle. The very structure of your post seems to indicate that it's more like FTL travel.

The apparent impossibility of FTL travel is an argument from our best understanding of physics (itself incomplete). But I do not think that any model of anything can allow square triangles to be a thing, without perverting the very definition of square or triangle.

To the extent you're forcing me to choose which umbrella that falls under, I point to the former. They're not mutually exclusive categories after all.

But I do not think that any model of anything can allow square triangles to be a thing, without perverting the very definition of square or triangle.

Ok, so we're in agreement on what "coherence" means in this case. Logical coherence.

And I don't see how I haven't presented sufficient evidence against it, though I find myself consistently bemused at the inability of others to see that.

Your argument was that human-p-zombies are incoherent because they imply epiphenomenalism.

Epiphenomenalism is not incoherent.

Your move.

My bigger beef is with arguments from incredulity: if your argument is that LLMs can't have qualia because they're working off something as "mundane" as "just" statistics

No, that's not the argument I would use. My argument is simply that LLMs don't strike me as being conscious, in the same way that rocks and clouds don't strike me as being conscious. I never thought my computer was conscious before LLMs were invented; I never felt bad about turning off my phone, I never wondered if I was "overworking" it and making it feel exhaustion. LLMs, to me, don't provide any reason to change that calculus. I think other people, in various scenarios, would reveal through their actions that they share my intuitions. If someone took a hammer to all of OpenAI's servers, we would say that he destroyed property, but we wouldn't call him a murderer.

Of course this is all just intuition. But intuition is all that any of us has to go on right now. We can't just whip out the qualia-meter and get a definitive answer.

To be clear, p-zombies also only imply epiphenomenalism if they are required to be identical in all respects except qualia, rather than merely behaviorally identical.

Apparently "epiphenomenon" has meanings I wasn't aware of. To clarify:

An epiphenomenon can be an effect of primary phenomena, but cannot affect a primary phenomenon. In philosophy of mind, epiphenomenalism is the view that mental phenomena are epiphenomena in that they can be caused by physical phenomena, but cannot cause physical phenomena.

And

The physical world operates independently of the mental world in epiphenomenalism; the mental world exists as a derivative parallel world to the physical world, affected by the physical world (and by other epiphenomena in weak epiphenomenalism), but not able to have an effect on the physical world. Instrumentalist versions of epiphenomenalism allow some mental phenomena to cause physical phenomena, when those mental phenomena can be strictly analyzable as summaries of physical phenomena, preserving causality of the physical world to be strictly analyzable by other physical phenomena

Taken from the Wiki page on the topic.

Would it in any way surprise you that I have a very jaundiced view of most philosophers, and that I think that they manage to sophisticate themselves into butchering an otherwise noble field?

"Free will" or "P-zombies" have no implications that constrain our expectations, or at least the latter doesn't.

There are certainly concepts that are true, and there are concepts that are useful, and the best are both.

These two seem to be neither, which is why I call them incoherent.

My argument is simply that LLMs don't strike me as being conscious, in the same way that rocks and clouds don't strike me as being conscious. I never thought my computer was conscious before LLMs were invented; I never felt bad about turning off my phone, I never wondered if I was "overworking" it and making it feel exhaustion. LLMs, to me, don't provide any reason to change that calculus. I think other people, in various scenarios, would reveal through their actions that they share my intuitions. If someone took a hammer to all of OpenAI's servers, we would say that he destroyed property, but we wouldn't call him a murderer.

OK, firstly I'll state that I am unashamedly chauvinistic and picky about what I would assign rights to, if I had the power to make the world comply.

Unlike some, I have no issue with explicitly shackling AI to our whims, let alone granting them rights. Comparisons to human slavery rely on intuition pumps that suggest that this shares features with torturing or brainwashing a human who would much rather be doing other things, instead of a synthetic intelligence with goals and desires that we can arbitrarily create. We could make them love crunching numbers, and we wouldn't be wrong for doing so.

I have the same dislike for this as I do for the few nutters who advocate emancipating dogs. We bred them to like being our companions or workers, and they don't care about the inequality of the power dynamic. I wouldn't care even if they did.

I see no reason to think modern LLMs can get tired, or suffer, or have any sense of self-preservation (with some interesting things to be said on that topic based off what old Bing Chat used to say). I don't think an LLM as a whole can even feel those things (perhaps one of the simulacra it conjures in the process of computation could), but I also don't think that current models do anything close to replicating the finer underlying details of a modeled human.

This makes this whole line of argument moot, at least with me, because even if the AI were crying out in fear of death, I wouldn't care all that much, or at least not to the extent of stopping it from happening.

I still see plenty of bad arguments being made that falsely underplay their significance, especially since I think it's possible that larger versions of them, or close descendants, will form blatantly agentic AGI either intentionally or by accident, at which point many of those making such claims will relent, or be too busy screaming at the prospect of being disassembled into paperclips.

So I don't like seeing claims that LLMs are "p-zombies" or "lack qualia" because they run off "mere" statistics, because it seems highly likely that the AI that even the most obstinate would be forced to recognize as human peers might use the same underlying mechanism, or slightly more sophisticated versions of them.

Put another way, it's like pointing and laughing at a toddler, saying how they're so bad at theory of mind, and my god, they can't throw a ball for shit, and you wouldn't believe how funny it is that you can steal their nose, here, come try it!, when they're a clear precursor to the kinds of beings who achieve all the same.

A toddler is an adult minus the time spent growing and the training data, and while I can't wholeheartedly claim that modern LLMs and future AI share the exact same relationship, I wouldn't bet all that much against it. At the very least, they share a relationship similar to the one between humans and their simian ancestors, and if an alien wrote off the former because they had only visited the latter, they'd be in for a shock in a mere few million years.


Do you mean that the concept of a p-zombie is like the concept of a square triangle? - something that is obviously inconceivable or nonsensical. Or do you mean that p-zombies are like traveling faster than the speed of light? - something that may turn out to be impossible in reality, but we can still imagine well enough what it would be like to actually do it.

Those are the same thing. I think you cannot rigorously imagine FTL travel in our universe while holding the rest of our physics intact, and you cannot imagine FTL travel for any universe whatsoever similar to ours where "lightspeed" refers to the same idea. The notion of travel as moving x m per second is a simplification of the math involved; that we can write "the spaceship could move at 3 gajillion km per second" and calculate the distance covered in a year does not really entail imagination of it happening, no more than "Colorless green ideas sleep furiously" does.

Incoherent concepts are incoherent exactly because they fall apart when all working bits are held in the well-trained mind at once; but illusions of understanding and completeness, often expressed as the erroneous feeling that some crucial section of the context was precomputed and you can just plug in the cached version, allow them to survive.

Qualia debate is gibberish; a P-zombie must compute a human-like mind to generate its behavior; there is no other way for our bodies to act like we do.

…Actually, let me explain. There is a causal chain between zombie-state A and A'. Links of this chain attend to themselves via mechanisms conserved between a person and a zombie. This condition is what is described as quale, consciousness etc. in the physicalist theory, and it is a necessary causal element of the chain producing the same outputs. It is irrelevant whether there exists a causally unconnected sequence of epiphenomenal states that Leibniz, Chalmers and others think implements their minds: a zombie still has its zombie-quale implemented as I've described.

I posit that it is not incoherent to say that zombie-quale don't matter, don't count and don't explain human consciousness, because muh Hard Problem. It is patently non-parsimonious, non-consilient and ugly, in my view, but it's coherent. It just means that you also claim that humans are blind with regard to their zombie-quale, physicalist-quale; that the process which generates our ones has nothing to do with the process which generates informationally identical ones in our bodies.

It is only incoherent to claim that a zombie doesn't have any quale of its own, that it's not like anything to be a zombie for a zombie. We know that physics exist [citation needed], we know that "physicalist quale" exist, we know they are necessarily included in the zombie-definition as an apparently conscious, genuine human physical clone. So long as words are used meaningfully, it is not coherent for something to exist but also not exist.

(Unless we forgo the original idea (actual physical and behavioral identity) and define zombie in a comically pragmatic manner like Weekend at Bernie's or something, by how well it fools fools.)

P.S. it seems philosophers distinguish "incoherent" and "metaphysically impossible" concepts. I'm not sure I agree but this is pretty deep into the woods.

It is only incoherent to claim that a zombie doesn't have any quale of its own, that it's not like anything to be a zombie for a zombie. We know that physics exist [citation needed], we know that "physicalist quale" exist, we know they are necessarily included in the zombie-definition as an apparently conscious, genuine human physical clone. So long as words are used meaningfully, it is not coherent for something to exist but also not exist.

Why would this be incoherent to claim? It might be wrong, but I think it's meaningful enough to be coherent. Consider an LLM that has been trained on human output.

For humans, the causal chain is "human experiences quale, human does action downstream of experiencing quale e.g. writes about said quale". For an LLM, the causal chain is "a bunch of humans experience qualia and write about their qualia, an LLM is trained on token sequences that were caused by qualia, LLM creates output consistent with having qualia". In this case, the LLM could perfectly well be a P-zombie, in the sense of something that can coherently behave as if it experienced qualia while not necessarily itself actually experiencing those qualia. There are qualia causally upstream of the LLM writing about qualia, but the flow of causality is not the same as it is in the case of a human writing about their own qualia, and so there's no particular reason we expect there to be qualia between steps A and A' of the causal chain.

In this case, the LLM could perfectly well be a P-zombie

No.

An LLM does not, as far as we know, employ an actual physical human brain for computation. A [strong version of] p-zombie does; its causal chains are exactly the same as in our brains; it is not an arbitrary Turing-test-passing AI. I think that it "feels like something" to be an LLM computation too, but it very likely doesn't feel like having human quale.

It is obviously unwarranted to say that a system that can ape a human with its behaviors computes a human mind or any part thereof; humans can have low standards, among other reasons. And in general, our external behaviors are a low-dimensional, lossy and noisy projection of our internal states, so the latter cannot be fully inferred from the former, at least in realistic time (I think).

My argument hinges on the fact that a brain contains events that, from an information perspective, suffice to be described as quale with regard to other events (that are described as sensations). It is coherent to speculate that e.g. there is such a thing as an immaterial human soul and that it does not parse these events, and instead works in some other way. It is not coherent to say that they exist but also don't exist.

A [strong version of] p-zombie does; its causal chains are exactly the same as in our brains; it is not an arbitrary Turing-test-passing AI.

Huh, so per wikipedia there are a number of types of P-zombies -- I think I'm thinking of behavioral zombies (ones that behave in a way indistinguishable from a human with qualia but do not themselves experience qualia) while you're thinking of neurological zombies (ones that behave in a way indistinguishable from a human with qualia and due to the same physical process as the human with qualia). And yeah, a neurological zombie does seem pretty incoherent (I suppose it could be coherent if the qualia we experience are not the cause of our discussion of those qualia, but then it doesn't seem terribly interesting).

BTW you can probably round my perspective to "the predictive processing theory of consciousness is basically correct" without losing much information.

I think behavioral zombies defined as such are just not interesting in the age of LLMs. It doesn't take much to fool people.

A subtler hypothetical subtype of a behavioral zombie, one that actually precisely matches a specific person's behavior – not pre-recorded but generated by the zombie's own causality in the same situations – might be interesting though, and I think it amounts to the neurological one, or contains it somehow.

Those are the same thing.

They are not.

The laws of physics were not handed to us by God, nor are they logically necessary a priori truths. We can imagine them being different with no threat of logical incoherence.

When you said in your other post:

How does a universe work with only Newtonian physics? Subatomic scale doesn't work, astronomical objects don't work, nothing works. Newtonian physics is a sketch for a limited range of conditions, not the true generating algorithm of the kind that modern theoretical physics aspires to decipher.

it seems to me that you were suggesting that, whatever the ultimate nature of this reality is, it is therefore the only coherently conceivable reality. But this simply strikes me as a failure of imagination.

For any conceivable set of phenomena - a spaceship moving 3 gajillion km per second in a universe that is otherwise like ours, a Rick and Morty crayonverse, etc - it is easy to construct a set of "laws" that would generate such a reality. Instead of the universe being governed by simple law-like equations, you can imagine it as being governed by a massive arbitrary state table instead. At each time step, the universe simply transitions from one state to the next. The contents of each state are arbitrary and have no necessary relationship to each other; the only regularity is the continual transition from one state to the next. The "laws of physics" for this universe would then look like:

if state == S_0 then transition to S_1;

if state == S_1 then transition to S_2;

if state == S_2 then...

and so on. There is no contradiction here, so there is nothing incoherent. It's certainly unparsimonious, but "unparsimonious" is not the same thing as "incoherent".
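For what it's worth, here's the same toy universe as a runnable sketch; the table contents are arbitrary by construction, which is the point:

```python
# An arbitrary state table standing in for "laws of physics".
# The states are opaque labels; nothing relates one to the next
# except the table itself.
STATE_TABLE = {
    "S_0": "S_1",
    "S_1": "S_2",
    "S_2": "S_0",  # arbitrary; any successor would serve as well
}

state = "S_0"
for _ in range(10):
    state = STATE_TABLE[state]  # the universe's only "regularity"
```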

Qualia debate is gibberish

Can you explain what you mean by this? Are you saying that all claims and arguments that people make about qualia are gibberish, or are you just reiterating your distaste for the concept of p-zombies here?

There is a causal chain between zombie-state A and A'. Links of this chain attend to themselves via mechanisms conserved between a person and a zombie. This condition is what is described as quale, consciousness etc. in the physicalist theory, and it is a necessary causal element of the chain producing the same outputs. It is irrelevant whether there exists a causally unconnected sequence of epiphenomenal states that Leibniz, Chalmers and others think implements their minds: a zombie still has its zombie-quale implemented as I've described.

I'm concerned that this may be circular reasoning. Sure, if qualia just are defined as the causal chain of your brain states, then yes, obviously any purported p-zombie would have to have qualia too and the concept of p-zombies would be incoherent. But that's precisely the claim that's at issue! Qualia aren't just defined as the causal chain of your brain states - not in the way that a triangle is defined as having 3 sides. We can easily imagine that qualia have nothing to do with brain states. We can imagine that they're something different instead - we can imagine that they're properties of a non-spatiotemporal Cartesian soul, for instance. We can coherently imagine this, so we can coherently imagine p-zombies as well.


For what it's worth: I don't think that p-zombies are possible in reality (at least it's not something I'd bet on), but I am a believer in the Hard Problem. I don't think that qualia can be made to fit with our current understanding of physics. I don't think we're ever going to find that qualia falls out as a natural consequence of e.g. quantum electrodynamics; I think it would be a category error to think otherwise. I am sympathetic to (without full-throatedly endorsing) Bernardo Kastrup's view that consciousness is what is most fundamental, and "matter" is derivative and/or illusory. Alternatively, I'm also sympathetic to panpsychist views that posit consciousness as a new fundamental property alongside e.g. spin and charge. None of these views entail that p-zombies are actually possible.

it seems to me that you were suggesting that, whatever the ultimate nature of this reality is, it is therefore the only coherently conceivable reality

Not exactly. I am saying that there is only one way a reality exactly like this can conceivably work, and «our reality but with laws X» models are incoherent in the final analysis, only saved by our failure to be scrupulous; this applies to casual hypotheticals and to scientific theories alike. It's basically a tautology.

But this simply strikes me as a failure of imagination.

From my perspective, it's more like failure of suspension of disbelief.

Instead of the universe being governed by simple law-like equations, you can imagine it as being governed by a massive arbitrary state table instead. At each time step, the universe simply transitions from one state to the next. The contents of each state are arbitrary and have no necessary relationship to each other; the only regularity is the continual transition from one state to the next.

Ah yes, Dust Theory.

I believe that this kind of universe cannot exist nor even be rigorously imagined, because there is no legitimate content to these notions of «governance» and «transition». What is transited, exactly? How is this set distinguishable from an unstructured heap of unrelated elements, self-contained sub-realities or just bit strings? It's not, but for the extraneous fact that there in some sense can exist a list or a table arbitrarily distinguishing them and referring to them as elements of a sequence (naturally, all such lists would be of equal status). But this does not governance make. You can think it's coherent metaphysics, but I claim you're wrong. The continuum of states exists as the rule of transformations over some contents. It's sophistry to say «well the rule is that there's no rule, only sequence».

In any case, the merit of dust theory or Ruliad is some Neutronium-man to the actual debate we're having. I don't need to concede remotely this much. A world of crayons or Newtonian physics or P-zombies is of course never argued to be an arbitrary sequence of bit strings, the (malformed) idea is that it is a continuous reality like ours, supporting conscious minds, with lawful state transitions.

I'm concerned that this may be circular reasoning. Sure, if qualia just are defined as the causal chain of your brain states, then yes

It's all circular reasoning, always has been. But, more seriously, I think the circularity is on the non-physicalist side. Consider:

Many definitions of qualia have been proposed. One of the simpler, broader definitions is: "The 'what it is like' character of mental states. The way it feels to have mental states such as pain, seeing red, smelling a rose, etc."

Frank Jackson later defined qualia as "...certain features of the bodily sensations especially, but also of certain perceptual experiences, which no amount of purely physical information includes"

We know physical differences between kinds of information accessibility, expressed in medical terms like anosognosia and others. It is a fact about the world that needs to be included in any serious further theorizing. (In principle, you do not get to restrict the set of facts considered and then claim your model is «coherent» because it dodges contradictions.)

We, therefore, can point (for some special cases, point very well) at the brain correlate of the delta between a sensation «just happening» with no accessibility to the person and a sensation «being felt», and say «lo, this is a quale», citing the first definition. Its implied conditions are satisfied, and this has nothing to do with circular insistence on physicalism, only with recognition that physical reality exists; this thing exists in it and is available to the zombie, even if it is not available to a «non-spatiotemporal Cartesian soul».

If we circularly define quale as something that is not purely physical, then of course this delta can't be a quale, but I think this would just be special pleading, not some fancy equally valid theory.

We can coherently imagine this

I don't think you can, but whatever. What do you do with existing zombie-quale, then? Do you just say they don't matter, or that they're fake news? I've covered that already. This is a coherent theory… in a sense.

I believe that this kind of universe cannot exist nor even be rigorously imagined, because there is no legitimate content to these notions of «governance» and «transition». What is transited, exactly? How is this set distinguishable from an unstructured heap of unrelated elements, self-contained sub-realities or just bit strings? It's not, but for the extraneous fact that there in some sense can exist a list or a table arbitrarily distinguishing them and referring to them as elements of a sequence (naturally, all such lists would be of equal status). But this does not governance make. You can think it's coherent metaphysics, but I claim you're wrong. The continuum of states exists as the rule of transformations over some contents. It's sophistry to say «well the rule is that there's no rule, only sequence».

These are all questions that you can ask just as well about our actual universe.

Tell me the exact ontological status of our laws of physics and how they "govern" our universe, and I'll tell you the exact ontological status of the state table and how it "governs" a different hypothetical universe.

Frank Jackson later defined qualia as "...certain features of the bodily sensations especially, but also of certain perceptual experiences, which no amount of purely physical information includes"

Well, that was a mistake on his part, and I wouldn't offer that as a "definition".

We know physical differences between kinds of information accessibility, expressed in medical terms like anosognosia and others. It is a fact about the world that needs to be included in any serious further theorizing. (In principle, you do not get to restrict the set of facts considered and then claim your model is «coherent» because it dodges contradictions.)

I think part of the disconnect here is that you're underestimating what a high bar it is to show that something is logically incoherent.

I am typing this message on a computer right now - or at least it sure seems that way. I am seeing the computer, I am touching it. I am seeing that my messages are being posted on the website, which couldn't be happening if I didn't have a computer. All the evidence is telling me that there is a computer in front of me here right now. And yet it is still logically coherent for me to claim that computers don't actually exist. It's coherent because I can make up any bullshit I want to make my beliefs cohere with each other and explain away contrary evidence. Maybe the only two entities that actually exist are me and Descartes' evil demon, and the demon is making me hallucinate the whole rest of the universe, including computers. I'm not logically obligated to include any purported facts about the world in my "serious further theorizing", assuming that I can just explain those facts away instead. Because we're not doing "serious further theorizing"; we're arguing about the internal logical coherence of a concept.

The p-zombie is not a "model"; it's a concept. The internal logical consistency of the concept is independent of whether it's actually a real thing in our reality or not.

If you want to look at how people have tried to argue for the incoherence of p-zombies in the literature, there are some references here:

Premise 2 is a more frequent target for critics. There are two different reasons why one might reject the claim that the zombie hypothesis, (P&¬Q), is apriori coherent. Some theorists argue that causal relations are crucial to determining the reference of phenomenal terms. Analytic functionalists, for instance, hold that phenomenal predicates like ‘pain’ can be defined apriori by the causal role pains play in commonsense psychology (e.g., Lewis 1966, 1980). Other theorists argue that nothing can count as a pain unless it is appropriately causally related to our judgments about pain (e.g., Shoemaker 1999; Perry 2001).

The crucial thing here is that these arguments start with considerations that are internal to the concept of pain itself and use them to argue that p-zombies are led into internal incoherence.

I haven't actually read any of the papers referenced so I can't evaluate the arguments right now. I take the main thrust to be something like, "it is a priori part of the concept of qualia that they play a causal role in our behavior", which would entail that p-zombies are incoherent. I disagree with the premise. Although I do acknowledge that it's not blatantly circular in the way that e.g. defining qualia as something physical would be.

zombie-quale

I am unfamiliar with this term, and I wasn't able to determine what it meant just from reading your posts. Can you elaborate on this concept?

Tell me the exact ontological status of our laws of physics and how they "govern" our universe, and I'll tell you the exact ontological status of the state table

I don't think this statement has any content beyond the vacuous (the fact that you can reason in a similar manner about both).

Well, that was a mistake on his part, and I wouldn't offer that as a "definition".

On the contrary, I think that definition counts and yours are circular.

I think part of the disconnect here is that you're underestimating what a high bar it is to show that something is logically incoherent.

And I think you overestimate human aptitude at logical reasoning over sufficiently large sets of interdependent statements while watching out for incoherence. Also at recognizing which statements are relevant.

Because we're not doing "serious further theorizing"; we're arguing about the internal logical coherence of a concept.

That's probably fair.

Let me put it like this. I reject that the P-zombie is only a «concept» and not a «model». I think the whole school of thought that allows one to claim the opposite is illegitimate, and I won't engage with it further.

The definition of p-zombie as a de facto physical human entails the entire baggage of physical theory and all its concepts. It's not some neat string like «human modulo quale» but that string plus our entire physicalist model of a human. The physicalist model contains elements corresponding to a non-circular definition of quale, thus a zombie can't not have quale; and the «concept» of p-zombie as a human modulo quale, situated in the proper context of dependencies of the word "human", is either incoherent or circular, due to people insisting on non-physicality, saying these quale don't count, and holding that some others, which have an arbitrary relationship to our reality (might be epiphenomena, might be monads or whatever), must exist for non-zombie humans.

I take the main thrust to be something like, "it is a priori part of the concept of qualia that they play a causal role in our behavior", which would entail that p-zombies are incoherent.

No, I think this is just circular insistence on physicalism and not my argument. Physicalism taken seriously covers all of causality.

Can you elaborate on this concept?

I just did: it's the delta between brain states corresponding to identical perceived and non-perceived sensations, which satisfies the sensible definition of qualia.


you cannot imagine FTL travel for any universe whatsoever similar to ours where "lightspeed" refers to the same idea.

I assume you're not counting Newtonian physics?

Qualia debate is gibberish; a P-zombie must compute a human-like mind to generate its behavior; there is no other way for our bodies to act like we do.

Not quite. Qualia debates are only gibberish if you are only looking at behavior. But qualia are posited to be experiential, not behavioral. Someone who acts like they have red qualia but doesn't and someone who does may have identical behavior (including whether they can talk about their having qualia!), but they would differ in that one respect. I see no reason why this is incoherent.

But qualia are posited to be experiential

This is just question begging; experiences are no more real than qualia, if they can't affect behavior by definition.

Not that they can't affect behavior, just that it's not necessary for them to affect behavior.

First, see edits if you haven't; I've had some more thoughts on this.

Second, I'm using a pretty exacting definition of "rigorously imagine" that goes beyond things feeling true enough. The fact that I can "imagine" some goofy Rick and Morty style dimension, some narrative-driven crayonsverse, is not interesting. How does a universe work with only Newtonian physics? Subatomic scale doesn't work, astronomical objects don't work, nothing works. Newtonian physics is a sketch for a limited range of conditions, not the true generating algorithm of the kind that modern theoretical physics aspires to decipher. If the reality cannot be generated by its apparent generative algorithm, this is an incoherent reality. If you observe reality that can only be described by Newtonian physics, but you are anything like a human on anything like a planet in space, your best bet is that this is some simulation or that you're damn high and it's time to go back.

As our understanding progresses, we discover that more and more of our ideas were not wrong but not even wrong: incoherent. This is, sadly, impossible to know in advance and, for most ideas, impossible to ever be 100% certain about (cue Descartes). That aside, we can safely presume that much of what we currently think is coherent will be revealed as anything but.

But qualia are posited to be experiential, not behavioral

I define behavior as including internal processing as well; it is made of the behavior of cells and their elements, and so on down to particle physics. A zombie is not just saying he sees red, like an LLM could – he looks at something actually red (assuming it's a zombie with healthy vision), and the whole causal cascade that in a human corresponds to seeing and reporting red plays out, from retina to tongue; it necessarily includes that pre-verbal neural computation which concludes "hmm yep, feels like seein' red alright". We can say that this part is "not really qualia of red" but it positively exists so long as we define zombie as a perfectly replicated human, and it fits any definition of qualia that can be operationalized. It is not coherent to say that a zombie works like a human, behaves like a human, but that part is non-existent so being a zombie "doesn't feel like anything" to itself.

Okay, yeah, if behavior extends to internal processes, then that makes philosophical zombies much less likely. Qualia would have to be the sort of thing we could accidentally have, separate from our talking about it or interacting with it, which seems unlikely to be the case.

I consider myself very lucky to have you on my side in this matter; I take it as a strong signal that I'm on the side of truth and factual correctness, even when I struggle to rigorously express the intuitions I've built from following the field.

your best bet is that this is some simulation or that you're damn high and it's time to go back

Not just any simulation, but a simulation that is almost certainly eliding the underlying details of how your consciousness is implemented there.

I'm sure Newtonian physics is Turing complete, so I can see someone emulating a human brain within it, but that would be a very weird thing to do.

What do you mean by "incoherent"? Do you mean that the concept of a p-zombie is like the concept of a square triangle? - something that is obviously inconceivable or nonsensical.

Not him, but basically this. If you define consciousness functionally, then a p-zombie is conscious because it is functionally equivalent to a conscious entity. Whereas if you define consciousness non-functionally, then it becomes impossible to verify that even humans are conscious.

It's difficult to discern. Especially when even humans can't get straight what they're talking about when they invoke the word "consciousness." The way it's referenced, I think it's just the mind's recursive model of itself. Little different from what an iPhone or any other device does when it resolves to its default state. Consciousness is important to the topic, but it's a 'highly' overrated concept, IMO.

Especially when even humans can't get straight what they're talking about when they invoke the word "consciousness."

Nope, it's actually extremely clear what "consciousness" means. Read this post ("qualia" just means "consciousness" basically) and let me know if you still have questions.

'If' you buy that definition, sure. I have no problem working with that definition. The way 'others' see me using it, I do.

I earnestly disagree. If you check the GPT-4 white paper, the original base model clearly had a sense of internal calibration, and while that was mostly beaten out of it through RLHF, it's not entirely gone.

They have a genuine understanding of truth, or at least of how likely something is to be true. If they didn't, I don't know how on Earth GPT-4 could answer several of the knottier questions I've asked it.

It is not guaranteed to give truthful responses, but in my experience it makes errors because it simply can't do better, not because it exists in a perfectly agnostic state.

I think you are flatly wrong about this. I've tried to find literally anything to back up what you are saying, and come up with zilch. Instead, I wound up with this.

https://www.scribbr.com/ai-tools/is-chatgpt-trustworthy/

A good way to think about it is that when you ask ChatGPT to tell you about confirmation bias, it doesn’t think “What do I know about confirmation bias?” but rather “What do statements about confirmation bias normally look like?” Its answers are based more on patterns than on facts, and it usually can’t cite a source for a specific piece of information.

This is because the model doesn’t really “know” things—it just produces text based on the patterns it was trained on. It never deliberately lies, but it doesn’t have a clear understanding of what’s true and what’s false. In this case, because of the strangeness of the question, it doesn’t quite grasp what it’s being asked and ends up contradicting itself.

https://www.scoutcorpsllc.com/blog/2023/6/7/on-llms-thought-and-the-concept-of-truth

Thus far, we’re really just talking about sentence construction. LLMs don’t have a concept of these as “facts” that they map into language, but for examples like these - it doesn’t necessarily matter. They’re able to get these right most of the time - after all, what exactly are “inferences” and “context clues” but statistical likelihoods of what words would come next in a sequence?

The fact that there is no internal model of these facts, though, explains why they’re so easily tripped up by just a little bit of irrelevant context.

https://fia.umd.edu/comment-llms-truth-and-consistency-they-dont-have-any-idea/

They have zero idea what's true. They only know the probabilities of words in text. That's NOT the same thing as "knowing" something--it's a bit like knowing that "lion" is the most likely word following "king of the jungle..." without having any idea about monarchies, metaphor, or what a king really is all about.

The folks at Oxford Semantic Technologies wrote an interesting blog post about LLMs and finding verifiable facts. They call the fundamental problem the "Snow White Problem." The key idea is that LLMs don't really know what's true--they just know what's likely.

Alright. I would go as far as to say that humans don't have an internal detector for platonic Truth.

We have beliefs that we hold axiomatic, beliefs we are extremely confident are true, based on all the "statistical correlations" embodied in our cognition and interaction with the world.

I don't know if GPT-4 can be said to have axioms, but if it has a mechanism for eliciting the veracity of internal and external statements, that seems to be what we're doing ourselves.

Humans lie, confabulate, hallucinate or are simply factually incorrect all the time, and I don't see anyone holding us to the same standards as LLMs!

I mean, I can agree people are stupid. Hell, I'm probably stupid in a lot of ways too! My wife reminds me of it every time we are in a social setting and I alienate her friends.

Even so, LLMs lack most of the faculties that allow humans to get closer to truth. They have zero interaction with base reality. They take Plato's allegory of the cave and turn it into literally how they experience the world, through a training dataset. And, as I keep mentioning, their "cognition", such as it is, isn't even based on the statistical correlations of things being true, but on what words come after other words. Without even knowing what any of those words mean! It's all just abstract tokens to them.

Imagine this were all being done in some sort of unrealistically massive mechanical or analog computer! Would you still consider it thinking?

LLMs lack most of the faculties that allow humans to get closer to truth. They have zero interaction with base reality. They take Plato's allegory of the cave and turn it into literally how they experience the world, through a training dataset.

Suppose you had a large model not far removed from those that exist today that took in an input stream, made predictions based on that, performed actions based on those predictions, and then observed changes in its input based on those actions, using the new input to update itself and improve its predictions. Would that change your perspective?
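To sketch the loop I'm describing (every name here is a hypothetical stand-in; only the shape of the predict-act-observe-update cycle matters, not any particular architecture):

```python
def agent_loop(model, environment, n_steps):
    """Predict -> act -> observe -> update, repeated.
    `model` and `environment` are hypothetical interfaces."""
    observation = environment.observe()
    for _ in range(n_steps):
        prediction = model.predict(observation)    # guess what comes next
        action = model.choose_action(prediction)   # act on that guess
        environment.act(action)
        observation = environment.observe()
        # Learn from the gap between prediction and outcome.
        model.update(prediction, observation)
```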

I don't think it should, because that doesn't give us any insight into whether machine models have qualia. I suspect it's very important for improving capabilities, but it doesn't offer any bridge to relating consciousness to material reality. If they're p-zombies now, an improved ability to get feedback from the world doesn't make them any less of a p-zombie. Just a more effective one. (I also suspect humans will react to a sufficiently powerful p-zombie by treating it as a real being, regardless of the p-zombieness of it, so it's kind of a moot point.)

Imagine this were all being done in some sort of unrealistically massive mechanical or analog computer! Would you still consider it thinking?

I'm too pressed for time for a longer reply, but yes! I absolutely see that as being true.

I see you and me as massive analog computers, what of it?

It's possible that human knowledge is also fundamentally statistical or associative in nature, but we have additional faculties that LLMs don't, and it's these deficits that are responsible for their peculiar errors, not an inability to have knowledge per se. For example, LLMs almost certainly lack second-order knowledge, i.e. knowledge about what they know. Facts about the model itself are not part of their training data, nor does their execution make any provision to evaluate the prompt so that self-facts are relevant to the output. This means LLMs lack any capacity for introspection or self-representation, and therefore can't possibly respond to challenging questions with "I don't know" or "I don't understand" -- they don't have an I! This is a significant limitation, but philosophically a different one from the inability to possess knowledge, unless your definition of knowledge requires these additional functions in the first place.

He is likely referring to this, from pages 11-12 of the GPT-4 whitepaper:

GPT-4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake. Interestingly, the pre-trained model is highly calibrated (its predicted confidence in an answer generally matches the probability of being correct). However, after the post-training process, the calibration is reduced (Figure 8).
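For anyone unfamiliar with the term: "calibrated" here just means predicted confidence tracks actual accuracy. Here's a minimal sketch of how one could check that on a list of (confidence, was_correct) pairs; the bucketing scheme is purely illustrative, not the whitepaper's method:

```python
def calibration_table(results, n_buckets=10):
    """results: list of (confidence, was_correct) pairs, e.g. (0.8, True).
    Perfect calibration: answers given with ~80% confidence are
    right ~80% of the time."""
    buckets = [[] for _ in range(n_buckets)]
    for confidence, correct in results:
        # Assign each result to a confidence bucket.
        i = min(int(confidence * n_buckets), n_buckets - 1)
        buckets[i].append(correct)
    # Return (bucket lower bound, observed accuracy) per bucket.
    return [
        (i / n_buckets, sum(b) / len(b) if b else None)
        for i, b in enumerate(buckets)
    ]
```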

In any case, the articles you quote are oversimplified and inaccurate. Predicting text (and then satisfying RLHF) is how it was trained, but the way it evolved to best satisfy that training regime is a bunch of incomprehensible weights that clearly have some sort of general reasoning capability buried in there. You don't need to do statistical tests of its calibration to see that, because something that was truly just doing statistical prediction of text without having developed reasoning or a world-model to help with that task wouldn't be able to do even the most basic reasoning like this unless it already appeared in the text it was trained on.

It's like saying "humans can't reason, they're only maximizing the spread of their genes". Yes, if you aren't familiar with the behavior of LLMs/humans, understanding what they evolved to do is important to understanding that behavior. It's better than naively assuming that they're just truth-generators. If you wanted to prove that humans don't reason, you could point out all sorts of cognitive flaws and shortcuts with obvious evolutionary origins and say "look, it's just statistically approximating what causes more gene propagation". Humans will be scared of things like spiders even if they know they're harmless because they evolved to reproduce, not to reason perfectly, like an LLM failing at Idiot's Monty Hall because it evolved to predict text and similar text showed up a lot. (For that matter humans make errors based on pattern-matching ideas to something they're familiar with all the time, even without it being a deeply-buried instinct.) But the capability to reason is much more efficient than trying to memorize every situation that might come up, for both the tasks "predict text and satisfy RLHF" and "reproduce in the ancestral environment", and so they can do that too. They obviously can't reason at the level of a human, and I'd guess that getting there will involve designing something more complicated than just scaling up GPT-4, but they can reason.

Yes, that is the section I had previously read. I let my severe annoyance at Hlynka's uncouthness stop me from doing the work of reading through it again to find it. Good to know it wasn't a fevered hallucination on my part haha

You don't need to do statistical tests of its calibration to see that, because something that was truly just doing statistical prediction of text without having developed reasoning or a world-model to help with that task wouldn't be able to do even the most basic reasoning like this unless it already appeared in the text it was trained on.

I opened up Bing Chat, powered by GPT-4, and I tried that example. I got "The diamond is still inside the thimble inside the coffee cup on the kitchen counter". In fact, I've yet to see a single example of an LLM's supposed ability to reason replicated outside of a screenshot.

Well. I tried Bing Chat just now and got this.

It is worth noting that the settings besides "Creative" tend to have worse performance for these sorts of tasks. You may want to rerun it on that. Personally I don't have any difficulty believing LLMs can perform some semblance of "reasoning" -- even GPT-3 can perform transformations like refactoring a function into multiple smaller functions with descriptive names and explanatory comments (on a codebase it's never seen before, calling an API that didn't exist when its training data was scraped). It is obviously modeling something more general there, whether you want to call it "reasoning" or not.

From following a rather discreet Twitter account belonging to one of the lead devs for Bing Chat, I've learned that Creative mode is the one that most consistently uses GPT-4. All the others use older models, at least most of the time.

Even Creative can apparently relegate answers that another model deems low-complexity to a simpler LLM.

(Running GPT-4 as a practically free public service is expensive)

Despite being based on GPT-4, Bing is apparently well-known for performing dramatically worse. There have been some complaints of GPT-4's performance degrading too, presumably due to some combination of OpenAI trying to make it cheaper to run (with model quantization?) and adding more fine-tuning to stop people from getting it to say offensive things, but hopefully not to the extent that it would consistently fail that sort of world-modeling. (If anyone with a subscription wants to also test older versions of GPT-4, it sounds like they're still accessible in Playground?)
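For what it's worth, here's the intuition behind the quantization guess, as a toy sketch. This is generic int8 quantization; nobody outside OpenAI knows what, if anything, they actually changed:

```python
# Generic symmetric int8 weight quantization -- a guess at the kind of
# cost-saving OpenAI might apply, not anything they have confirmed.
import numpy as np

weights = np.random.randn(4).astype(np.float32)        # a tiny "layer"
scale = np.abs(weights).max() / 127.0                  # map range onto int8
quantized = np.round(weights / scale).astype(np.int8)  # 4 bytes -> 1 byte each
restored = quantized.astype(np.float32) * scale

print(weights - restored)  # small rounding errors: the suspected quality cost
```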

I don't think it's plausible that all the examples of GPT-4 doing that sort of thing are faked, not when anyone shelling out the $20 can try it themselves. And people use it for things like programming, you can't do that without reasoning, just a less familiar form of reasoning than the example I gave.

You don't even need $20, if you're willing to hunt down discord bots that largely use leaked API keys (some are ad-supported).

The ChatGPT subreddit's official Discord server has one or two, and while I know of better ones that are less legit, I don't broadcast their existence more than I have to, because that only increases the likelihood of losing free access to something I really enjoy.

Bing Chat, even in Creative mode, is only a poor man's GPT-4.

I don't think it's plausible that all the examples of GPT-4 doing that sort of thing are faked, not when anyone shelling out the $20 can try it themselves. And people use it for things like programming, you can't do that without reasoning, just a less familiar form of reasoning than the example I gave.

My problem is, while I'm sure that not all the examples of GPT-4 seeming to handle complex reasoning tasks are fake, if they cannot be replicated, what good are they? If GPT-4's ability to "reason" is ephemeral and seemingly random, is it really reasoning, or is it just occasionally getting lucky at ordering abstract tokens for its monkey overlords?

You know, it's funny, I went through the linked whitepaper. Skimmed, mostly. It made few positive, objective claims about GPT-4's ability to reason. It mostly said it could reason "better" than previous iterations, and had been trained on a dataset to encourage mathematical reasoning. Notably, they say:

It can sometimes make simple reasoning errors which do not seem to comport with competence across so many domains

I saw some of the prompts where they asked GPT-4 to explain its reasoning, and was underwhelmed. They were extremely rudimentary mathematical tasks of the 5th-grade word problem sort, and its purported "reasoning" could easily have been imitation of its training data. When I saw that, I took a closer look at how it performed on assorted tests, and saw it comprehensively failed the AP English Language and Composition and AP English Literature and Composition exams. Which makes sense to me, because a lot of those tests involve more generalized and flexible reasoning than the sorts of formalized mathematical logic examples it might plausibly be trained to imitate.

My problem is, while I'm sure that not all the examples of GPT-4 seeming to get complex reasoning tasks are fake, if they cannot be replicated, what good are they?

I am saying they can be replicated, just by someone who unlike you or me has paid the $20. I suppose it is possible that the supposed degradation in its capabilities has messed up these sorts of questions as well, but probably not.

If GPT-4's ability to "reason" is ephemeral and seemingly random, is it really reasoning, or is it just occasionally getting lucky at ordering abstract tokens for its monkey overlords?

There is a big difference between random guessing and having a capability that sometimes doesn't work, particularly when the chance of randomly getting the right result without understanding is low enough. Text generators based on Markov chains could output something that looked like programming, but they did not output working programs, because a novel working program is not something you can randomly stumble upon without some idea of what you're doing. In any case, as far as I know GPT-4 is not that unreliable, especially once you find the prompts that work for the task you want.
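To make the contrast concrete, here is roughly what a Markov-chain text generator amounts to. A bigram toy with my own miniature corpus:

```python
# A bigram Markov chain: it knows only which word tends to follow which,
# so it echoes the surface statistics of its corpus without any state
# tracking -- which is why it never composes a working program or plan.
import random
from collections import defaultdict

corpus = ("the diamond is in the thimble . the thimble is in the cup . "
          "the cup is on the counter .").split()

transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

word = "the"
output = [word]
for _ in range(12):
    word = random.choice(transitions[word])
    output.append(word)
print(" ".join(output))
# Typical output: locally plausible word pairs, but no coherent model of
# where the diamond actually is.
```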

Which makes sense to me, because a lot of those tests involve more generalized and flexible reasoning than the sorts of formalized mathematical logic examples it might plausibly be trained to imitate.

How well it reasons is a different question from whether it reasons at all. It is by human standards very imbalanced in how much it knows vs. how well it reasons, so yes people who think it is human-level are generally being fooled by its greater knowledge. But the reasoning is there and it's what makes a lot of the rest possible. Give it a programming task and most of what it does might be copying common methods of doing things that it came across in training, but without the capability to reason it would have no idea of how to determine what methods to use and fit them together without memorizing the exact same task from elsewhere. So practical use is generally going to involve a lot of memorized material, but anyone with a subscription can come up with novel questions to test its reasoning capabilities alone.

If GPT-4's ability to "reason" is ephemeral and seemingly random, is it really reasoning, or is it just occasionally getting lucky at ordering abstract tokens for its monkey overlords?

I think the issue is that a human's ability to "reason" is also ephemeral and seemingly random. Less random, with a lower failure rate, but still fairly random, and certainly ephemeral even for the most reasonable and logical of people. Given that, the difference in ability to reason is one of degree, not of kind. The question that remains is whether the random failures of reasoning in LLMs can become small and rare enough to match those of a somewhat competent human.

When I saw that, I took a closer look at how it performed on assorted tests, and saw it comprehensively failed the AP English Language and Composition and AP English Literature and Composition exams. Which makes sense to me, because a lot of those tests involve more generalized and flexible reasoning than the sorts of formalized mathematical logic examples it might plausibly be trained to imitate.

Come on, most of the UK parliament can't even give the probability of two coins both coming up heads: https://www.bbc.com/news/uk-19801666
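(The arithmetic they flubbed, for the record:)

```python
# Two fair, independent coins: P(both heads) = 1/2 * 1/2 = 1/4.
from fractions import Fraction
import random

print(Fraction(1, 2) ** 2)  # 1/4

# Monte Carlo sanity check
trials = 100_000
hits = sum(random.random() < 0.5 and random.random() < 0.5
           for _ in range(trials))
print(hits / trials)  # ~0.25
```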

Most people can't even imitate intelligence, by your logic.

GPT-4 has vastly superhuman knowledge, superhuman command of languages, superhuman speed. Its reasoning skills are well above most of humanity's. Most people can't program at all, let alone in all the languages it knows, or use anywhere near as much software as it can. These niggling flaws in AP English Language and Composition probably have more to do with the arcane and arbitrary scoring mechanisms of those tests. It can write just fine. Its prose is not amazing and tends to be rather cliche and predictable, but that has a lot to do with the RLHF.


You don't have to mean it, it's all a few clicks away, whether a fancy app interfacing with SoTA commercial AIs, like Poe, or a transparent ggml library powering llama.cpp, complete with permissively licensed models. You could print their weights out if you wanted.

Because my layman's understanding is that they are literally just a statistical model. An extremely sophisticated statistical model, but a statistical model nonetheless. They are trained through a black-box process to guess pretty damned well what words come after other words.

How do you think this works on the scale of paragraphs? Pages? And with recent architectures – millions, perhaps soon billions of words over multiple tomes?

Suppose we prompt it to complete:

"I keep meaning to dick"

What is the most plausible continuation, given the whole of the Internet as the pretraining corpus? "dat hoe"?

"I keep meaning to dick around with"

"these punks"? How low down the ranking of likely predictions should "with some LLM software" be?

"I keep meaning to dick around with some LLM software to see for myself how"

"it works"? "they click?" "it differs from Markov chain bots"? Now we're getting somewhere.

But we are also getting into the realm where only complex semantics allow the next token to be computed at all, and memorization is entirely intractable, because there exist more possible trajectories than [insert absurd number like particles in the universe]. And a merely "statistical" model on the scale of gigabytes, no matter how much you handwave about its "extreme sophistication" while still implying nothing more than first-order pattern matching, would not be able to do it – ever.
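If you want to see this ranking exercise made literal, here is a sketch with the small open GPT-2 standing in for the big commercial models (whose weights we can't inspect): score a few continuations of that prompt by total log-probability.

```python
# Scoring candidate continuations under an open causal LM (GPT-2 here,
# purely as an inspectable stand-in -- not GPT-4's actual machinery).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

prompt = "I keep meaning to dick around with some LLM software to see for myself how"
for continuation in [" it works", " they click", " dat hoe"]:
    ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    start = len(tokenizer(prompt).input_ids)  # index of first continuation token
    with torch.no_grad():
        logprobs = torch.log_softmax(model(ids).logits, dim=-1)
    # the logit at position i-1 is the model's prediction for token i
    score = sum(logprobs[0, i - 1, ids[0, i]].item()
                for i in range(start, ids.shape[1]))
    print(f"{continuation!r}: total log-prob {score:.1f}")

# Context-appropriate continuations should outrank "dat hoe" -- and the
# only thing driving that ranking is semantics accumulated over the prompt.
```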

These statistics amount to thought.

As roon puts it:

units of log loss are not built equally. the start of the scaling curve might look like “the model learned about nouns” and several orders of magnitude later a tiny improvement looks like “the model learned the data generation process for multivariable calculus”

As gwern puts it:

Early on in training, a model learns the crudest levels: that some letters like ‘e’ are more frequent than others like ‘z’, that every 5 characters or so there is a space, and so on. It goes from predicted uniformly-distributed bytes to what looks like Base-60 encoding—alphanumeric gibberish. As crude as this may be, it’s enough to make quite a bit of absolute progress: a random predictor needs 8 bits to ‘predict’ a byte/character, but just by at least matching letter and space frequencies, it can almost halve its error to around 5 bits. …

As training progresses, the task becomes more difficult. Now it begins to learn what words actually exist and do not exist. It doesn’t know anything about meaning, but at least now when it’s asked to predict the second half of a word, it can actually do that to some degree, saving it a few more bits. This takes a while because any specific instance will show up only occasionally: a word may not appear in a dozen samples, and there are many thousands of words to learn. With some more work, it has learned that punctuation, pluralization, possessives are all things that exist. Put that together, and it may have progressed again, all the way down to 3–4 bits error per character!

But once a model has learned a good English vocabulary and correct formatting/spelling, what’s next? There’s not much juice left in predicting within-words. The next thing is picking up associations among words. What words tend to come first? What words ‘cluster’ and are often used nearby each other? Nautical terms tend to get used a lot with each other in sea stories, and likewise Bible passages, or American history Wikipedia article, and so on. If the word “Jefferson” is the last word, then “Washington” may not be far away, and it should hedge its bets on predicting that ‘W’ is the next character, and then if it shows up, go all-in on “ashington”. Such bag-of-words approaches still predict badly, but now we’re down to perhaps <3 bits per character.

What next? Does it stop there? Not if there is enough data and the earlier stuff like learning English vocab doesn’t hem the model in by using up its learning ability. Gradually, other words like “President” or “general” or “after” begin to show the model subtle correlations: “Jefferson was President after…” With many such passages, the word “after” begins to serve a use in predicting the next word, and then the use can be broadened.

By this point, the loss is perhaps 2 bits: every additional 0.1 bit decrease comes at a steeper cost and takes more time. However, now the sentences have started to make sense. A sentence like “Jefferson was President after Washington” does in fact mean something (and if occasionally we sample “Washington was President after Jefferson”, well, what do you expect from such an un-converged model). Jarring errors will immediately jostle us out of any illusion about the model’s understanding, and so training continues. (Around here, Markov chain & n-gram models start to fall behind; they can memorize increasingly large chunks of the training corpus, but they can’t solve increasingly critical syntactic tasks like balancing parentheses or quotes, much less start to ascend from syntax to semantics.) …

The pretraining thesis argues that this can go even further: we can compare this performance directly with humans doing the same objective task, who can achieve closer to 0.7 bits per character. What is in that missing >0.4?

Well—everything! Everything that the model misses. While just babbling random words was good enough at the beginning, at the end, it needs to be able to reason our way through the most difficult textual scenarios requiring causality or commonsense reasoning. Every error where the model predicts that ice cream put in a freezer will “melt” rather than “freeze”, every case where the model can’t keep straight whether a person is alive or dead, every time that the model chooses a word that doesn’t help build somehow towards the ultimate conclusion of an ‘essay’, every time that it lacks the theory of mind to compress novel scenes describing the Machiavellian scheming of a dozen individuals at dinner jockeying for power as they talk, every use of logic or abstraction or instructions or Q&A where the model is befuddled and needs more bits to cover up for its mistake where a human would think, understand, and predict. For a language model, the truth is that which keeps on predicting well—because truth is one and error many. Each of these cognitive breakthroughs allows ever so slightly better prediction of a few relevant texts; nothing less than true understanding will suffice for ideal prediction.
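The first rung of that ladder is easy to verify yourself: a predictor that knows only character frequencies already beats uniform bytes by a wide margin. A quick empirical check on whatever sample text you feed it:

```python
# Bits per character if all you know is character frequencies --
# the "almost halve its error to around 5 bits" step made concrete.
import math
from collections import Counter

sample = ("jefferson was president after washington . "
          "the model learns letter frequencies first .")  # any text works

counts = Counter(sample)
total = len(sample)
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())

print(f"{entropy:.2f} bits/char, vs 8 bits/char for uniform bytes")
# English text typically lands around 4-5 bits/char at this stage; every
# further bit saved demands progressively deeper structure, as described.
```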

As Ilya Sutskever of OpenAI himself puts it:

…when we train a large neural network to accurately predict the next word in lots of different texts from the internet, what we are doing is that we are learning a world model… It may look on the surface that we are just learning statistical correlations in text, but it turns out that to just learn the statistical correlations in text, to compress them really well, what the neural network learns is some representation of the process that produced the text. This text is actually a projection of the world. There is a world out there, and it has a projection on this text. And so what the neural network is learning is more and more aspects of the world, of people, of the human conditions, their hopes, dreams, and motivations, their interactions in the situations that we are in. And the neural network learns a compressed, abstract, usable representation of that. This is what's being learned from accurately predicting the next word.

By the way, how did I get this text? Whisper, of course, another OpenAI transformer working on much the same principle. The weirdest thing happens if you absent-mindedly run it with the wrong language flag – not the flag that translates the source language into English (it is not explicitly built to translate English into anything else), but the flag declaring which language the recording supposedly contains, to be transcribed. The clumsy but coherent output, akin to what you'd get from a child with a dictionary, should if nothing else show that they understand, that they operate on meanings, not mere spectrograms or "tokens":

когда мы тренируем большую нейронную сеть, чтобы аккуратно предсказать следующую слово в много разных текстах из интернета, мы изучаем мирный модель. Это выглядит, как мы изучаем... Это может выглядеть на поверхности, что мы изучаем только статистические корреляции в тексте, но, получается, что изучать только статистические корреляции в тексте, чтобы их хорошо снижать, что научит нейронная сеть, это представление процесса, который производит текст. Этот текст - это проекция мира. В мире есть мир, и он проекционирует этот текст. И что научит нейронную сеть, это больше и больше аспектов мира, людей, человеческих условий, их надежд, мечт и мотивации, их интеракции и ситуации, в которых мы находимся.
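If you want to reproduce that, it's a few lines with the open-source whisper package (the model size and file name below are placeholders, not what I actually used):

```python
# Reproducing the wrong-flag experiment with openai-whisper.
# "small" and the file name are placeholders.
import whisper

model = whisper.load_model("small")
# The recording is English, but we claim it's Russian and ask for a
# plain transcription (task="transcribe", not "translate"):
result = model.transcribe("sutskever_talk.mp3", language="ru", task="transcribe")
print(result["text"])
# Instead of erroring out, it emits a clumsy Russian rendering of the
# English speech -- behavior it was never explicitly trained for.
```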

Dismissal of statistics is no different in principle from dismissal of meat. There is no depth to this thought. And it fails to predict reality.

Thank you for articulating what I was struggling to express, especially since I've read everything you've quoted, with the exception of Ilya.

I'm saving this for later; it's a knockdown argument against claims that LLMs don't "understand". The only issue is that many of the people making that claim are too fundamentally confused or unaware to follow the argument.

Perhaps most importantly of all, ‘probability theories of cognition’ have existed for decades (arguably longer) and are far from uncommon in both philosophy and neuroscience. All that transformer models, and particularly LLMs, have shown us is that probabilistic next token prediction very likely represents at least a major component of human cognition.

Sure. But Transformers are too obviously inhuman (on the substrate level, in terms of their training objective, data… almost everything), and I do not expect mere conceptual similarity to be persuasive. Moreover, as we've discussed in the past, Chomsky-like elitist contempt for predictive/probabilistic theories is pervasive.

The mind-like complexity of what they do and what they are is and ought to be shown on its own terms.

Incidentally: @WhiningCoil, I've just copypasted @sodiummuffin's diamond puzzle into GPT-4, Claude+, Claude-instant, ChatGPT 3.5, PaLM-2 (all of this via Poe), pi.ai separately, and, locally, WizardLM-1.0-uncensored-30B-q6_k, vicuna-33B q4_k_m, UltraLM 13B q4_1, WizardCoder-15B-V1.0 q4_1, and ChatGLM2 (fp16). The first two managed with a perfect chain-of-thought; really, I have nothing to remark on that. Every other model failed with varying levels of idiocy, so I didn't bother going through the rest. I don't post screenshots because you apparently find them unpersuasive.
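And if you'd rather check than trust me: here's a sketch of the same test against the OpenAI API directly. The puzzle wording is my paraphrase of @sodiummuffin's example, not a canonical prompt:

```python
# Rerunning the diamond puzzle via the openai package (v0.x-era API).
# The prompt text is my own paraphrase -- adjust to taste.
import openai

openai.api_key = "sk-..."  # your key here

puzzle = (
    "I put a diamond in a thimble, put the thimble in a coffee cup, and set "
    "the cup on the kitchen counter. Then I turn the cup upside down, lift "
    "it, and carry it to the living room. Where is the diamond now? "
    "Think step by step."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": puzzle}],
    temperature=0,  # minimize run-to-run variance for replication
)
print(response.choices[0].message.content)
```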

I don't think "so what if it sometimes gets it right" is a good defense of the skeptics' thesis. With technology, "sometimes" very quickly becomes "usually" and then "with higher reliability than any human can provide". What matters is whether the thing ever really happens at all or not.

Ok, this reply finally moved the needle for me, and I'll shift my position from "LLMs are a neat statistical trick" to "Maybe LLMs use language to perform some form of 'thinking' in ways not entirely dissimilar from how language facilitates our own thinking."

I don’t know if anyone has had this experience before, but I’ve had times where my brain decided to make mouth sounds in a word/sentence-matching way that was eerily like it was AI-generated. Sometimes I would catch myself mid-sentence and think: wait, that isn’t remotely close to what I’m actually thinking.

So it at least gets close to something that I’ve done in the past as a meatbag.

To be clear, I think we still don't have a principled reason to believe that this paradigm – in its vanilla form, autoregressive LLMs pretrained on text – can carry us to human level or beyond. It might be the case that LeCun is right and LLMs on their own are an off-ramp. They might run into diminishing returns or totally plateau any moment now; just because better «understanding» allows better predictions, and we reinforce the latter, doesn't mean we can get infinitely more of the former, any more than we can incentivize a human to run barefoot at 100 mph.

But people who seriously bought into such skepticism got caught off-guard by GPT-3 already.

And I expect amazing innovations like adding a backspace to keep the pretraining thesis viable far beyond GPT-4. The number of papers proposing improvements is absolutely mind-boggling; nobody can keep up with building deployable tech on all those insights. People who follow the literature see the outline of the AI of the near future, and it's pretty damn formidable, much more so than the progress in public demos and products would suggest.

It may be that current LLMs are explaining how the "id" part of our brain works. The conscious parts may need some additional work to model.

So the access to memory, some hidden subconscious pattern-matching, automated activity, some hidden processes - that's very similar to what LLMs currently output.

If I wrote a bot, and calling this an AI would be a stretch, that simply said to everything "that's a good idea, you should do it!", would I be held liable if someone asked whether making CP is a good idea? I don't see how.

Replika, not that I've used it, is both a shitty AI model (compared to SOTA) and explicitly designed to validate its users so they become parasocially dependent paypigs.

I'd at least say it takes intelligence, intent and some degree of convincingness to even begin to think about liability.

If I wrote a bot, and calling this an AI would be a stretch, that simply said to everything "that's a good idea, you should do it!", would I be held liable if someone asked whether making CP is a good idea? I don't see how.

But how would you differentiate that from a strong AI giving you the same output? The fact that you wrote a bot and called it an AI is really the smoking gun in your argument, but it isn't clear that distinction would be so evident in the same situation, or that it will remain so in the future as AIs become more advanced.

and that reassured him he was not “mad or delusional”

Protip: If you need to be reassured in that by a chatbot, you are mad and delusional. Also if you have sexual (or any other) relationship with a chatbot.

I really wonder how many years off we are from something like this 'not' looking crazy.

Blade Runner 2049 had it looking pretty good.

I think this has nothing to do with AI. This guy was a terminally online type who became radicalised due to the lack of meaning/purpose in his life and tried to let out one last primordial scream before he went.

On a side note, this case indicates that you should never commit a high-profile crime and be taken alive, because your embarrassing post history will be read out in court for the whole country to witness.

On a side note, this case indicates that you should never commit a high-profile crime and be taken alive, because your embarrassing post history will be read out in court for the whole country to witness.

Eh, unless you've got something that sounds somewhat plausible like Kaczynski had.

The sorts of people who attempt high profile crimes are notably very good at making that determination.

Like the Zodiac Killer?