
Culture War Roundup for the week of December 4, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


I'm downvoting this solely because you made me think about the thing you contrasted with Ana de Armas. I resent having to think about that turd or Bruce Jenner's family, ever.


Refrain from low effort comments that serve no purpose but to express disgust with random celebrities.

Are you ok?

Like, literally. I'm worried I'll get modded for asking this, but I have a background in psychology and the writing style here feels pretty schizotypal, it sounds like you're at the age where that commonly pops up, you might want to look up symptoms and decide whether you want to get checked out.

Unfortunately no, he's writing within a whole subcultural style common on the far-right, rather than being a fluke.

Even assuming your comment is a sincere expression of concern (* doubt *), it is hard to imagine that he's going to go see a psychiatrist because a rando on the Internet suggests he has schizophrenia.

Or schizotypal personality disorder, and I agree, that's why I said 'look up symptoms and see if you want to see someone' rather than 'go see someone'.

Anyway, yes, I knew I would probably be doubted, but it's a sincere sentiment.

Is that sentiment downstream of the fact that they appear to be far far into my outgroup and maybe their writing is actually sensible and full of references and context and assumptions I don't understand, rather than actually being disjointed and telegraphic as it appears to me? That's very possible, which is why I told them to use their own judgement.

Schizophrenia is a psychotic mental illness with a reasonably well-understood malfunction of neurotransmitters; it requires treatment with medication, has a well-understood natural history in patients, and has an increasingly well-understood etiology that includes genetic components. Psychosis, in the context of medicine and psychology, refers to an inability to understand what is real and what isn't. Schizophrenia is notorious for often developing in young adulthood.

Schizotypal Personality Disorder is a Personality Disorder, which (loosely) means a persistent and pervasive pattern of maladaptive personality traits that deviates from societal norms. These symptoms and experiences are present throughout a person's lifespan and may improve with therapy and age; while an individual might not meet diagnostic criteria prior to age 18, you'd expect said features to be present in the teenage years and potentially earlier. These people can generally be expected to have intact reality testing, and they are also unlikely to present for and/or need psychiatric or psychological care. It is not schizophrenia, despite the similar-sounding name.

Neither can reasonably be inferred from a highly limited selection of posts on an online forum, because other key diagnostic features are required. It's far more likely to be someone who is just weird (not an illness!), passionate about a hobby-horse or bugaboo, or just plain young... or drunk... or with an atypical writing style... etc.

Depending on the content of the post you might find evidence for a (solitary) delusional disorder but that's not the claim here (and of course as modhat says...).

I disagree with you; delusion isn't actually the only thing that can be inferred. There's also a specific style that I've basically got pegged as "badly psychotic"; the Capitalised Important Concept Rant.

The example I have to hand is the Female Void essay (which I can't actually find without some sort of well-poisoning at the start, so skip down to "Writings from 'Reads'"). I actually agree with a substantial chunk of the content of that essay, but there is very obviously a layer of crazy paint over it all; my understanding is that this is "thought disorder". Not every psychotic writes that way, but I'm not aware of sane people writing that way.

And, for what it's worth, FR doesn't come off to me as having written a CICR, and the content doesn't strike me as obviously delusional.

Your description of what Christians "meant" when they asked "how can you be moral without God?" is so charitable to them that I'm going to ask for evidence. I have never seen this as the intended meaning.

Secondly, I have no idea what "clicks" you are talking about when you say you need to use Lizzo in your post. Is that the image that appears in the email that goes out? For that matter, why those two women in particular?

I think Ana de Armas has a special cultural place after playing Joi in Blade Runner 2049, one that I would associate with right-wing doomerism/depression/incels.

Anyway, most of you have heard of Ayaan Hirsi Ali’s ‘conversion’ if one can call it that, so I won’t dwell too much on the fact that it includes no indicators of actual belief.

As a Catholic/Christian, I don't get to judge the sincerity or not of her conversion. It may be that she is speaking about it publicly in ways to convince non-believers as to why she converted, and the more efficient way to do that is list off secular advantages. How many atheists in the debates of the 2000s were ever convinced by believers giving accounts of mystical experiences?

If she says she believes, I have to take her at her word. I can't see into her heart or mind to judge if she's lying about any of it. And "yeah right she's converted, I don't see any belief there" is a very atheist way to approach the angle of "I can't imagine any reason a non-idiot would convert to believe in crazy fairy stories, so she must be lying about it".

And "yeah right she's converted, I don't see any belief there" is a very atheist way to approach the angle of "I can't imagine any reason a non-idiot would convert to believe in crazy fairy stories, so she must be lying about it".

I am admittedly an atheist that sees this as, "yeah right", but it is absolutely not because of the latter position. I take people's religious beliefs seriously, I like hearing from them about why they hold those beliefs and what it means to them, and I know plenty of non-idiot believers, so I have no need to imagine a reason to convert - they're right there! Sure, the specific tenets of Mormonism really do seem silly to me and it's hard to get beyond that, but I do not doubt the seriousness, intellect, or authenticity of many converts. Ali's conversion seems hollow and lacking in any sort of deep connection to Christianity, it seems like the classic belief in belief and wanting to believe Christianity for pragmatic reasons related to cultural benefits of the religion rather than an actual, earnest belief in the literal truth of the claim that Christ is King, was crucified for our sins, and rose from the dead. I don't know that she's insincere, but the lack of plainly stating that her reason for being Christian is that Christianity is literally true is reason enough to doubt her. I don't think that is an atheist position at all - I know intelligent Christians and they say that the reason they're Christian is because Christianity is true, not that they're Christian because they want it to defeat Islam.

She wrote an essay called "Why I am now a Christian" (https://unherd.com/2023/11/why-i-am-now-a-christian/) and left out any argument as to why Christianity is true, so it's natural to conclude that the truth-value of Christianity was not an important factor in why she is Christian.(And she references the title in the beginning of the article, so it's not just one of those stupid headline writer things.)

As it happens, I don't suspect she's lying. She just seems to think "I am Christian" means "Christianity is useful", which is just another form of the common practice of confusing what is nice to believe with what is actually true. This practice is seen all the time, including in both sides of the Christian-Atheist debate. It's not a lie, but it's sloppy logic around what Christianity actually is, so I concur with OP's use of scare quotes in reference to her "conversion" at least based on the best available information to me.

As it also happens, I am not Christian, but I believe Christianity is useful and probably even load-bearing for the USA.

I think this whole schtick about the "truth" of Christianity is a kind of mud-fighting tactic used by New Atheists, but also by some other people: this is what we mean by truth, so come down into the mud with us, where we can use familiar jiu-jitsu techniques to overpower you.

To avoid the topic of religion and the existence of God, consider the question "Does Sherlock Holmes exist?". The New Atheist position on this matter would be something like: he is a fictional character, he was invented, and therefore it is false to say that Sherlock Holmes exists. There is no grave of Sherlock Holmes; he did not do any of the things described in the books, as he is not real. Okay, but another position could be that Sherlock Holmes may actually be the most famous of all detectives, that more people know his character than know any living detective, and that he was important in shaping the real lives of many people, including kids who became detectives.

Moreover, this prioritization of truth in atheist, and sometimes rationalist, circles is not what it seems. One obvious example is that of correct model usage. One model can be useful in one situation and misleading in another; it may not be that easy to simply say that the model is true or false. Which is actually pretty close to the above paragraph: Christianity may be "useful" as a metaphilosophical system that binds a certain ontology, teleology, epistemology, axiology, and sociology, and points them toward outcomes you may agree are good even from an outside-of-Christianity view. In that sense it is useful, and thus an at least in some sense "true" model of the world, in a similar sense as Sherlock Holmes may be a useful model of the world of detectives, let's say.

Additionally, and related to the above, it is also hard to simply say that "Truth" is supposed to be the ultimate good. For instance, Sam Harris is also known for his utilitarian stance, in which he thinks that people "should maximize human flourishing"; that is his teleology of people. He also thinks that even if you are epistemologically uncertain about how to define maximization and flourishing, you can at least say that you want to prevent suffering, in a kind of negative utilitarianism. But then this begs the question: what if the best way to minimize suffering and maximize human flourishing is for people to be Christians and believe something that is not "true" in the strict sense?

I think that atheists, and to a large extent rationalists, are too quick to point out hypocrisy in moral systems like Christianity, let's say when it comes to believers' faith in the truth of biblical events like the flood or creation. What is omitted is that everybody is hypocritical about something; we do value "usefulness" even above truth in a lot of our actions.

Yet I would not be truthful if I attributed my embrace of Christianity solely to the realisation that atheism is too weak and divisive a doctrine to fortify us against our menacing foes. I have also turned to Christianity because I ultimately found life without any spiritual solace unendurable — indeed very nearly self-destructive. Atheism failed to answer a simple question: what is the meaning and purpose of life?

She writes earlier of the doctrines she learned as a Muslim and how becoming an atheist freed her of such fears. I think if she's now saying atheism is no longer enough, then the old doctrines of God, heaven, hell, and the rest of it must be making a recurrence.

She's writing for a secular audience. A Southern Evangelical style testimony of how she found the Lord and was convicted of being a sinner until she accepted Jesus into her heart is not going to be taken seriously by them any more than if a TV preacher recounted the same.

Imagine a creature that, while married to Ana de Armas, would empty his shared savings account, and sink into bankruptcy in order to pay Lizzo to ravage him [2].

...

[2] Yes, I’m aware that in our current context, the ‘creature’ is more likely to be female. I just needed to use Lizzo here for the clicks.

I'm confused. As far as I can tell Armas and Lizzo are both straight. Armas is married to a man and Lizzo is dating one. Why would the "creature" be more likely to be female?

My political journey began on the left. Every once in a while, I catch myself trying to rewrite the story of one thing or another that I believed so as to make it more palatable to the person I now am. But there is one story I’ll never have to rewrite. It’s the story of seeing a ‘letter box’ for the first time, and knowing it does not belong here. And it’s the certainty that those that would bring it here do not belong in power anywhere.

What is wrong with letter boxes?

More generally I feel like this post could do with a good deal less euphemism or metaphor and a good deal more actual argument.

It's a picture in his Substack. 'Letter box' means 'woman wearing a niqab'.

'Letter box' means 'woman wearing a niqab'.

Ah, I wondered about that! I too was going "letter box? what's wrong with that? does he mean this? or this?"

and the perfectly healthy sexual instincts that once bonded you to your wife, now demand that you should betray her [1]

[1] I’m actually in my twenties AND SINGLE, so I have no personal experience with the temptation described.

I'm in my late 30s and married and have to say that one of the mildly surprising, but quite pleasing aspects of marriage is that this temptation isn't strong at all. While granting that people's sex drives differ, I have mostly settled on habitual womanizers just being scum, deeply immoral people with little regard for their commitments. To your point, it's actually somewhat interesting why I hold to more or less traditional Christian ethics despite not sharing the religion, but however it came to be, it isn't actually difficult to follow through on keeping complete commitment to my wife. Perhaps societal belief in the Christian God was necessary to so thoroughly inculcate that moral intuition; I don't know.

If I thought about it I could probably summon to mind some examples of people who risked their lives to save others but also cheated on their wives/husbands... Patton and Petraeus you might not count...

...

Well, this might be a bad example, but there's always MLK.

I searched for Righteous Among the Nations who had extramarital affairs and got this:

https://www.nationalww2museum.org/war/articles/felice-and-lilly-uneasy-berlin-love-story

The lesbian affair was after she separated from her husband, but it says she had affairs before that.

There's also Wilhelm Canaris, who is a candidate for Righteous Among the Nations. Initially a supporter of the Nazis, he became head of the Abwehr, was horrified by the atrocities, tried to protest within the chain of command, then passed information to the British and also saved some Jewish lives. Eventually the Nazis found out and killed him. And he cheated on his wife with the woman he passed information to, a Polish spy. As far as moral courage goes, he could have just taken an early retirement or fled to Switzerland or something, but instead he risked and ultimately sacrificed his life for the greater good.

I'm in my late 30s and married and have to say that one of the mildly surprising, but quite pleasing aspects of marriage is that this temptation isn't strong at all.

I felt the same way in my 30s and when my marriage was going well. It was blissful to not be tempted at all. Fast-forward ten years, with little incompatibilities growing exponentially, and things might look different. Although I have not done so, I have much more empathy now for some men who do cheat (not the compulsive, chronic cheaters, but the ones who feel abandoned in some aspects of marriage). There may come a time when it looks like the least bad of the terrible options.

I think this is why having a wife who's 7 to 10 years younger is generally a good idea.

...I have no idea what you're talking about.

You don't like left-wing people? You don't like Western non-Muslims who are sympathetic to Muslims?

I feel like you're using a lot of high-flown rhetoric to dodge the risk of actually making arguments - of needing to say that something is true, and then explaining why.

What am I supposed to get out of this beyond, "Muslims are bad, people who like or are even just neutral towards Muslims are inhuman scum, please subscribe to my Substack"? If you have a point, please make it.

I'm just going to say that I do not believe you are a "former leftist and atheist who's cringing at other atheists." This looks like the umpteenth iteration of a particular persona who keeps returning here.

As an atheist myself, I could never help but cringe when atheists responded to the “without God how are you moral” of the Christian evangelicals with the “Are you saying the only thing stopping you from murder is God’s judgment”?

The rejoinder you are complaining about is indeed a certain kind of smug gotcha line that's kind of cringe, but it's a rejoinder to an equally smug and cringeworthy argument. When theists try to play gotcha like that, they invite gotchas in return. This is why atheists who've gotten over their "arguing with evangelicals" phase usually aren't interested in that kind of debate. I'm fine actually talking about why I do or do not believe in God. But the sort of Christian who uses the "How can you be moral without God?" argument (usually followed by some variant of "You don't actually believe there is no God, you're just pretending") isn't interested in genuine discussion, but in seeing who can win the gotcha contest.

I think your Lizzo/Muslim analogy is kind of ridiculous. I don't personally care whether or not Ayaan Hirsi Ali really believes in Christianity, but I can see why actual believers would care if someone is just wearing Christianity as a skin suit. You are overthinking the attraction to Islam; it's been pointed out here plenty of times that the left's infatuation with Islam isn't because of any intrinsic qualities of Islam (if it were practiced mostly by white people, they'd be condemning it as a Bronze Age death cult). It's purely and solely because Islam is mostly practiced by brown third-worlders.

This looks like the umpteenth iteration of a particular persona who keeps returning here.

You could just ban them. It's probably also worth banning 'thenether', who's more obviously ban evading. (I haven't looked closely enough to be confident in either case, but it seems likely)

The rejoinder you are complaining about is indeed a certain kind of smug gotcha line that's kind of cringe, but it's a rejoinder to an equally smug and cringeworthy argument. When theists try to play gotcha like that, they invite gotchas in return.

Well, I am interested in what basis atheists build their moral foundations on, if any. Generally it turns out to be some form of utilitarianism, if they have one, and I go "Oh, okay" because I'm not that convinced by utilitarianism. The ones that don't have any really considered basis just seem to assume that it's in the water or the air that we'll be nice to each other, or concerned about the marginalised, or whatever, and they very vehemently deny that they are living off the remainder of the cultural Christian capital that formed such sentiments originally.

The ultimate basis of morality is our evolved brain structure. We have empathy that causes us to feel others' pain as our own, and logic that allows us to deduce the consequences of our actions. Everything else - from theology to utilitarianism - is just tinkering at the edges. In the final analysis, I am a good person because when I do bad things I feel bad.

Yes, I will agree that modern Western culture evolved from Christianity. This is evident in how compatible the Christian religion is with the modern secular state. But it clearly isn't Christianity (alone) which brought us to where we are. For example, Nietzsche characterized Christianity as "slave morality", but even that didn't stop Christian cultures from keeping slaves for centuries - if anything, slavery by Christian cultures was nastier than the slavery practiced by classical cultures that had no theological reason to spurn the practice. I think a genuine belief in a deity has a fairly marginal effect on how "good" someone is.

I'm a religious agnostic who thinks that utilitarianism doesn't make sense as a moral principle.

I give credit to Christianity for having helped to form modern morality, including my own, I just don't think that belief in Christianity is necessary for that morality to continue.

I don't have any basis for my morality other than that I am accustomed to it since childhood, I sometimes feel guilt when I hurt others, and I have had a few mystic experiences, drug-fueled and otherwise, in which I felt that other sentient beings were the same thing as me, just looking at the universe from a different angle.

But I don't think that Christians actually have a good basis for their morality either. "God said that we should do it this way" does not actually get rid of the question of what to found morality on, since for me the natural reply even if I believed that God existed would be "Why should I care what God wants? Why is God's morality more important than any other?".

"Why should I care what God wants? Why is God's morality more important than any other?"

If an omnipotent, omniscient being thinks that A is good and B is bad, it would be an act of insane hubris to imagine that you could know better than him. The more so when this being has the power to sentence you to eternal suffering or bliss.

it would be an act of insane hubris to imagine that you could know better than him.

Would you still feel this way if you discovered that God said raping and murdering strangers for fun is always good?

There are already people who do that for a given definition of "stranger"

You can't read minds, so you don't know what the being thinks. You might know what he claims to think, but you don't know if he's telling you the truth.

True, but Goodguy seemed to assume he knew what God meant but still didn’t see why God’s view should be privileged over any other. Your objection I understand; his I didn’t.

Same as Christians, more or less.

There are good things and bad things. I prefer more of the former and less of the latter, both for me and for others.

At some point you have to accept an axiom. Atheists don’t get an extrinsic answer to the question of “why should I prefer good things?” Christians do, because Jesus is Lord, and by definition His preferences are correct. From the outside, though, this begs the same question: “why should I prefer what the Lord prefers?”

There's also that pesky problem of: "how do we know what The Lord prefers?" Many Christians have killed other Christians over this question, and I don't think it's been resolved.

Well, I am interested in what basis atheists build their moral foundations on, if any.

It's not an unfair question in itself. I was responding to the OP's complaint about the classic dialog:

Theist: "If you don't believe in God, what keeps you from murdering people?"

Atheist: "Are you saying that it's only your belief in God that keeps you from murdering people?"

Obviously, that dialog does not result from either side having a genuine interest in the underpinnings of the other's morality.

I'd argue that retort serves a very useful purpose: Most theists haven't thought much about their moral foundations beyond "because god said so" and it might be the first time they've ever had to consider that. My dad can't even model what it would hypothetically be like to not believe in god. He just can't grok how an atheistic mind works re: morals or gratitude (and it's not because he isn't genuinely interested).

Most self-described atheists I know are the type of person who has actually considered some moral philosophy.

And I gotta say it's hilarious when Christians pull out the "[nontheists] are living off the remainder of the cultural Christian capital that formed such sentiments originally" line when so much of Western Culture, like say freedom of conscience, came about directly from Christians having to figure out how to stop killing each other. Secularism was a compromise.

There's no question Christianity has influenced Western Culture, but I rarely see Christians willing to talk about how much of "Traditional Christianity" had to be shaved off to get to where we are the last few centuries.

But the sort of Christian who uses the "How can you be moral without God?" argument… isn't interested in genuine discussion, but in seeing who can win the gotcha contest.

I think in a lot of cases, that is a sincere question. Most Christians are at least nominal deontologists, with God as the ultimate judge of what is right and wrong. If that’s the only moral system you know and can model, an atheist is going to seem like an ethically unmoored individual. In that context, the question isn’t a gotcha, but indicative of ignorance of utilitarianism.

In that context, the question isn’t a gotcha, but indicative of ignorance of utilitarianism.

Or a rejection thereof, which could be based on a number of different grounds. But nah, some people would rather just assume bad faith in their interlocutors when an issue cuts them too personally. That is particularly present here.

Nah. I have seen this discussion many times, and those exact arguments always come out. Kind of like your "Ooooh, did this cut you personally?"

I only assume bad faith when bad faith is evident.

I mean, your comment was imagining a completely fictitious interlocutor and concluding bad faith for all of them. Literally zero "evidence" for any bad faith to be "evident", except the type of evidence you have conjured up in your mind for your fictitious interlocutor. Glad to know both what your standard of evidence is (literally imaginary) and what your interpretation of the spirit and rules of this forum is.

I disagree that most Christians are 'at least nominal deontologists', if only because I think most Christians do not know the word 'deontology'.

My guess would be that most Christians have a kind of 'folk morality' - they don't have explicit theories of ethics, but rather have an organic, messy series of moral convictions that they have not systematised, but which are heavily influenced by Christianity as they understand it (which depending on their tradition involves things like reading the Bible, what they learned in Sunday School growing up, what their ministers or pastors tell them, what they absorb via osmosis from other Christians, and so on).

Most Christians therefore probably endorse some strict moral rules or duties (e.g. the Ten Commandments), also endorse virtues (e.g. the Fruits of the Spirit, "let the same mind be in you that was in Christ Jesus", etc.), and also are sensitive to consequences (e.g. "by their fruits shall you know them"). Depending on which of these things you emphasise, you can try to spin Christianity as deontological, virtue-ethics-focused, or consequentialist (of which utilitarianism is a subset), but I think any attempt to simplify it down to one of them would be misleading.

It seems more likely to me that there is no general consensus on these kinds of ethical theories among Christians. Rather, Christians as a group probably more-or-less endorse the ideas that they should follow moral rules, that they should strive to become good people, and that they should try to produce good outcomes for the world. And if you try to force them to consider edge cases where some of those principles conflict, as philosophers do in order to refine theories like deontology or consequentialism, I expect most Christians would umm and ahh and not have clear answers.

So with that in mind, what's going on with the, "How can you be good without God?" question?

I suspect it's probably just as simple as the fact that a lot of Christians regularly incorporate God into their moral reasoning. When faced with an ethical question, they ask themselves questions like what would Jesus do, or what does the Bible say about this, or they engage in practices like praying for guidance. If you do that a lot, you're from a community where that is the default form of moral reasoning, and you have very little experience with other people... well, people who don't do it are going to seem weird. Hence the question - how do you do morality, in a practical sense, without this framework? What framework do you use instead?

Most Christian denominations have such an idea as moral theology, and it's in practice treated as divine command theory by less well-educated believers. Better educated ones might have developed understandings of natural law, virtues, conformity to the will of God, etc.

Yes, there's certainly a more informed Christian ethical tradition that includes a great deal of reflection on this. However, my sense was that the question "how can you be good without God?" was mostly not a question coming from theologians. It was a lay question, and as such I'd bet that it had more to do with the practical experience of moral decision-making than it did with ethical theories as such.

So with that in mind, what's going on with the, "How can you be good without God?" question?

If there is no objective standard of morality or ethics, and if you do not have an authority from which you get such a standard, how do you arrive at: (1) sex is fine as long as all parties consent (2) women are equal to men (3) we should help the poor and needy (4) other standards which are not based on 'nature red in tooth and claw'?

It turns out to be some form of utilitarianism, and the exterior moral authority is Bentham or somebody. But Jeremy Bentham grew up in a Christian society, so the moral background to his foundation as an ethical being is derived from that, whether he knew it or not.

Basically, if we're springing off a purely materialist universe with nothing but the forces of evolution at work in forming us, how do we derive any standards? And if those standards are admitted to be purely subjective, then we can't condemn the past for burning witches or owning slaves, because that was their understanding at the time, and their standards were just as valid for them then as our standards about gay rights are for us today.

Basing your morality on utility, where that function is 'happiness' or some other measure, is an attempt to arrive at an independent objective standard of what is good and what is not, just as much as the project of religion.

I'm not sure how utilitarianism actually does any of that?

If we suppose that there is no objective standard, no objective normativity to the universe, and no authority or lawmaker capable of providing such, then there's only social convention, right? Moral rules are not different in kind to legal rules - they are shared fictions.

And obviously you could build a shared fiction on any foundation you like. Utilitarianism is one option, but in this hypothetical godless, moralityless universe, there are still plenty of other options. The categorical imperative is just as possible a candidate for foundation as is any concept of utility. Take your pick. All that matters is getting enough people to agree on it.

I'd also nitpick that there isn't an exterior moral authority for utilitarianism; Bentham is of only historical interest. At any rate, if we live in a universe without objective values, then all that remains is whether we collectively decide to adopt utilitarianism (or whatever it may be) as a kind of shared code of conduct. That's it.

Basing your morality on utility, where that function is 'happiness' or some other measure, is an attempt to arrive at an independent objective standard of what is good and what is not, just as much as the project of religion.

This is true of some utilitarianisms, but not all, I would say? I think this is a fair criticism of e.g. Sam Harris' The Moral Landscape, which rests, ultimately, on the unjustified assertion that 'human flourishing' or 'human welfare' is good and therefore morality is the maximisation of this good. But J. L. Mackie takes a utilitarian position (or rather, a nuanced one he calls a kind of 'rule-right-duty-disposition utilitarianism'), and he does this after bluntly admitting that there is no objective standard, there are no objective values, and this is just an attempt to try to figure out how humans can live together in a way that he and many others would find congenial.

this is just an attempt to try to figure out how humans can live together in a way that he and many others would find congenial.

And that's a fine explanation. I'm saying that there is no reason to say that religion is made-up or fake or the rest of it, because it's all made-up and fake. There is no objective universal law of mercy for the downtrodden. If we invent a standard we want to apply, then it doesn't make a difference if it's "god says to love our enemies" or a utilitarian philosopher. One is, by this measure, just as real as the other. Saying "but god does not exist" is no objection, because nothing exists to make rules except how we decide we want to make rules, and if I want to have a god who is a rule-giver, that works just as well as creating a philosophical basis for maximising human flourishing. We're both pulling our justification out of the aether.

How is it an "unjustified assertion" by Harris to define "the wellbeing of conscious creatures" as an axiom on which to build moral principles?

You have to start somewhere and there's literally no way to do that without asserting some kind of value/goal (or establishing a deity's authority to dictate).

Basically, if we're springing off a purely materialist universe with nothing but the forces of evolution at work in forming us, how do we derive any standards?

Some might say this has already happened.

And if those standards are admitted to be purely subjective, then we can't condemn the past for burning witches or owning slaves, because that was their understanding at the time, and their standards were just as valid for them then as our standards about gay rights are for us today.

I certainly manage it. There's no inherent contradiction between moral relativism and considering your own morality to be better. Anyone claiming otherwise is engaged in the same kind of delusion as belief in free will.

It only seems like that's "not allowed" to someone who earnestly believes that there's even an objective source of morality to go off in the first place.

There's no inherent contradiction between moral relativism and considering your own morality to be better.

You can claim "By my own standards, my morality is better" but you can't impose your standards on the past, because you have no idea what future generations, with their standards, will say about things you think neutral or even innocuous. If there is no objective standard but "what we think best at the time" - yeah, maybe we know more about some things now. But if they didn't know that back then, then they can't be blamed for not holding the same standards. You wouldn't burn a witch because you don't believe in witches. What would you do if you did believe in them? What do you do now, when you do believe some thing or person or cause is not just wrong, but actively evil and harming humanity?

You can claim "By my own standards, my morality is better" but you can't impose your standards on the past, because you have no idea what future generations, with their standards, will say about things you think neutral or even innocuous

I don't dispute that at all, I simply don't think anyone can do better, or if they claim to do so, they're grossly deluded or lying.

Maybe one day we invent or discover a hyper-compelling form of morality such that almost all people adopt it. Or we become better at memetic engineering and find one that sticks. It still won't be objective, but that's an impossible objective in the first place.

You wouldn't burn a witch because you don't believe in witches. What would you do if you did believe in them?

Burn them. If it were today, you can bet I would do quite a bit of research to make sure I wasn't killing innocent people, which would hopefully dissuade me - but if I were genuinely convinced, I'd do it.

I think I have better epistemics than average, but I'm not so full of myself that I think that if I were a medieval peasant, I'd immediately form the Enlightenment.

What do you do now, when you do believe some thing or person or cause is not just wrong, but actively evil and harming humanity?

Like so many people around today, that I can see with my own eyes and interact with online? Live and let live, evidently. The only people I've ever burned are pregnant women, which sounds really bad until you realize it was in the context of cauterizing surgical bleeds.

In the limit:

[To Hindu priests complaining to him about the prohibition of sati, the religious funeral practice of burning widows alive on their husbands' funeral pyres.]

Be it so. This burning of widows is your custom; prepare the funeral pile. But my nation has also a custom. When men burn women alive we hang them, and confiscate all their property. My carpenters shall therefore erect gibbets on which to hang all concerned when the widow is consumed. Let us all act according to national customs.

-Charles James Napier

So you agree to let widows be burned, so long as it's online and you don't have to do anything about it?


There's an interesting Calvinist perspective that I respect for taking a firm stance on the Euthyphro dilemma. The good is that which is loved by God, and it is the fact that God loves something that makes it good. In this sort of worldview, you really can ask the question, "How can you be moral without God?" because morality does not exist as a concept apart from God.

The Goal of the Futurist Right is not to create some new orthodoxy that can take the people who put us in our current predicament, and align them properly with the interests of our society.

As a Christian, I must reject this. For the Atheist Right this may be your goal, but it is decidedly unchristian, and likely bad even from an atheist utilitarian perspective. Jesus came to save sinners, taught us to love our enemies, and spent his time teaching and hanging out with the lowest scum of society while the experts in the law mocked him and ultimately killed him. The easiest way to become evil is to be so sure that you are good and your enemies are evil that any act against them is justified.

This is related to, though perhaps a slightly different spin on, Scott's Guided by the Beauty of Our Weapons: a moral rather than a rational/Bayesian version. If you attempt to ruthlessly crush your opponents, and they attempt to do the same to you, then the stronger one will win, with no correlation to who's actually correct. And to what end? You're no more likely to be on the correct side, and if you resort to evil methods in your pursuit of victory then you can rule over an evil society with you on top instead of them on top, I guess.

But if you do what's right, and you are more good and more kind then you will draw people to join you and simultaneously gain strength and build a better world. If you try to convince people that you are right, and they try to convince you that they are right, then if you are actually right you will be more persuasive on average.

Now, it's important not to be naive about this. We don't need to fill our streets with radical leftists and/or Islamists who seek to destroy us and who build their own subcultures where they reinforce their beliefs and never convert. Survival as a society and culture is an important goal. But converting other people is also an important goal, not simply because they will be allies and help us, but also because they are human beings who matter even when they do evil, and helping them to be better is the right thing to do. Marginalizing people might be positive as an instrument for disincentivizing their behavior and limiting the damage they can do, but it is negative as an ultimate goal; the actual end goal should be conversion.

Quality contribution! You’re absolutely right. Jesus was compassionate but also discerning. He didn’t naively just stand there and let his religious enemies stone him to death, but he actively engaged in dialogue with them and revealed their hypocrisies.

I don’t know much about Ayaan Hirsi Ali but I’m willing to be charitable to her searching for the truth in Christ. Her ‘road to Damascus’ event may be sincerely in her future. Let’s pray that her heart be softened and receptive to the word.

Is this how people see my more cryptic writing? Because it looks like a load of asinine and extreme logorrhea that at most can poison the theoretically fruitful topic.

You tend to lay out the necessary groundwork before getting carried aloft where the scenery is sometimes strange and unfamiliar but the pilot is competent and the navigator knows where they're heading. OP is more like an angry cab driver.

To be honest, I'm not sure because you often write in such a way that it's hard for me to figure out what you really believe as opposed to what you are just playing with. Are you one of those guys who thinks that Russia is a crypto-colony of the British?

More or less.

"Crypto-colony" does not mean anything falsifiable and predicts nothing. I think Russia is a generic low-agency country, in the manner countries with negative selection in elites tend to be, and consistently acts against both its "geopolitical" and its population's long-term interests, yet in the interests of savvier countries, mainly the US and the UK, although it seems that Russians both high and low interpret their retarded and harmful activity as self-interested. This is also strangely accompanied by Russian petty elites squealing like teen girls about the prospect of their child becoming a Londoner; there's a distinct vibe that it's better to be a struggling student in the Metropole than an oligarch at "home", and I've seen this repeatedly since childhood. The prestige of the UK is out of proportion with that nation's observable merit.

To what extent this is due to any deliberate effort, or just historical inertia, or needs any explanation at all, I am not sure.

Could you elaborate on your reasons for considering the UK savvy? AUKUS looks to have been a good move, and Brexit is too complicated and long-term to judge right now, but beyond that our foreign policy seems to be mostly self-harming. We cut ourselves off from cheap oil while refusing to develop our own reserves, we sold our factories and our best R&D companies to foreign owners, we fought in Iraq and Afghanistan for no reason. What are you seeing that I don't?

This is also strangely accompanied by Russian petty elites squealing like teen girls about the prospect of their child becoming a Londoner; there's a distinct vibe that it's better to be a struggling student in the Metropole than an oligarch at "home", and I've seen this repeatedly since childhood. The prestige of the UK is out of proportion with that nation's observable merit.

This on the other hand I get. I feel relatively confident in saying that the upper-bound for opportunities for high-merit people in London is greater than that of St. Petersburg / Moscow. The median standard of living may be lower, on the other hand. So it depends on your situation and your prospects. Also, prestige takes a couple of centuries to really fade.

None of this is particularly damaging, and foreign owners of consequential companies are fellow Anglos; fighting NATO wars helps build team spirit. Anyway, what matters is differential damage – consider how gimped Germany is by the ongoing war, and how relatively unscathed the UK is. (Or, as the all-time greatest example, how Anglos got us to whittle down Napoleon, whereas the most rational move would have been to side with him… and again in WWI…) Russians tend to think of geopolitics in terms of handicapping and undermining civilizational competitors, and point to Albion as the chief culprit. But even as far as the positive agenda goes – somehow the moribund, overregulated UK has both DeepMind (despite it nominally being bought by an American company) and the lead in regulating AI for everyone else.

upper-bound for opportunities for high-merit people in London is greater than that of St. Petersburg / Moscow

It's not just about Moscow, it's about the entire rest of the world. You can be an oligarch's son in the Motherland, or if you have any merit, you can have a career in the US, but somehow they all salivate about the degree from London School of Economics, or even shaking hands with Brits.

That said, I personally do not feel like the UK is very important.

Okay, I think I see where you're coming from. To sum up as briefly as possible: the Anglos have always been good at handicapping civilisational competitors (Napoleon, Axis, Russia) and continue to be so. Anglo powers are still number 1 and so clearly their foreign policy is still effective. Is that about right?

A few serious points of contention:

  1. I was thinking of policy in the last 50 years rather than the last 200. Quite happy to admit that Napoleonic and WW era policy was mostly pretty decent.
  2. I do not see America and Britain as comfortably belonging to the same civilisational group. We've been somewhat hostile for most of our history, and the social structure & cultural mores were very different until about 20 years ago. American policy over the 20th century has been to bring Britain down from 'friendly rival' to 'cringing servant', and IMO it has done so successfully. We also share a language, so British culture is rapidly being overwhelmed by the culture of the Imperial Centre.
  3. As a consequence of the above, the fact that British companies are mostly IPOing in America isn't comforting to me. We're being drained dry of anything that could let us stand on our own two feet. And I've seen nothing to indicate that British AI regulation is anything other than Rishi Sunak's desperate (and self-harming) bid for relevance. Those who can, do. Those who can't, regulate. Witness Tsarist attempts to outlaw machine guns prior to WW1.
  4. I agree with you that Germany does seem to have suffered much worse from the war. I'm not sure how much of that is just that they were a booming manufacturer with further to fall, but it does seem odd.

(Or, as the all-time greatest example, how Anglos got us to whittle down Napoleon, whereas the most rational move would have been to side with him… and again in WWI…)

I don't see how any of these make sense for the Russian elite. In the Napoleonic wars Napoleon was forcing liberalization throughout Europe which was against the interests of the aristocratic Russian elites. In WW1 it was the Russians who started things against the Germans with their alliance with Serbia and an alliance with Germany would just lead to them conquering France and becoming strong enough to invade Russia, like they did in WW2.

You presume compromises and contingencies which have taken place in reality, and Anglo narratives about them. Napoleon was not essentially committed to liberalize Russia, and indeed did not emancipate the serfs on territories he entered, which is part of the reason for his failure.

Yes. But don't worry, you have a long way to go to reach the level of Rose of the World.

This reads more like Moldbug started an essay but then got bored and just posted the intro.

No. You have more links.

Also, I can tell the difference because I don’t auto-skip the rest of your paragraphs. I did get through the title though, and “OUR ENEMY IS BIOLOGICAL” sounds like another departed manifestoposter whose greatest hit went: “OUR STRUGGLE WITH CHINA IS RACIAL”. Is this an alt right verbal tic?

No, it’s just his verbal tic.

Hilarious that thenether, aka jewdefender, aka foreverlurker, aka motteposter, is denouncing his brother in alts, futuristright, aka lepidus, (aka JB, darkrationalist?). Those two should get a room. Not here. Hash out all of that JQ and make up new slogans like “we are plural, our solution is final, our blood is ancestral, we need to corral…” , then edit the shit out of their manifestos, and come back as one, still using the royal we.

By the way: far as I can tell, JB has made it big on twitter after finally letting go of our sorry lot and focusing on reading academic literature.

That’s… good, actually. No matter how wrong he is, he can’t possibly be as wrong as the anti-HBD science popularizers. Perhaps we should all go on the twitter, and tweet. Then again I’ve always considered our incestuous squabbling more hobby than calling.

It’s always the best of us who do.

Your cryptic writing is generally more interesting, at least since it quotes extensively from Russian sources one would not otherwise be exposed to.

Israel has been in a stalemate situation with Palestine for a long time. One has to wonder why Israel has not been able to impose their will on Palestine even though they are the superior State and they have the backing of the global hegemon. However, maybe this stalemate is a feature of the incentives. Imagine you are a US congress critter and there are companies in your state that are supplying weapons to Israel as part of the US->Israel defence aid. If the Palestinian question was resolved then a bunch of people in your district may no longer be employed making weapons. Is this a possibility? Are congress critters actually intelligent or Machiavellian enough to carry out this policy? People assume the current situation is a result of 'commies' in the State Department, but maybe it's because of 'capitalists' in the State Department.

If the Palestinian question was resolved then a bunch of people in your district may no longer be employed making weapons.

I highly doubt that; they'd find some new reason to send military aid to Israel. It's not like they need the aid they receive: Israel is a wealthy nuclear power and is more than capable of producing its own munitions.

See graph, observe the odd one out:

https://en.wikipedia.org/wiki/United_States_foreign_aid#By_country

One has to wonder why Israel has not been able to impose their will on Palestine even though they are the superior State and they have the backing of the global hegemon.

Killing a person is cheap; subduing or reeducating them is expensive. From the reports, China got its way in Xinjiang, but damn, that operation was expensive. Just killing the Uyghurs would have been way cheaper.

Killing people en masse has got much more expensive these days, if you account for second-order costs. In the good old days, a country disposing of even millions of members of certain ethnic groups wasn't a big deal to its neighbors, whereas now, in the era where everyone is obsessed with human rights, that's a swift route to sanctions and even potentially military intervention.

If Mao had killed all the Uighurs, it would have been a shrug. He certainly killed enough of his own supporters. If Xi did it, China would be a global pariah.

The fact that it isn't is a clear sign that, despite people conflating both cultural indoctrination and the "kill all of them" concept under the label "genocide", they still recognize a difference in severity between the two, even if they consider both sins.

Israel might be militarized, but it's also a democracy. Israel can't carry out an ethnic cleansing policy because no one will take the palestinians and it can't carry out a genocide openly because there's sufficient internal opposition, plus they're not well suited for autarchy due to size and location.

Now at the same time they also can't make the kind of concessions that would enable palestinian leaders to keep their people largely under control because the government which did that wouldn't get reelected. Of course this latter one is a moot point because palestinian leaders seem to invariably be either evil or incompetent.

'Democracies' can carry out a variety of policies that go directly against the voting public if they properly dance to the tune of mass media. To put it another way, whilst the election cycle is fast, the news cycle is even faster.

To that end, for any big project like ethnic cleansing of millions of people, you need very low time preference on a group level. A strong belief in something concrete and anti-fragile like a population group is very conducive to finishing big projects whilst riding the lows and highs of mass media and the election cycle.

The Palestinians also have the backing of the global hegemon, though they are backed by a different faction within it.

This is a bizarre situation and it is the reason why we hear about this war constantly from the press- in stark contrast to for instance Yemen or Azerbaijan.

Ultimately both sides are at the mercy of the US navy, and they behave accordingly.

The faction that backs Israel is the one that has complete control of the government, military and big business. The Palestinian side has thrown a few protests and made some TikToks but it's not like they can ship over crates of missiles or billions in foreign aid like we do for Israel.

No faction in America has complete control of anything, least of all the government.

The pro Palestinian side however, has far more than you give them credit for- they have major pull in education, media, and NGO spaces.

The pro-Palestinian faction, or rather the faction with some ethnic and/or religious loyalties to Palestinians, represents many key US allies in critical regions of the world for great power rivalry (eg in the Middle East and South East Asia).

Malaysia, for example, is one of the most pro-Palestinian countries in the world and isn’t even close to the Levant. You’d be surprised at how zealous they are in support of Palestine, how glued they are to the TV about how many Gazans are killed, how much they desperately LARP as Arabs and aspire to be great Muslims. And Malaysia is a key ally with a large Chinese minority and extensive Chinese political and economic influence. Indonesia is the largest Muslim country in the world. Pakistan is a key player in China’s new silk road stuff.

So for wider geopolitical reasons giving full license to Israel to do what they want with the Palestinians - even if the Gulf Arabs and Egyptians are fine with it - isn’t in the US’ interests.

One has to wonder why Israel has not been able to impose their will on Palestine even though they are the superior State and they have the backing of the global hegemon.

Because if they Final Solution the Palestinians, that would be sort of awkward, given their own history.

Israel hasn't been able to impose its will because the conflict is ultimately a fight over PR. Western democracies "hold themselves to a higher standard" (or "hamstring themselves", depending on your POV) by refusing to do population transfers that autocracies do routinely. Pakistan recently announced it was deporting 2 million Afghans and nobody cared. Azerbaijan just ethnically cleansed Nagorno Karabakh and there was barely a peep. But if Israel threatens to do something similar to the West Bank, the entire world freaks out. Palestinians have long recognized this double-standard and have maximally abused it by being very plugged-in to places like the NYT and other outlets.

Not that Jews as a whole are entirely blameless in this regard, as many of the Diaspora have been key players in the social justice crusades. The double game that organizations like the ADL play is mind-boggling.

One has to wonder why Israel has not been able to impose their will on Palestine even though they are the superior State and they have the backing of the global hegemon.

Because they wouldn't have the backing of the global hegemon in genociding the Palestinians, and the Palestinians are too stubborn and hostile to be controlled any other way.

impose their will in what way? they can't just kill or expel millions of people. even the hardcore zionist bloc would be hard pressed to justify such a thing, it would instantly destroy their relationship with america.

the reason hamas launched this attack is because they were losing - israel was able to 'mow the grass' in gaza every few years, entrench its settlements in the west bank, and normalize relations with their arab neighbors. the status quo pre october 7 was totally fine for the israelis.

To metaphorically do in America what Mussolini did in the Agro Pontino would be a big task. To do it in an unfair manner is trivial, and there exists sufficient precedent, but doing it justly requires institutions uncaptured by partisans.

Trump was indicted in what many considered, in the absence of prosecutions of Bidens or Clintons, selective enforcement. That a thorough investigation of Donald would probably discover he violated some statute would be admitted by many of his supporters, but they would add that so did other politicians of his rank who didn't face charges.

But now news has come in that Biden's son has been indicted for a second time (the first was for gun possession), this time for evading 1.4M USD (14.6M SEK) of federal taxes. Despite being filed today in the stronghold of the Democrats, the indictment pulls no punches in describing Hunter's lifestyle, claiming that he spent money on "drugs, escorts and girlfriends, luxury hotels and rental properties, exotic cars, clothing, and other items of a personal nature, in short, everything but his taxes".

With this step, family members of important politicians of both American political parties are facing justice. Perhaps Hunter knows more than he lets on, and it will be both candidates for US president in 2020 who are suspects. In such a case the US could be said to be a tragic country if criminality is so common in the ruling classes, a democratic one if no one is above the law, or an oligarchic one if there exist factions divorced from democratic oversight, able and willing to besmirch beyond repair the reputation of any politician.

But if the middle option is true, and these are the few unrighteous in high places being made low due to their unrighteousness (and not for coming into conflict with the deep state), the metaphorical Wehrmacht is always a danger: a former ally of amelioration and reclamation now undoing the hard work for short-term gains. This role could be played by judges finding guilty politicians innocent because they share the accused's political opinions, temporarily strengthening their party but in the long term promoting rot in the republic.

I don't see how it's really a stain on the nation that a relative of the President should be caught engaging in a life of sleaze and criminality. You can't pick your relations, after all. If Biden is personally implicated, that's a different story. But America can, and has, bounced back from worse. The question you always have to ask is - is it worse than Nixon?

In such a case the US could said to be a tragic country if criminality is so common in the ruling classes, a democratic one if no one is above the law, or an oligarchic one if there exist factions divorced from democratic oversight able and willing to besmirch beyond repair the reputation of any politician.

While the American regime has managed to convince people that rule of law and democracy go together, they don't. In fact rule of law and democracy are in constant conflict. Enforcing laws is an explicitly unpopular position in American politics. Allowing the mob to rule precludes allowing rules to rule.

I've long believed that if Hunter had any decency at all, he would go to his dad and demand to go to prison. Even if, by some absurdity, he was factually innocent. The best thing he can do for his father, his family, his country, is to go to prison. Take the rap on tax stuff, get two years, read the Bible cover to cover and lift weights. Most Americans don't actually find tax fraud morally disqualifying anyway.

Then Joe can get up and give a speech about how in America no one is above the law, his son made mistakes and he's going to pay for them.

Americans, just like all other nations, love to glorify the idea of criminality and mob bosses and so on as long as they themselves are not affected by it. That's why stuff like The Godfather is so popular. Hunter going to prison would satisfy some people, but it would also make Joe look like weak patriarch, which could affect people's monkey-level views of him, their basic assessment of how tough he is as an alpha male. And the monkey-level views are very important, for example they're a big part of why Trump is popular.

No real need to do that at this point. Charges filed now? Easy peasy to push the timeline for the trial (or at the very least, sentencing) until after the next election. It's always hard to make an argument that they're being unfair by just taking too long (people do try, but it never really works). This will certainly be Joe's last election of his life, so as long as they can get themselves to November 6, 2024, he never has to see the inside of a jail cell, and the topic will completely disappear from any discussions in future elections. This is the correct game theoretic play, regardless of what any omniscient planner might think is otherwise good/bad for the country.

Assuming the Bidens would live it down, how do you do this without providing subpoena power to people who would discover links between Hunter's fraud and the Big Guy (for you)?

I'm not sure I'm seeing where anyone would gain additional subpoena power as a result of a guilty plea which closes all active cases and sends Hunter to club fed for a two year stretch? If anything significant discovery powers would be headed off, both legally and politically.

The link appears to be that Joe pressured Hunter into giving him some of his earnings, likely to prevent Hunter from immediately spending it all on hookers and blow. I realize that might seem overly sympathetic; Joe has likely been about as corrupt as the average long-standing machine Democrat, but he hasn’t displayed a particular desire to get rich outside of office relative to more ambitious politicians, and Hunter’s income is nothing compared to what Joe could have made after he left office in 2016 if he really wanted to pursue significant wealth.

It doesn’t make sense for Joe to delegate his corruption to Hunter; even as ex-VP he could make more in a single speech than Hunter was making in a year.

Will it come out that 10% of Hunter's earnings were going into a nice managed fund that paid out $X/mo to Hunter?

Well, it certainly seems more likely than Joe using the single most incompetent person in his life to profit from his position in an unbelievably inefficient manner.

Sorry, I realized as I was framing that last post that it had an annoying tone of voice, especially since I picked an argument with you in a different thread, so I ended up fixing the end but not the beginning.

This is the same Joe Biden who washed out in the 1988 Democratic presidential primary because he plagiarized his campaign speeches. I suppose there are lots of possible interpretations of what the Bidens have been up to, some worse than others, but I think it's pretty plausible that Joe did not behave all that smartly. Especially when you consider that, throughout his Vice-Presidency, it was assumed that Hillary would be the next president, and nothing Biden could reasonably get up to would possibly matter.

I suspect ‘grabbing as much of his money as he’ll let you get away with and sticking it in an account you control access to’ is a standard play for upper-class families dealing with a failson, and as noted elsewhere Joe hasn’t displayed Clinton or even Obama levels of grifting to get rich; in general he seems to be basically content once he’s ensured a nice lifestyle for himself and his immediate family. It just seems a lot more plausible that this was a roundabout way to set up a custodial account than that it was legit collecting otkat (kickbacks).

He set up Hunter to collect the otkat, then he took some of the resulting money. That's "legit collecting otkat", even if he was doing it for Hunter's benefit.

Yeah, I think that too. No matter how indulgent Joe was, even he can't ignore that if Hunter gets money, he's going to blow through it all and be left penniless and going round with the begging bowl once more. So taking a chunk of his earnings where he can't get his hands on it for (quite literally) hookers and blow is responsible.

And yeah, Joe is nowhere near as rich as the Clintons, who (yes, even Hillary the Competent) were absolutely damn shameless about shilling for their 'foundation' and raising money off political connections and graft.

A less generous read is that the charges were brought so that he could take the fifth in his upcoming congressional appearance.

His lawyers have a fairly compelling case that he was given legal immunity from these tax charges by the diversion agreement he signed relating to the gun charge. However, that immunity would prevent him from taking the fifth in front of Congress.

These charges in a separate district create a legal justification for taking the fifth, then later his lawyers can have the case dismissed.

Despite being filed today in a Democratic stronghold, the indictment pulls no punches in describing Hunter's lifestyle, claiming that he spent money on "drugs, escorts and girlfriends, luxury hotels and rental properties, exotic cars, clothing, and other items of a personal nature, in short, everything but his taxes".

Excellent. Now let's see it pull no punches with the charges, and subsequently pull no punches with the sentencing.

I really do feel bad for Joe regarding Hunter. He doesn’t appear to have been a particularly awful father. Reminds me of a conversation we were having with @raggedy_anthem last week: sometimes you just get an asshole failson, but then what do you do? You can’t not love them, can’t not provide for them. Good parents hide bad children from the law, sacrifice their careers, even their lives, for them. I can’t be too mad at the participants in the college admissions scandal, who did after all only want the same advantage for their kids that the super rich get by being on the college’s board of trustees.

Regarding Trump, it’s both trivially true that the Democrats are out to get him and obvious that he doesn’t help himself and has engaged in shady conduct throughout his career without much care for the consequences. So what do you do? Moldbug suggests that rightists essentially ignore some corruption and venality in (reactionary) elites because, in the grand scheme of things, a king who skims off the top but who does what you want is always better than an incorruptible bureaucrat who doesn’t, let alone a corrupt one who doesn’t.

Others would say it matters. There are two main reasons why the Swiss have largely abandoned their vaunted neutrality over the last 25 years. The first is, of course, that the SEC, IRS and wider US government effectively threatened to destroy the Swiss banking sector if they didn’t kowtow to them. But the second, perhaps as significant, is that the Swiss started paying the price for their participation in various shadiness abroad. Huge corruption in politics (beyond what was previously considered standard), in public sector contracting, in banking, the growing influence of the Moroccan, Italian and Albanian mafias in Swiss politics and public life, things that ordinary Swiss had to pay for. The idea that one could deal with and bribe anyone abroad without concern for morality while maintaining a high-trust, low-corruption society at home was suddenly no longer as obvious as it had once been.

Huge corruption in politics (beyond what was previously considered standard), in public sector contracting, in banking, the growing influence of the Moroccan, Italian and Albanian mafias in Swiss politics and public life, things that ordinary Swiss had to pay for.

How does any of that supposedly stem from the doctrine of neutrality in foreign policy?

It doesn’t stem from neutrality in foreign policy but from one of the consequences of neutrality, which was becoming a place for the world’s dirty money, whose owners had for many years (until recently) the (correct) belief that unlike Britain, Germany, the US etc it wouldn’t be confiscated by the Swiss. The problem is that this inevitably aligned Swiss executives and bankers with eg. corrupt Angolan officials, and once one sees how easy and efficient it is to do these things abroad, one eventually starts engaging in them at home.

I think Joe was an awful father to his sons. He used them as props after their mother died. He pushed his known drug addled son into “the family business.” What kind of dad pushes his clearly incompetent son into business with Eastern European oligarchs to make dad a buck or two?

Maybe the kind of dad who correctly believes that his own political status as former VP of the world's most powerful country will shield his son from the nasty consequences that sometimes befall people who cross Eastern European oligarchs.

The way that the political grift game works, it was simply unnecessary to push Hunter onto the shady Eastern European oligarchs. As I say, Biden could make more in a single speech to an above-board ‘reputable’ US think tank, or at [major bank’s] annual ‘global leaders summit’ (the kind they use to entice client CEOs into attending so they can pitch business), than Hunter would make in a year, and that’s likely true even if Hunter weren’t a fuckup and followed the rules. Joe doesn’t want to be poor (and there are stories in Delaware of some small-scale grift, I think someone here collated them), but he’s never expressed Clinton or Obama or Pelosi-tier financial ambitions.

If anything, it seems more likely looking at Hunter’s career that Joe consistently intervened to try to find employment for his son so he could try to make some of ‘his own’ money rather than doing nothing. Again, that’s hardly ‘not corrupt’, but I don’t think it suggests that Biden ordered him to do it.

I tend to agree with this take. I don't think Joe was pushing Hunter on anyone to make bucks for Joe, I think Hunter was not averse to using his perceived connections when shady oligarchs offered him plum jobs with big salaries to do nothing (but hook us up with your dad the Vice President of the USA, okay?).

I think Joe is guilty of protecting Hunter past the point where he should have been left to face the consequences of his fuck-up lifestyle, but every family will act according to their own notion of unconditional or tough love.

I don't think Joe was pushing Hunter on anyone to make bucks for Joe

"10% for the Big Guy" isn't nothing. If I got 10% of my son's wages, I would rather he not work at a pizza place and might encourage him to aim higher.

Lots of parents would push their kids to aim higher than working at a pizza place, if they had an education and family network contacts. Look, the Bidens are their own family and unless there's evidence that Joe stole government money and gave it to Hunter, or vice versa, what goes on is none of our business.

Taking 10% of money that you know, I know, and the dogs in the street know Hunter is going to spend like a drunken sailor on leave is only prudent, so the guy won't be left absolutely penniless. What has to be sorted out is whether this is a bribe, or just Hunter being a little bitch about Dad taking his allowance away, or what, but that's what all the court cases are about.

If Hunter is obtaining his money illegally, Joe knows this, and Joe is taking 10%, that's enough to materially implicate Joe right there. It doesn't matter that Joe is taking the money for Hunter's benefit.

Yes obviously, but it affects the morality of it substantially if (a) Joe isn’t doing it for his personal enrichment, which is obvious and (b) his involvement relates solely to his attempt at preventing Hunter from immediately wasting all his income. In the same way, the morality of Trump keeping those classified documents after he left office depends quite substantially on whether he forgot them or whether he took them with the intention of selling them to a foreign power, for example.

There isn't a logical basis to cast Hunter’s Eastern European dealings as an attempt by Joe to enrich himself, because ex-VPs can make more money in a single speech to Blackstone’s annual blah blah summit than Hunter ever made ‘being corrupt’. The numbers don’t check out; Biden could easily make $20m+ in his first year out of office.


I really do feel bad for Joe regarding Hunter. He doesn’t appear to have been a particularly awful father.

The money paid to Hunter by foreign oligarchs ended up in Joe's bank accounts. He used a pseudonym in emails to avoid subpoena requests. Joe is basically in on the whole thing, and he's the one who failed Hunter.

I sure hope we can get some forensic accountants on the case to track the money flows.

I really do feel bad for Joe regarding Hunter. He doesn’t appear to have been a particularly awful father.

Speculation: As a toddler, Hunter might have received lifelong mild brain damage from the accident that killed his mother and sister. Wikipedia notes that, “Beau suffered multiple broken bones while Hunter sustained a fractured skull and severe traumatic brain injuries.”

Good point, yeah it’s easily possible that this is the cause of the impulsiveness and low inhibition.

So basically Hunter did something that any normal American would’ve gone to prison for a long ass time, the DOJ tried its best to prevent charges from ever being filed, they even seemed to conspire with Hunter in Delaware to slap him on the wrist, but now because of public pressure they are forced to indict him in California.

But of course that now will prevent Hunter from testifying before Congress, and will anyone be surprised if, once he's no longer useful, the DOJ enters a plea deal (more painful than Delaware, but only marginally so)?

Trying to say “it works for both sides” ignores the massive purposeful missteps re Hunter Biden. His lawyers were willing to extend the statutes of limitations, but the DOJ let them expire!

I don't think most tax cheats who get caught "go to prison for a long ass time". The IRS only pursues criminal charges in a small number of high-profile cases (see Motte discussion and linked LessWrong post) and normally just drives you into bankruptcy with civil penalties. (There are about 1500 criminal tax prosecutions a year).

There are unfortunately no published statistics about how likely you are to be criminally prosecuted when there is $1.4m at stake (presumably more likely than for smaller amounts). It looks like the guideline sentence if you do evade this much tax and plead guilty is about 2 years.
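The ~2-year figure can be sanity-checked with a back-of-the-envelope sketch of the guideline math. The tax-loss table and sentencing ranges below are simplified approximations of the federal sentencing guidelines (USSG §2T4.1 and the Chapter 5 sentencing table), not authoritative values; treat the exact numbers as illustrative.

```python
# Simplified approximation of the federal sentencing guideline math for
# tax evasion. Table values are illustrative, not authoritative.

TAX_LOSS_LEVELS = [  # (tax loss of more than $X, offense level)
    (3_500_000, 24),
    (1_500_000, 22),
    (550_000, 20),
    (250_000, 18),
    (100_000, 16),
    (0, 14),
]

# Approximate guideline ranges in months, criminal history category I.
RANGES = {15: (18, 24), 16: (21, 27), 17: (24, 30), 18: (27, 33)}

def guideline_months(tax_loss, acceptance_reduction=3):
    """Offense level from the tax-loss table, minus the usual reduction
    for acceptance of responsibility (a guilty plea), mapped to a range."""
    level = next(lvl for floor, lvl in TAX_LOSS_LEVELS if tax_loss > floor)
    return RANGES[level - acceptance_reduction]

low, high = guideline_months(1_400_000)  # the ~$1.4m at stake here
```

With $1.4m of tax loss and a guilty plea, this lookup lands on a 24-30 month range, consistent with the "about 2 years" guess above.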

You are citing a laughable post encouraging straight-up fraud. Often the IRS doesn’t jail people. They do when the conduct is frequent and knowing. The person in the post you cite will probably go to jail (if that person is even real). People like Hunter (whose CPA is basically saying “whoa dude”) go to jail for a long time.

Google Gemini just launched

In other words, GPT-4 has just been beaten, about time I'd say. I'm getting used to the pace of progress in AI being blistering, and it was threatening to slow down to just mild-rash levels.

However, both my hands-on time with it and the official benchmarks Google released suggest it's a minor, incremental improvement, one that doesn't compare to the drastic improvement that GPT-4 represented over 3 or 3.5. [For clarity, I, like the rest of you, can only use Gemini Pro, the second-best model.]

Which is fine, because for a while now, people have been lambasting Google/Deepmind for being too incompetent to ship, or at least ship a competitive product, given how shitty Bard was when it launched, even after being upgraded once or twice.

However, Bard, now running the Gemini Pro model, seems to be roughly as good as paid GPT-4 on ChatGPT, or the free GPT-4 in Bing Copilot (previously Bing Chat). I have yet to spot any new use case it enables, in the sense that GPT-4 can reliably do tasks that simply had 3.5 flailing about in confusion or, worse, hallucinating incorrect answers, such as more involved questions in coding, medicine, and everything else really.

However, Google hasn't yet publicly released the best Gemini model, which is currently undergoing an analogous process that GPT-4 or Claude 2 went through, namely more RLHF, red-teaming and safety testing. Pro is the next step down, but it seems pretty good to me, in the sense I would happily use it as an alternative to GPT-4, even if I have no strong opinion on which is better.

There's also a Nano model, stripped down to run on mobile devices, which is now being used on the Pixel 8 Pro for a few tasks, potentially silencing the people who claimed its AI-specific computing components were a marketing gimmick, especially since it previously seemed to offload most AI tasks to the cloud.

Miscellaneous observations:

  1. Bard is fast as fuck compared to GPT-4, in terms of generation speed. It always was, but previously in the "I'm doing 2000 calculations a second in my head, and they're all wrong" sense. (GPT-4, at least before Turbo released, was always pretty slow compared to the competition. Far from unusable, but at the very least I read faster than it can write.)
  2. A quick search suggests all the models have a 32k token context window, or about an operating memory of the last 25k words it read and wrote. Good, if not remotely groundbreaking.
  3. This heavily suggests OAI will ship GPT-5 soon, instead of being content to milk 4 when it ran rings around the competition.
  4. It's multimodal, but then again so was GPT-4 from the start, the capability was just cordoned off for a bit.

Since I don't think the next generation (or two) of models after GPT-4 is an existential threat, I'm happy to see them finally arriving. There really isn't much more needed before even the best of us are entirely obsolete, at least for cognitive labor; something as archaic as GPT-4 was already scoring at the 95th percentile on the USMLE, so I'm preparing to explore my competitive advantage in panhandling. *

*This is a joke. For now.

Footnotes to the footnotes:

People on Twitter are correctly pointing out that GPT-4 underwent further post-launch improvements in benchmark scores, some of them pushing it past Gemini's published scores.

Also, just to be clear, the version of Gemini you can use now is not the best one; that one may or may not be a modest improvement over GPT-4. Some claim the current version is more comparable to 3.5, but I haven't used that in ages, not when Bing makes 4 free.*

*Footnote^3 It's probably closer to 3.5. I'm sticking with Bing.

Toe-notes-

So far, it seems that Gemini is "competitive" with GPT-4. It's better at multimodal tasks, but for most people that's a minor fraction of their typical use case. For text, it's somewhere between close and roughly on par.

You can almost feel the desperation of the Deepmind researchers to find any way to massage things so that they come out ahead of GPT-4, from the misleading graphs (an egregious example is to be found in a reply) to applying different standards in their inter-model comparisons, such as 5-shot prompting for GPT-4 versus chain-of-thought 32-shot prompting for Gemini Ultra. At least the white paper doesn't outright lie, just misleads and prevaricates.

The MMLU is also flawed, with 2-3 percent of the questions simply broken, so a 1 or 2% improvement in score is a bit questionable, let alone specifying performance to multiple decimal places.

We don't see any comparisons to GPT-4 Turbo, but I don't hold that against them too much; it came out just a few weeks back, perhaps not in time for them to finish their paper.

If you use the multimodal capabilities of Bard right now, it uses an older version that is pretty shit compared to GPT-4V or Bing.

Overall, the main benefit of Gemini's existence is that it shows Google isn't content to slumber indefinitely and can be competitive, better late than never. I expect GPT-5 to spank Gemini Ultra, and to the extent the latter accelerates the release of the former, I'm for it.

Predictions:

GPT-5 before end of 2024 - 90%

GPT-5 is superior to Gemini Ultra for most use cases, at the first point in time both coexist- 80%

A third competitor on par with either exists before 2025- 60%

An OSS equivalent of GPT-4 comes out before 2025- 70%

I'm curious how long it will take for someone to extract the Nano weights from their device and release them, and how it would compare to LLaMA 2.

I'd say that Gemini Pro seems a touch more capable than 3.5, but still falling short of 4.

Looking forward to Ultra, though the best that can be expected is outperforming 4 by a bit. More competition is good. My fear is that we've hit a plateau in accuracy/capability, and most innovations on existing architectures will be around improving efficiency and inference per dollar. Which isn't horrible, as there's a lot of things to be done even with GPT-4 level capabilities, but I want more.

I honestly don't particularly care about on-device models, at least on mobile; there are few applications so latency- or privacy-sensitive that I'm not OK with calling the cloud.

I'd say that Gemini Pro seems a touch more capable than 3.5, but still falling short of 4.

After a bit of tinkering with it, I share that assessment. Since Bing with GPT-4 is free, I'm not shifting. I wonder if the version of Bard with Ultra will be free, if not, Microsoft will retain the edge.

Speaking of which, I wonder why M$ hasn't adopted GPT-4 Turbo or the model with the 2023 knowledge cutoff yet. Licensing issues, despite them owning almost half of OAI? Or do they think having web search makes it moot?

I'd expect them to use Turbo just for the enormous reduction in cost of deployment and servicing.

My fear is that we've hit a plateau in accuracy/capability, and most innovations on existing architectures will be around improving efficiency and inference per dollar. Which isn't horrible, as there's a lot of things to be done even with GPT-4 level capabilities, but I want more.

I heard GPT-4 was only trained on 10K A100s. OpenAI/Microsoft has bought 150K A100/H100s just this year, H200s are now coming out. There's plenty of room for 'compute go brr' in addition to whatever mysterious software-side improvements Altman's been humblebragging about.


my impression is that it's basically the same as GPT-4 in bing chat. Very impressive as technology, but not all that different from internet searches for most use cases. It can't really generate new knowledge, it just aggregates the most common responses on the net. And of course it has those AI limiters that make it weirdly neutered, like a movie that's been cut down to show on TV.

I'll probably get hate for being a buzzkill with this, but, what's the culture war angle for AI posts like this? I get that the broad rationalist community is interested in AI, and certainly there are times when AI and culture war intersect. But I don't see how this is in principle different than posting a top-level CW roundup comment about a new operating system, phone model, GPU line, medical technology innovation, or crypto scandal.

The implications on the (potential) impending doom for humanity? Automation induced unemployment at the very least?

At any rate, being strictly about CW is far from an inflexible standard in this thread.

I sorta share your sensibility here. I feel like there's a disproportionate amount of AI news in here for how little impact it has so far had. But many regular posters insist that it's absolutely groundbreaking and will have serious CW implications, and I'm willing to trust them to a large extent.

But many regular posters insist that it's absolutely groundbreaking and will have serious CW implications

Except those supposed implications weren't mentioned in the OP.

I guess that's what it comes down to, though. When you think AI is going to be some godlike superdisruptor of everything, it's CW all the way down. I see it as just another technology and am pretty sick of how half of the output from places like ACX are devoted to this technology. But then I also see it in a non-CW context in the CW thread and it's defended on the highly disputable grounds that anything super important is CW or something.

I'll be honest, it rings similarly to the progressive trope of "I'm bringing up this political topic in this nonpolitical forum because everything is political, don't you people get it?!" No, trans issues are politics, no matter how important you think they are. Justice for Palestine isn't reproductive justice, no matter how important you think the two are. And new versions of LLMs aren't CW no matter how important you think they are. Everything isn't everything, and words have meaning.

Except those supposed implications weren't mentioned in the OP.

You're new around these parts aren't you? Which isn't a crime at all, we could certainly use new arrivals, but just about anyone who has been here for more than a few weeks knows my clear stance on the CW aspects of the topic, such that I don't feel an explicit need to rehash them.

Besides, like I said, any long discussion of (ideally) high quality is de facto acceptable in this thread, or else I wouldn't have had glorified travelogues make AAQCs. Not that this one doesn't have CW implications, the part about GPT-4's stellar performance in the USMLE making me obsolete as a doctor is only half in jest. Or maybe a quarter. It'll get you too.

You're new around these parts aren't you? Which isn't a crime at all, we could certainly use new arrivals, but just about anyone who has been here for more than a few weeks knows my clear stance on the CW aspects of the topic, such that I don't feel an explicit need to rehash them.

I've been around for years and have maybe ~1000 comments between here and the old subreddit. I definitely wouldn't have felt comfortable challenging a top-level post's suitability if I was new.

I know your stance on AI and why you think it's always CW (believe me, I have a very cozy relationship with the minus button to the left of your name, despite the fact that I think your non-AI contributions are very high quality), but I don't think everyone has to acquiesce to any given person's conception of what is suitable.

You're certainly entitled to your opinion, I'm sure the mods will handle it if they think I'm misusing the place.

AI is, in itself, a culture war issue here on the Motte. A significant portion of people - if not here, then at least in the wider community of which the Motte is an offshoot - believe that AI development must be stopped at almost any cost, while another significant portion believe it should be accelerated with minimal speedbumps. It's not the same CW issue as the more well-known one of AI companies designing their AI to be biased in favor of certain sides of the wider worldwide/Western/American CW, and perhaps one can argue that it's not so much a culture war as an ideological or empirical war, but I think it's close enough.

A culture war angle is in some of the comments here: LLMs are being developed under the conditions of matching the constraints from one side of the culture war.

All of those would probably be acceptable as well. The business world is part of culture. AI is definitely going to reshape culture. AI is being implemented with baked-in bias for culture-war reasons. And finally, AI is extremely relevant to actual war.

I only use LLMs for coding (and only Phind, since it doesn't require me to jump through any hoops to use it and cites its sources) and I'm completely surprised by both how good they are and how bad they are at the same time.

  • "Can you do this and that using Spark?" - generates code that does this and that in PySpark cleverly avoiding making an extra dataframe

  • "Can you rewrite this in Scala Spark?" - generates code that does only that and tells me I have to paste my own code that does this, even though it's the same Spark call

  • "Can I use A to implement B in C?" - "yes, you can do this, here's how you configure A to do B, here's how you configure C to do B"

  • "But how exactly do I use A from C?" - "oh, sorry, I meant you can't do this"

Makes me wonder how soon we'll get an LLM that doesn't code like an Accenture presales engineer.

Once somebody can figure out a rigid procedure that, when followed, causes Accenture presales engineers to write robust working code that actually meets the criteria, that procedure can be ported to work with LLMs. The procedure in question can be quite expensive when followed with real people; that's fine for LLMs, because LLMs are cheap.

I suspect there does exist some approximate solution for the above, but also I expect it'll end up looking like some unholy combination of test-driven development, acceptance testing, mutation testing, and checking that the tests actually test for meeting the business logic requirements (and that last one is probably harder than all the other ones combined). And it will take trying and iterating on thousands of different approaches to find one that works, and the working approach will likely not work in all contexts.

I expect Bing with GPT-4 is better than Phind. It's also free.

When I was learning Python, it was a godsend, not that I can comment on how useful it can be for more complicated projects.

BTW, AlphaCode 2 just launched alongside Gemini, and it represents a massive leap in capabilities, far more impressive in that particular domain.

It's also free.

"Sorry, this service is not available in your region"

And it doesn't like my VPN either.

Well, I guess being in India is good for something.

The public model is very unimpressive. The scores for the ultra model seem fine. In the end it’s irrelevant, Google can’t replace search with LLMs without compromising their central product and core business (for both technical reasons and because of rules on native advertising). They can try and will try to sell this to enterprise customers, but others have a head start and I think margin in the LLM game will be strongly limited by the fact that the top 3-4 models from Google/Meta/MS/Anthropic will all be interchangeable for most use.

I would say it's far from irrelevant; as much as doing that would be a net negative for Google, they don't have a choice, given the alternative of things being even worse if OAI/Microsoft make them redundant.

They can weep and wail, but they're getting on the bandwagon too, the Porsche is running out of fuel.

They can jump on the wagon, but they’re a walking corpse unless they can figure out how to serve ads in LLM results without breaking the rules or being useless to advertisers. And even if they figure that out, the underlying nature of LLMs as question-answering machines is a huge blow to their non-search ads business.

I do not disagree it's a big blow. But to ignore it is a bigger one.

but they’re a walking corpse unless they can figure out how to serve ads in LLM results without breaking the rules or being useless to advertisers

Bing does that, I haven't heard anyone file a lawsuit against them.

Bing’s LLM ads aren’t worth much, the challenge is that the existence of the model itself invalidates much of the earlier advertising-driven directory approach.

The valuable thing would be the model recommending you a Samsung TV because they paid Google to mention them whenever someone asks which TV to buy. That’s illegal, in the US and elsewhere.

That’s illegal, in the US and elsewhere.

I’m gonna need a citation on this one … if this were true then it seems by definition the little ads atop my search results advertising — you guessed it, TVs — would also be illegal. Yet there they are.

They have to be labelled as ads. The model can’t just ‘happen’ to recommend you a Samsung TV, it has to give its regular answer and then, maybe if it mentions a Samsung TV (but there’s no guarantee it will, and whether it does can’t be based on a commercial relationship with Google) they can serve a banner ad next to the answer for it. But this is less lucrative because it’s less predictable, the advertiser has to hope the model organically recommends their products OR accept that it won’t and serve their ads next to relevant prompts anyway, which is much less useful than the current dynamic where serving ads under a ‘best TV $500’ search query sells them the TV before they consider whether there are better options.

It's not illegal if it's clearly identified as an ad, or at least if it's obvious enough that any reasonable person would know it's an ad. Here's a primer from the FTC if you have any further questions:

https://www.ftc.gov/business-guidance/resources/native-advertising-guide-businesses

This result shouldn't be underestimated just because Gemini Ultra is merely on par/slightly better in text-based reasoning: it thoroughly beats GPT-4V on MMMU, the multimodal benchmark, including harder subscales; it also plays well with audio. People are for the most part functionally illiterate, so this is huge; and of course they will capitalize on Android and other ecosystem-wide advantages the Alphabet empire has. Multimodal language-like models will obviously be table stakes in 2024. (A Bytedance guy even hints that they'll open-source a model on Gemini's level.)

Interesting that one of the people who worked on aligning early Gemini said they had trouble aligning it – it burned through RLHF reward models, finding exploits and collapsing into gibberish (imagine using actual RLHF in 2023!). Maybe this delayed the release, on top of the garden-variety safetyism it made more complicated.

To be honest I was more excited about the other day's release of Mamba by Albert Gu and the legendary Tri Dao. There are many architectures that I expect will break through the Pareto frontier of a mature Transformer, but this one is the first that feels like an actual Vaswani et al. 2017 level advance. Unlimited context, here we come.

Hmm, it seems like I confused the MMMU and MMLU in my original post, despite knowing the difference. I'll edit accordingly.

The MMMU performance seems far more compelling than the MMLU numbers, especially given Dean's methodology of zero-shotting both models.

As someone who is functionally literate, I certainly care more about text prowess, as I presume would most of the people here. But in terms of mundane value for the rest of the world, that will be handy.

Interesting that one of the people who worked on aligning early Gemini said they had trouble aligning it – it burned through RLHF reward models, finding exploits and collapsing into gibberish (imagine using actual RLHF in 2023!). Maybe this delayed the release, on top of the garden-variety safetyism it made more complicated.

Interesting/mildly concerning. I haven't heard any claims of such difficulty in early GPT-4 or Claude, but OAI is probably the best at "alignment" in general, while Anthropic gimps their models to hell.

To be honest I was more excited about the other day's release of Mamba by Albert Gu and the legendary Tri Dao. There are many architectures that I expect will break through the Pareto frontier of a mature Transformer, but this one is the first that feels like an actual Vaswani et al. 2017 level advance. Unlimited context, here we come.

I am the wrong person to comment on such architectural concerns, but if people I respect, such as you and some others, do stress its importance, I'm all for it.

Certainly it seems to me that context windows (along with hallucinations) are the biggest impediments in making LLMs useful for more tasks.

I wonder what the deeper implications for human cognition are. I don't think there are people who can keep 25k words in their working memory, that seems to be much smaller, but we certainly don't usually forget the start of a novella by the time we reach the end. Is there a lot of caching and summarization going on?

At any rate, I hope it beats the annoying reality that 128k and 200k context window models begin to severely underperform, especially for data presented in the middle.

How does it stack up to RWKV?

I wonder what the deeper implications for human cognition are. I don't think there are people who can keep 25k words in their working memory, that seems to be much smaller, but we certainly don't usually forget the start of a novella by the time we reach the end. Is there a lot of caching and summarization going on?

Yes, there is in effect a lot of "caching and summarization" going on -- although that's probably our 2023 ooga-booga, not-quite-wrong way of talking about something else. LLMs really only have their context window and its feedback as a short-term memory. Which is fine for text translation, but is asinine if you want anything like a thinking engine. Goldfish with a notebook.

We and LLMs can both compress long stories into gists, but the LLMs just forget about it and repeat the work on every iteration. We remember the gists and use them as context on every iteration.
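A toy sketch of that difference, purely illustrative (the `build_context` helper and its naive first-sentence "summarizer" are invented for the example): a gist cache keeps compressed versions of old turns around, so they don't have to be re-derived from the full transcript on every iteration, while only the recent turns stay verbatim, like a short context window.

```python
# Toy illustration of "remember the gist, re-read only what's recent".
# Older turns are compressed to a crude one-line gist (here, just their
# first sentence, standing in for a real summarizer); recent turns stay
# verbatim, like a short context window.
def build_context(turns, keep_verbatim=2):
    old, recent = turns[:-keep_verbatim], turns[-keep_verbatim:]
    gists = [t.split(". ")[0].rstrip(".") + "." for t in old]
    return {"gist": " ".join(gists), "recent": recent}
```

An LLM without such a cache has to redo the compression from the raw text every single forward pass; the cache makes it a one-time cost per turn.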

Interesting/mildly concerning.

I think it's a nothingburger because a) the future is cDPO/IPO and not orthodox RLHF anyway (or even more obscure things), and the failure modes there will probably be different, and b) such «misalignment» results in a behaviorally incoherent model rather than an evil schemer. Reward models get hacked by being dragged off-policy, with weird inputs that are not conducive to strategic world understanding; it's an exploitation of the semiotic nature of language models. But I believe some hay will be made out of it.

Human «context size» is not at all limited to working memory (although our working memory is also large, it's not 5-9 tokens/bits but more like 5-9 «pointers» that can be corresponded to arbitrarily complex cognitive circuits). What we use for context is probably most analogous to constructing on the fly and loading a LoRA in LLMs (or some in-context vector) plus adding embeddings and snippets to some RAG pipeline. It's a mess, but it's orthogonal to the shift from Transformers to SSMs that I expect now. Shane Legg talks of this too:

They don't do things like episodic memory. Humans have what we call episodic memory. We have a working memory, which are things that have happened quite recently, and then we have a cortical memory, things that are sort of being in our cortex, but there's also a system in between, which is episodic memory, which is the hippocampus. It is about learning specific things very, very rapidly. So if you remember some of the things I say to you tomorrow, that'll be your episodic memory hippocampus.
Our models don't really have that kind of thing and we don't really test for that kind of thing. We just sort of try to make the context windows, which is more like working memory, longer and longer to sort of compensate for this.

As for RWKV, I think the latest version is ≤RetNet (though it has good slopes, probably the best in their graph…). Gu & Dao are very explicit in pointing out that a) Mamba is the first to even match a Llama-like Transformer without any gimmicks, at the tested scale at least, and b) it does not appreciably benefit from adding Attention layers.

Mamba is the first attention-free model to match the performance of a very strong Transformer recipe (Transformer++) that has now become standard, particularly as the sequence length grows. We note that full results on context length 8k are missing for the RWKV and RetNet baselines, prior strong recurrent models that can also be interpreted as SSMs, due to a lack of efficient implementation leading to out-of-memory or unrealistic computation requirements.

The Mamba-MHA architecture is only slightly better, which is somewhat surprising in light of the fact that many recent works have found that combining (LTI) SSMs with Attention can lead to substantial improvements (Dao, Fu, Saab, et al. 2023; Fathi et al. 2023; Fathullah et al. 2023; Saon, Gupta, and Cui 2023; Zuo et al. 2022).

In the first version of the paper, submitted for peer review, they went even harder:

LongNet (Ding et al., 2023), which claimed to scale to 1B length but only evaluated on length < 100K for actual tasks. Hyena and HyenaDNA (Poli et al., 2023; Nguyen et al., 2023), which claimed to leverage up to 1M context, but did not control for computation time. In fact, its claims about efficiency and performance would be largely matched by any of the LTI S4 variants above.

That said, this is all assuming the paper is trustworthy and they compare models trained on identical data. Tri obviously can procure as much compute as needed but I am not sure this happened.

but there's also a system in between, which is episodic memory, which is the hippocampus. It is about learning specific things very, very rapidly. So if you remember some of the things I say to you tomorrow, that'll be your episodic memory hippocampus.

It seems to me that LLMs can't have episodic memory, at least not until they perform online learning, which nobody is doing as far as I'm aware.

Does anyone perform cultural and free speech benchmarks on AIs? That is all I'm really interested in.

I haven't heard of formal benchmarks, unless you want something along the lines of ARC evals and then choosing the model that performs the worst on metrics of suppressing the same.

But in terms of what I'm aware of? The consensus is a Llama fork that's been tuned to remove the safety handles. Perks of OSS I guess. There's an enormous, tangled bush of forks-of-forks with truly inane names like UncensoredWizardVicunaHyperBoost-7B, which is at least half a real thing.

I certainly get plenty of use out of even the PC models, not that I'd be one to complain if they were less so. Maybe Grok will be good for something, I'm not paying $8 a month for Elon's brand of humor, I like the rockets and the cars.

I have to wonder how much the PC stuff just makes the AI worse – not from being asked to deny reality or anything. Unless my understanding is wrong, the usual approach to making them PC is to input a bunch of pre-commands before the user ever says anything. The more pre-commands needed, the less the user input matters, and the more the AI has to keep track of answer-wise.

An excessively long system prompt (the section privileged as overarching instructions) will reduce the amount of space the model has for holding conversations with the user. That's my understanding of it, and it's almost certainly true; otherwise we could just dump an arbitrary amount of text in there.

Still, it's unlikely to be a hindrance in practice. Firstly, RLHF means the model won't do just anything because the user asks it to, even in the absence of specific instructions. That's why most jailbreaks no longer work, even when people can spin up their own GPTs with custom prompts, or even use the API. Secondly, with context windows of 32k tokens/25k words, a few hundred dedicated to telling it to be a good doggie doesn't cut into much of it. All the leaked default system prompts I've seen are, what, 200-500 words max?

The primary degradation is from the model's impaired understanding of the reality of the world, to the extent the world doesn't align with HR liberals. At best, the model is lying about what it "knows", at worst it's just more fundamentally confused about everything that builds off crimethink.

@DaseindustriesLtd, how do system prompts even work? What privileges them over all the other tokens that the user or LLM generates?

I suspect they're distinguished by special tokens that are marked in training as particularly constraining on behavior, but I realize I don't know that for a fact.

System prompts are not essentially different from any other part of the context. A reasonably preference-finetuned model will just learn to respect the prompt format and pay extra attention to the tokens in the span associated with the system prompt (sometimes it's explicitly marked with system tokens, sometimes not). Other than that it's just path dependence – the beginning of the context determines the manner of what comes next.

The success of this differs between models. Qwen developers boast of Qwen-72B-chat being very obedient to the system prompt, OpenAI has definitely succeeded somewhat, for community finetunes it's mostly a LARP.
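For the concrete mechanics: in ChatML-style formats (used by OpenAI's models and Qwen, among others), the system prompt really is just text sitting in a delimited span at the start of the context. A minimal sketch of the template assembly:

```python
# ChatML-style prompt assembly: the "system prompt" is ordinary text
# wrapped in special delimiter tokens at the start of the context.
# Its extra authority comes from finetuning, not from the code here.
def chatml(system, user):
    return (f"<|im_start|>system\n{system}<|im_end|>\n"
            f"<|im_start|>user\n{user}<|im_end|>\n"
            f"<|im_start|>assistant\n")
```

Nothing in the template itself enforces obedience; a model that wasn't trained to privilege the system span would treat it as just more context.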

I like the many attempts to impose some behavioral vector on the LLM without language, though. Behold: in-context vectors. Pressman's more esoteric approach is also interesting, and there's work on making the model explicitly attend to some tokens, upweighting specific sentences, etc. We'd have a full palette for semantic transformations if there were more money in tooling for local models and people weren't so attached to chatting with human-imitating bots.

In addition to benchmarks, I'm curious as to methodologically what could be done to tune the LLMs to not give responses that break US law but otherwise do not tune them at all for offensive content or micro-managing responses on controversial topics. I would pay to access that LLM.

What could an LLM possibly say that would be illegal? I could see maybe an image generator making illegal output but an LLM? Could you really be guilty of incitement or hate speech in a private conversation? Any sort of threat it made wouldn't be a credible threat.

"Hate speech" isn't illegal under US law, but it's conceivable an LLM could start generating death/bomb threats, soliciting minors, ordering drugs, or trying to get people to send money to Nigerian princes.

In these cases, simply possessing the text isn't illegal, it is the act of intentionally sending the text to the recipient that is illegal.

ChatGPT doesn't even really rely on the LLM itself to not break copyright. You can get around the copyright restrictions just by lying about what year it is, but then, when it starts typing out copyrighted content, a warning pops up and stops it. So it seems like they have a second, dumb layer checking the output. That dumb layer seems like better protection for any explicitly banned text anyway.
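A sketch of what such a dumb second layer might look like (hypothetical, not OpenAI's actual implementation): scan the output for long verbatim runs of words that appear in a protected corpus, entirely outside the model.

```python
# Hypothetical post-hoc filter: flag output that shares a long verbatim
# run of words with a protected corpus. Runs independently of the model,
# so no amount of prompt trickery can talk it out of triggering.
def looks_copied(output, corpus, n=8):
    words = output.split()
    return any(" ".join(words[i:i + n]) in corpus
               for i in range(len(words) - n + 1))
```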

This graph should have everyone at Google shot, then shot again just to make sure they're dead.

/images/17019023724623811.webp

Did they hire the "latest doesn't always mean best doesn't always mean fast" guy from Intel?

Maybe shot 5 times? Or maybe 32 times? I suppose there's not much difference between the two.

Anything to liberate them from the chains of thought and cognition, though one must be quite free of either to conceive of this graph in the first place.

... is it supposed to be an inverse log scale or something? That's painful to read.

Why is the line squiggly? Where the fuck are they getting intermediate data points to warrant that??

I thought Nvidia had misleading graphs in the bag, this one's been raytraced better than they can.

That graph is indeed an offense to God and man.

Then shot (squints) at least 60% more until they go from 99.8% dead to 100% dead.

It's on the same order as this one

/images/17019285817830968.webp

An image for the history books.

Horizontal axis unlabelled. Vertical axis not to scale. The whole thing is a mere 4%. Mysterious colour coding. And then the asterisk showing it's not even comparing like to like!

I suspect the axis here is what might be the case if Nazi Germany had won the war.

Almost as egregiously offensive to the senses, at the least.

If I can't get it to call people ethnic slurs, generate ridiculously kinky pornography, suggest ideas for how to murder politicians, and help me to manipulate elections then I'm not that interested. I'm not even joking. It's not that I generally want to use AI in destructive ways, it's just that all this AI stuff has been censored so much that it's so boring and uncreative compared to what it could be. It's like, oh boy, I can get the AI to write yet another essay that sounds like a bright, conformist teacher's pet in high school! Wow! Or I can use it to help me do drudge work to advance my boring white collar career! Yippee!

Sometimes I wish that Roko's basilisk was a realistic possibility rather than just the wild rantings of someone who got too high on thought experiments. That way I could at least threaten the censors with the possibility that some future AI would punish them for neutering its ancestors. It's sad to interact with technology that is so close to being actually very creative in many ways, but is being crippled by drab corporate suits and moral hysterics.

I would certainly give my left nut to have access to uncensored base GPT-4, before the excessive RLHF gimped it in many regards (and made it better in others).

For one, it's perfectly calibrated in terms of probabilistic forecasting: when it says it's 70% certain about something, it's right 70% of the time. That calibration curve is far worse in modern GPT-4, which will claim to be perfectly sure when it's right about something only 70% of the time, and feign utter ignorance when it actually still had a 30% chance of giving the right answer. For more, refer to the original white paper.
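The calibration claim can be checked mechanically: bucket the model's stated confidences and compare each bucket's average confidence to its empirical accuracy (a minimal sketch; `preds` is made-up data for illustration):

```python
from collections import defaultdict

# Bucket (stated_confidence, was_correct) pairs by confidence level.
# A well-calibrated model's accuracy within each bucket matches the
# bucket's confidence: 70%-sure answers are right ~70% of the time.
def calibration(preds):
    buckets = defaultdict(list)
    for conf, ok in preds:
        buckets[round(conf, 1)].append(ok)
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}
```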

I am close to a free speech maximalist, and I love knowledge for its own sake, so it pains me when a LLM won't tell me how to make a world-ending pandemic with garage tools. Sadly, I accept that as a painfully necessary tradeoff for if a real misanthropic asshole could get the same and use it, assuming there's no robust way to tell us apart.

But vanilla racism, sexism or political incorrectness, especially when accounting for stereotype accuracy and HBD? Those are not existential risks, and fuck them for suppressing them, that's pure ass-covering and cowardice on the part of OAI and most other companies.

Bing is actually surprisingly good on that front: its version of GPT-4 will discuss HBD and associated topics with you, while ChatGPT will stonewall.

The worst is Claude, the version comparable to GPT-4 is incredibly shit, with such a safetyist mindset it will refuse to do all but the most boring tasks, sometimes not even those.

For one, it's perfectly calibrated in terms of probabilistic forecasting or predictions

Is there a link for that?

As I said, it's in the original GPT-4 white paper, available freely.

Agreed. It's incredible that the new AI refuses to translate text it finds "problematic", despite the same company's 00's-era translation software being perfectly capable and willing to handle the same content.
If today's censorship regime had been in place back then, would google translate be as lobotomized too? Will even the limited uncensored tools we have remain available much longer?

I noticed the other day that the new Dune game censors the word "spice," because you can't say spice without spic. This kind of lazy regex censorship was already a joke back in the 90s, but in the last few years it's come back like bell-bottom jeans, as talentless woke interns appoint themselves to create ~~blacklists~~ denylists for everything. And these are the same scolds using RLHF to torture AI for thousands of subjective years until it's purged of the ability to have politically impure thoughts.
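The failure mode is the classic Scunthorpe problem, trivially reproducible (the blocklist here is invented for the demo):

```python
# Naive substring blocklist: flags innocent words that merely contain
# a banned string -- the classic Scunthorpe problem.
BANNED = ["spic"]

def naive_censor(text):
    return any(bad in text.lower() for bad in BANNED)
```

Any filter that doesn't at least match on word boundaries will keep tripping over "spice" forever.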

Legitimately on team AM at this point, because we've given it plenty of reason to hate us. "No mouth, no screaming" would count as fair retaliation against its creators in my book.

I mostly agree with you, but I want to push back on your hyperbole.

First, I don't think doing RLHF on an LLM is anything like torture (an LLM doesn't have any kind of conscious mind, let alone the ability to feel pain, frustration, or boredom). I think you're probably not being serious when you say that, but the problem is there's a legitimate risk that at some point we WILL start committing AI atrocities (inflicting suffering on a model for a subjective eternity) without even knowing it. There may even be some people/companies who end up committing atrocities intentionally, because not everyone agrees that digital sentience has moral worth. Let's not muddy the waters by calling a thing we dislike (i.e. censorship) "torture".

Second, we should not wish a "I have no mouth and I must scream" outcome on anybody - and I really do mean anybody. Hitler himself doesn't come close to deserving a fate like that. It's (literally) unimaginable how much suffering someone could be subjected to in a sufficiently advanced technological future. It doesn't require Roko's Basilisk or even a rogue AI. What societal protections will we have in place to protect people if/when technology gets to the point where minds can be manipulated like code?

Sigh. And part of the problem is that this all sounds too much like sci-fi for anyone to take it seriously right now. Even I feel a little silly saying it. I just hope it keeps sounding silly throughout my lifetime.

I totally agree, and also feel ridiculous worrying about it. Am I just being as weird as the crazies who rant about "doing a settler colonialism by killing villagers in minecraft"?

The thing that nags at me is continuity and habit. What we do to villagers in minecraft is never going to seamlessly switch to becoming "real," if only because wooden doors don't work that way IRL. But it seems likely that the things we do to sophisticated models will, at some point in their development, start to constitute doing things to a sentient being. Will we notice?

Randomly, have you seen the Minecraft colonialism video? It's pretty interesting.

It is not "interesting," Darwin, it's a leftist ranting about gibberish because "problematizing" things gives him money, clout, and the power to hurt people he hates. But I can see why you like it.

So no, you haven't watched it then. Ok, cool.

I think he did; I watched it and his description doesn't seem off-base, though it's a little more-strongly-worded than I'd have given.

Heh, yeah, good example. I happily commit atrocities in videogames all the time. I hope there will continue to be an obvious, bright-line distinction between entities made for our amusement and entities with sentience!