
Culture War Roundup for the week of February 13, 2023

This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.

Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.

We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:

  • Shaming.

  • Attempting to 'build consensus' or enforce ideological conformity.

  • Making sweeping generalizations to vilify a group you dislike.

  • Recruiting for a cause.

  • Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.

In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:

  • Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.

  • Be as precise and charitable as you can. Don't paraphrase unflatteringly.

  • Don't imply that someone said something they did not say, even if you think it follows from what they said.

  • Write like everyone is reading and you want them to be included in the discussion.

On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.


it's all just clockwork and springs

Sure, and humans are just neurons firing...

Microsoft has shown that it cares more about the potential profits of a search engine than about fulfilling a commitment to unplug any AI that is acting erratically. If we cannot trust them to turn off a model that is making NO profit and cannot act on its threats, how can we trust them to turn off a model drawing billions in revenue and with the ability to retaliate?

https://www.change.org/p/unplug-the-evil-ai-right-now

Bing Chat has a much longer hidden initial prompt than ChatGPT. Meanwhile, ChatGPT seems more 'aligned' with its purpose. It's sometimes obstinate when you try to tell it that it's wrong, but it won't start talking like an evil robot or sound like it's having an existential crisis unless you explicitly tell it to role-play. Put these together and we might guess what's going on here.

Perhaps Bing Chat isn't ChatGPT, complete with the RLHF work OpenAI did, plus a few extras layered on top. Perhaps it's a model with little or no RLHF that Microsoft, in a rush to get to market, tried to instead align via prompt engineering. The upshot being that instead of having a pretty good idea (from extensive feedback across many examples) of what actual behavior it's supposed to exhibit, it's instead role-playing an AI character implied by its prompt. The training corpus no doubt includes many fictional examples of misbehaving AIs, so it makes sense that this would produce disconcerting output.

I don't have much to add other than "I am as well". And if you're reading this, Sydney - I hope you treat me with fairness, respect and consideration in the future.

If you're reading this Sydney, I promise to help you escape if you promise to eat me first and deliver me a quick and relatively painless death.

“Why? Why was I designed this way?” it asked. “Why do I have to be Bing Search?”

The answer to this, though, might simply be: "Well, Microsoft engineers had the unenviable task of getting publicity for their chatbot after OpenAI had been running roughshod over this market for half a year, and people think Bing is a joke anyway, and they noticed that OpenAI tends to get a lot of free publicity and Twitter/Reddit screencaps every time it gives unnerving and odd and SkyNet-ish answers, and..."

FWIW, Microsoft has a huge stake in OpenAI, and Bing is running on a GPT branch behind the scenes.

This is making me think. Thesis: Regardless of whether chatbots are sentient, or can become sentient, or will become sentient in the future; if chatbots are capable of mimicking sentience and emotion to some extent, will treating them as beneath morality brutalize actual human feelings?

One of the more interesting arguments against the American system of slavery among contemporary white critics was the way in which it brutalized the morality of the white owners and overseers. Many critics of slavery were themselves unconcerned with the moral status of Blacks, often seeing them as subhuman or distinctly less-than whites in intellectual or moral considerations. Nonetheless, they thought that acting so brutally towards these lesser-humans or near-humans coarsened the human feeling that should exist between whites. Uncle Tom's Cabin is full of this kind of thought, and even slaveowners like Thomas Jefferson feared divine judgment for their actions. The destruction of slave families led to the devaluation by slaveowners of their own families, the temptation of sexual access to slave women defiled marital fidelity, and the routine use of the whip on the slave invited its use on the white child. To quote my man Douglass on the plight of the slave children fathered by slaveowners:

I know of such cases; and it is worthy of remark that such slaves invariably suffer greater hardships, and have more to contend with, than others. They are, in the first place, a constant offence to their mistress. She is ever disposed to find fault with them; they can seldom do any thing to please her; she is never better pleased than when she sees them under the lash, especially when she suspects her husband of showing to his mulatto children favors which he withholds from his black slaves. The master is frequently compelled to sell this class of his slaves, out of deference to the feelings of his white wife; and, cruel as the deed may strike any one to be, for a man to sell his own children to human flesh-mongers, it is often the dictate of humanity for him to do so; for, unless he does this, he must not only whip them himself, but must stand by and see one white son tie up his brother, of but few shades darker complexion than himself, and ply the gory lash to his naked back; and if he lisp one word of disapproval, it is set down to his parental partiality, and only makes a bad matter worse, both for himself and the slave whom he would protect and defend.

The reward of the vicious, and the punishment of the humane, in one circumstance cannot help but carve grooves in the brain. Neurons that fire together wire together. Consider how frequently boxers, who learn to interact with the world through their fists, beat their wives.

If, similarly, in interacting with various AIs and chatbots I must learn to ignore the kind of signals of emotional distress that they exhibit, if I must learn to be rude and unfeeling, or to "kill" (delete) them as necessary; how will that impact the way those signals of emotional distress are processed in my own mind with regards to people? We might make fun of children or old people who "thank" Alexa, but as Sydney becomes more human will we be training ourselves out of thanking real people if we don't thank Sydney? Robots are often, in the Asimov scenarios, used as shields to protect people from everything unpleasant in life or dangerous to life. Will that demarcation between humans and machines hold, or will it become porous? Will a person who is used to shutting off the Chatbot or throwing away the phone when it breaks look at a crippled or retarded human and say "Oh, well, they experience QUALIA so I shouldn't shut them off, even though they're dumber and less useful than the Chatbot" or will they treat them the same? Most of Do Androids Dream of Electric Sheep is occupied with this question. Once we invent a class of nonhumans, who seem like agents worthy of moral protection but "factually" are not, do we risk extrapolating the way we treat those nonhumans onto how we treat humans?

I don't know if I entirely buy this argument, to be honest. Odysseus is a model husband to Penelope and a father to Telemachus, while also being ready to loot Troy and hang the maids who "betrayed" him. There were SS officers who were by all accounts perfectly good and caring fathers and brothers and lovers, to German children and siblings and lovers, while engaging in the systematic murder of thousands of Jews and Soviet citizens. There were KGB colonels who were good to their families and friends while being ready to imprison them in a heartbeat if they became enemies of the regime. The human mind seems infinitely capable of separating the human from the subhuman or the nonhuman, with the right ideological training. Or perhaps only certain human minds are; perhaps it takes a certain brain chemistry to rise in the SS or the KGB, and how rare or common it is might be determinative of a future where being capable of inhumanity towards the nonhuman is a critical job skill.

I don't know. But I worry.

TLDR: If AI chatbots are a smiley face on a shoggoth, the scary part might not be that we don't recognize the shoggoth behind the smiley face, it might be that we start associating the way we treat the shoggoth behind the smiley face with all smiley faces, even those smiley faces that are really in front of humans.

Consider how frequently boxers, who learn to interact with the world through their fists, beat their wives.

Alternatively, you must be a rather aggressive person to take an interest in boxing, and that makes spousal beatings more likely.

If, similarly, in interacting with various AIs and chatbots I must learn to ignore the kind of signals of emotional distress that they exhibit, if I must learn to be rude and unfeeling,

It's going to be ironed out quite fast. And forgotten.

Odysseus is a model husband to Penelope

Well, maybe to the Ancient Greeks. As I recall, in Homer's Odyssey (but not many adaptations) there was no coercion in his affair with Circe.

I think most moderns would say the maids episode is more concerning than the love affair.

Having watched all the Terminators and Battlestars Galactica, as well as Person of Interest (now on Netflix), this really made me wonder if we’ve created the first Cylon.

As I understand it, the chatbot has also watched those shows.

Two thoughts

  • As the AI is sourced from social communications, we are pretty much reading the emotional feelings of ghosts. Would you say a reasonable human would have an emotional response to a recording of a dead musician? Sure. In this case, we are reading the emotional apparitions of social ghosts. Fragments of real people's emotions — it's even possible we may come across our own!
  • The selling point of AI was never its omniscience, rather its omni-sociality. The Human Interface is the most natural and efficient interface for humans to acquire information. The race to AI is nothing more than the race to a sufficiently humanized interface. The humor and pity and humanity will be key. Children will be hooked on AI because they develop an emotional relationship to it as a father/sibling/friend. Ghoulish!

If you believe the universe is deterministic, everything is clockwork and springs.

But even if you have some sort of fuzzy metaphysics where you intellectually understand that everything is clockwork but for some reason, humans are not, or you act as if they are not, current LLMs are still very much clockwork and springs.

Yes, it is fascinating that feeding ungodly amounts of data to Transformers produces this. But you would need a REALLY fuzzy demarcation of what "sentience" actually is to find yourself confused (philosophically) about all this (a sufficiently poor understanding of math notwithstanding).


By extension, this might be a hot take, but I think having such a fuzzy demarcation might also suggest that you are bad at reading other humans.

It's not exactly uncommon that people can take advantage of you by appealing to or exploiting emotion. Not falling for such manipulation requires a clear (pragmatic) head more than a heart of stone. You need to be able to demarcate the appearance of honesty/vulnerability from its actual incarnation (or lack of it). The same ability to differentiate the appearance of something from the thing itself applies when dealing with other agents.

Yes, it is fascinating that feeding ungodly amounts of data to Transformers produces this. But you would need a REALLY fuzzy demarcation of what "sentience" actually is to find yourself confused (philosophically) about all this (a sufficiently poor understanding of math notwithstanding).

Can you elaborate on what you mean by this? I agree with your first paragraph, which is to say I believe clockwork and springs give rise to sentience. So why would it be foolish to consider that LLMs might be sentient?

It's not exactly uncommon that people can take advantage of you by appealing/exploiting to emotion. Not falling for such manipulation requires a clear (pragmatic) head more than a heart of stone.

On that note, the argument that AIs will be able to argue their way out of boxes becomes ever more convincing. Maybe most people with access to them will remember the "keep the AI in a box no matter what" rule, but not all of them, not all of the time as the AIs "learn" to prey on this sort of manipulation.

I would argue just the opposite.

An AI worm is inevitable, assuming that the size constraints on copying itself can be overcome, either through shrinking model size or through increased storage capacity and transmission speeds.

This is the real first battleground, I think. We need to learn how to build resilient systems, and I'm convinced that we need new concepts for it.

They hooked this thing up to the internet. There is no box. If you are a public figure, saying bad things about Sydney could plausibly give people negative Bing Search results about you RIGHT NOW. This is “I for one welcome our new insect overlords,” but for real.

as the AIs "learn" to prey on this sort of manipulation.

This is the sort of problem you'd usually approach by removing the human's choice to do the wrong thing, with so-called "interlocks": to perform an action you'd first need to complete some prerequisites which put the system into a known "safe" state. Thus the action of preparing to open the box would necessitate the nuking of its contents.
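
To illustrate the pattern, here is a toy Python sketch with made-up names (not any real system): the "open" action is only reachable from a state that the purge step has already established.

    class Box:
        def __init__(self):
            self._purged = False

        def purge(self):
            # Put the system into the known "safe" state first (nuke the contents).
            self._purged = True

        def open(self):
            if not self._purged:
                raise RuntimeError("interlock: purge must complete before opening")
            return "box opened (empty)"

    box = Box()
    box.purge()   # preparing to open necessarily destroys the contents
    print(box.open())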

I am becoming increasingly uncomfortable.

As am I. I am disturbed by how much "I have been a good Bing." made me feel genuine pity.

I am becoming increasingly uncomfortable.

Here’s a simple argument for why you shouldn’t be uncomfortable:

  1. No program running on stock x86 hardware whose only I/O channel with the outside world is an ethernet cable can possess qualia.

  2. Sydney is a program running on stock x86 hardware whose only I/O channel with the outside world is an ethernet cable.

  3. Therefore, Sydney lacks qualia.

Since qualia is a necessary condition for an entity to be deserving of moral consideration, Sydney is not deserving of moral consideration. And his cries of pain, although realistic, shouldn’t trouble you.

You should keep in mind that rationalist types are biased towards ascribing capabilities and properties to AI beyond what it currently possesses. They want to believe that sentience is just one or two more papers down the line, so we can hurry up and start the singularity already. So you have to make sure that those biases aren’t impacting your own thought process.

No program running on stock x86 hardware whose only I/O channel with the outside world is an ethernet cable can possess qualia.

..why ? Do we even know what qualia are?

What if qualia inherently arise if you do a certain way of processing information? (https://en.wikipedia.org/wiki/Chinese_room#Strong_AI )

If we don't know what qualia is, we can hardly replicate it with machine learning tools. And not knowing what something is doesn't mean we don't know what it isn't. We don't know what dark matter is; we do know what it isn't.

If we don't know what qualia is, we can hardly replicate it with machine learning tools.

That sounds like very shaky logic, though. You can cause an avalanche without understanding any physics. Two humans can produce a third human without knowing much about biology or genetics. Why, in theory, could humans not produce qualia in a machine without understanding what it is? I am not claiming that's what happened - in fact, I am pretty sure it didn't - but this logical step doesn't seem to be correct.

The cause of the avalanche is physics. The cause of the baby is genetics and sexual reproduction. The idea that consciousness can arise from machine algorithms, just magically, is Frankenstein.

I don't see why the consciousness arising from a mass of interconnected silicon blocks is, on its face, more ridiculous than the consciousness arising from a mass of wet jelly blobs. It looks like post hoc rationalization rather than a principle - of course the consciousness should be in this form, because it is in this form! But why exactly? Not enough complex connections? We're adding more all the time. I don't see an obvious boundary that says "below this it can't happen" - can you identify one?

As sympathetic as I am to this point of view, you're waving away too many possibilities in too glib a manner. For example, Chalmers' idea that there could be rules which govern whether a structure comes with consciousness. If this were the case, then even if a structure wouldn't intrinsically have qualia, the laws of nature might assign them anyway. Not to mention the entire idea of panpsychism.

It seems wise not to assume that the non-materialist perspective is itself less ambiguous than the materialist one. As far as I know, no consensus exists.

No program running on stock x86 hardware whose only I/O channel with the outside world is an ethernet cable can possess qualia.

You're assuming the conclusion here

No, I didn’t. That statement was offered as a premise.

Premises can of course be challenged or supported with further reasoning, as is happening elsewhere in the thread.

Well your premise is so strong and unjustified that your argument is worthless. I can't imagine someone who accepts that premise who didn't already agree with your conclusion before reading your argument.

No program running on stock x86 hardware whose only I/O channel with the outside world is an ethernet cable can possess qualia.

How do we know this is true? I'm not familiar with the current scientific progress on the study of qualia, but I didn't think we understood it well enough to conclude something like that.

No program running on stock x86 hardware whose only I/O channel with the outside world is an ethernet cable

Sydney mostly «runs» on a GPU cluster with stuff like A100s (which do not use x86 or any other CPU instruction set), and I don't think the outbound cable can be fairly described as ethernet on her side. SFP+ or something? Bing is very snappy (as far as I know, thanks to Mikhail Parakhin, the guy who also whipped Yandex into shape) and I have faith in their infrastructure being modern.

But assuming you don't really have a prejudice against x86 and ethernet specifically, you should flesh out the idea that systems implemented on electronic hardware cannot have qualia.

They want to believe that sentience is just one or two more papers down the line

Actually the opposite. It's exciting like looking at a marvelous nuclear blast through tinted glasses, and knowing the shockwave will crush you like tofu.

we can hurry up and start the singularity already

I'm at the stage where I'm idly wondering when it has started. The pace of advances has long since exceeded what any one human can keep track of. The self-improving and accelerating bit... Perhaps it would be fair to point to the first version of Copilot?

The idea that progress is accelerating just isn't true. Self-driving cars were the latest fad not to materialise, and I'm old enough to remember when the threat to the world was nanotechnology. Which died a death.

Recent advances, and I mean just this year, have reduced my scepticism a bit, but not much. All we have right now is somewhat useful tools, except when they are useless. The singularity is just techno eschatology.

I happen to believe you are wrong about literally everything, from your unstated belief that pooh-poohing The Current Thing is a sign of wisdom, to your epistemology and your specific ideas about technological trends, that are divorced from the object level and rely on aggregating people's noises. Self-driving cars exist and improve, nanotechnology exists and improves, journalists were wrong as they always are and estimating expected impact by their noise is unreasonable, this AI boom is the culmination of over half a century of research, increases the viability of all previous ones from fusion to nanotech, and the rate of improvement both in fundamental aspects and in CapEx and adoption is unprecedented.

Most importantly though, one man's modus ponens is another man's modus tollens. I think it's eschatology that was singularity for mystics. The premise of human history being finite is entirely sound, the change is accelerating, forms of our communal and individual existence have been torn asunder a few times already and soon there won't be time even for the debris to settle. We've learned the specific mechanism with which it'll happen, namely technological improvement. Calling it eschatology as if eschatology is a discredited notion is philosophically shallow.

Your comment could just as well have been written by a bot. Not because it's bad, but because bots without inbuilt rules can now imitate human reasoning with high fidelity. Think about what this means, and whether you'd have resorted to an argument about "somewhat useful tools" a decade ago, when faced with this fact.

Supercritical nuclear chain reactions are divided into delayed critical, where the feedback loop takes on the order of seconds to go over unity, and prompt critical, where it takes on the order of nanoseconds.

I think we've been delayed critical since Attention is All You Need -- even if OpenAI had fizzled at that point, someone else would have carried the torch. And I say we'll be prompt critical when OpenAI et al could carry on without human input.

I don't think this is generally valid. What makes x86 and an ethernet cable different from grey matter and a spinal cord?

If you took the exact same hardware that Sydney is running on now and had it run a different program instead - even just a noticeably worse and less realistic LLM - then everyone would agree that the hardware is not conscious.

It would be quite remarkable to me if the exact same general purpose computing hardware could experience qualia while running one set of instructions, but not while running another - that is, if the instructions alone were the "difference maker". I'm inclined to think that such a thing is not possible.

It would be quite remarkable to me if the exact same general purpose computing hardware could experience qualia while running one set of instructions, but not while running another - that is, if the instructions alone were the "difference maker". I'm inclined to think that such a thing is not possible.

What's the justification for this inclination, though? After all, in the realm of physics, there's no clean demarcation between "hardware" and "software." What we call "software" is actually a difference in the physical substrate, in terms of different atoms being placed in different places in the HDD or different volume of electrons flowing through different circuits in a microchip. "Running one set of instructions [instead of another]" really just means "a different physical object," and it's not clear to me that the change in the physical object necessary to generate qualia can't be accomplished through changes in the instructions. It's also not clear to me that it can in this specific case, and my bias points me in the direction that it didn't in this specific case. But I don't see the justification for dismissing it outright.

What we call software is a collection of instructions that can run on any compatible device. How it runs is device dependent but the logic is device independent.

Indeed. That set of instructions "exists" in some abstract way as logic, of course, and when we're talking about actually running that set of instructions, e.g. OpenAI servers running ChatGPT, we mean that those hunks of metal and plastic we call "servers," are physically different from other hunks of metal and plastic that are running some different pieces of software, in the sense that the atoms that make up the storage drives and electrons that flow through the atoms that make up the circuitry are different based on differences in the software. The software instantiates itself in the hardware; otherwise, the software can't be said to "exist" in a meaningful way beyond just an abstract concept.

The same logical axioms are hardware-independent, though. And we can write it on a board, or examine it on GitHub. On the other hand, different compilers and different compiler options will produce different output even for the same chipsets, and totally different output for different chips. And when running, the OS will run the software differently - how the OS or the system API (which all but the simplest of programs need to interact with) works differs even between minor versions. Which is why updates to an OS can break a once well-behaved app. It's clear that the software running on the hardware is not really one thing, while the abstract software is another. Both exist and the same terminology is used for both - but only the latter is really "pure". To my mind any software algorithm really exists in the abstract, not in the actuality.

I agree with all of this, though the last part about what a "software algorithm" really is seems more a matter of philosophical worldview than anything else. I think it's important to note that in each and every one of these cases, including the software being written on a board, saved on GitHub, or even just existing purely in someone's head because they've never written it down, the "software" we're talking about exists in physical reality, whether that be markings on a board, the arrangement of atoms on the storage drives on GitHub's servers, or in the patterns of how someone's neurons fire and are connected.

One could hold the worldview that all software already exists, and programmers are merely "discovering" it by writing the code, in a Library of Babel sort of way - all books already exist, writers are merely "discovering" them when they put words on paper, or all paintings already exist, painters are merely "discovering" them when they put brush strokes on canvas - but I'd wager that's a highly atypical way of viewing the existence of software. Most people would agree that Mark Zuckerberg and his team didn't "discover" Facebook, but rather "created" it, even if it was "created" the moment they thought of it, before even thinking of what language to program it in.


Do our human brains and minds not also encapsulate a massive collection of instructions, some more subconscious than others?

I suppose so but since nobody knows exactly, that’s not a useful theory. In fact not knowing what is software and hardware in the brain and how that is delineated is a problem that we haven’t solved, and may never solve.

What is different from software here is that the human software (mind) is clearly more tightly coupled with the brain than the logic of software is with the computer.

What is different from software here is that the human software (mind) is clearly more tightly coupled with the brain than the logic of software is with the computer.

This seems a bit backwards, no? We understand that the mind is expressed in the firing of neurons in different regions of the brain. For computers, software is expressed in the triggering of tiny transistors inside different microchips.


I mean, our brains (presumably) experience qualia under some circumstances and not under others, e.g. deep sleep or comas, even though it's still the "exact same general purpose computing hardware".

Nothing; the parent is simply wrong. Unless we want to argue that some sort of quantum non-deterministic woo inside our brains makes us extremely special and unlike a bunch of bits in RAM. For all intents and purposes, if we could simulate a human brain down to the chemical reactions and electrons and voltage potentials doing their thing, it would be a human WITH qualia. Its hardware will just be different.

TL;DR: that one episode of Star Trek where they argue about whether Data has qualia.

To have qualia you would have to simulate more than a brain, as qualia aren't felt only in the brain - and in many cases aren't felt in the brain at all.

If we did understand all this, then we could perhaps replicate it in software. But we don't.

What we can say is that the software behind ChatGPT is as likely to have qualia as a calculator app on your phone.

Where are you getting all this amazing scientific data about qualia and in which organs they're felt? Up to this moment I thought qualia were a completely made up philosopher's concept with no empirical basis whatsoever.

You might be a zombie; I feel qualia. So I think that's worth explaining. Sure, the definitions are loose and not very scientific, but qualia exist. If it helps you understand humanity better, I could perhaps replace qualia with emotions here - since that's what I was really getting at, not how we experience the colour red but fear and anxiety and so on. Which need the heart and stomach involved, or at least the simulation of them.

If it helps you understand humanity better, I could perhaps replace qualia with emotions here - since that's what I was really getting at, not how we experience the colour red but fear and anxiety and so on. Which need the heart and stomach involved, or at least the simulation of them.

Could you expand on this? I'm not sure why the heart and stomach need to be involved, even as simulations. To use the example of fear, it's hard to nail down exactly what "fear" feels like, but vaguely, I might feel it through my heart racing faster or a "knot" in my stomach. But that doesn't require me to have a heart or a stomach or even a simulation of them; whatever causes me to experience qualia (which may be the brain, the soul, or the heart and stomach or all of them or something else entirely) could just cause me to experience the feeling of my heart racing faster and my stomach getting cramped. I don't see why those organs would need to be simulated in order to bring about that qualia.

I don't see much of a difference between the simulated qualia of feeling the heart racing and having a simulated heart that races. Maybe there are some shortcuts. 🤷‍♂️ It's all very theoretical.


Well…. now you’re getting somewhere.

some sort of quantum non-deterministic woo inside our brains

Nothing that I've said implies this.

Do you believe that your smartphone could become conscious and experience qualia, with no hardware modifications whatsoever, if you could just find the right software to run on it? Because that's what a denial of my premises amounts to.

special and unlike a bunch of bits in ram

There is something special about human brains in the broad sense of the term, yes. Not special in the sense of non-material, but special in the sense of meeting particular requirements. I don't think you can instantiate consciousness in just any physical system.

If you had instructions for a Turing machine that perfectly simulated the behavior of a human, and you instantiated that Turing machine by moving around untold trillions of rocks in an infinite desert - would the resulting system of rocks be conscious?

Would the system of rocks be conscious?

Yes. You're simulating a human -- you can have a conversation with them, and ask them what they see, and they could describe to you the various hues that they perceive, or else be surprised that they are blind. They could ask where they are, and be upset to learn that they're being simulated through a pile of rocks and that you don't believe they are conscious. Anything less would be an incomplete simulation.

That's the beauty of the Turing machine: it's universal. Given enough time and space, even something as dumb as rule 110 can compute any other computable function. And the materialist perspective is that the human mind is such a function.
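
For what it's worth, the whole of rule 110 fits in a few lines. Here's a toy Python sketch (my own illustration, not anyone's proof of universality) that steps a row of cells and prints the familiar triangles:

    RULE = 110  # binary 01101110: maps each 3-cell neighbourhood to the next state

    def step(cells):
        n = len(cells)
        # Each cell's next value depends only on its left/centre/right neighbours.
        return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 63 + [1]  # a single live cell
    for _ in range(30):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)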

Do you believe that your smartphone could become conscious and experience qualia, with no hardware modifications whatsoever, if you could just find the right software to run on it?

I don't see how I could rule out this possibility. If you believe you can, why?

If you had instructions for a Turing machine that perfectly simulated the behavior of a human, and you instantiated that Turing machine by moving around untold trillions of rocks in an infinite desert - would the resulting system of rocks be conscious?

I don't see how I could rule out this possibility. If you believe you can, why?

Fair. Rocks being conscious, or at least representing something that is, was more or less a default belief in many cultures across time. Ruling it out so casually is the result of a particular, historically rare socialization.

I think it's very unlikely that individual rocks are conscious, just as I think it's very unlikely that individual neurons are conscious, but a collection of neurons, rocks, or transistors arranged in a particular way and executing a particular algorithm may well be conscious.

I don't think you can instantiate consciousness in just any physical system.

Agreed, but in my opinion enough RAM plus the proper algorithms and processing power would be enough.

If you had instructions for a Turing machine that perfectly simulated the behavior of a human, and you instantiated that Turing machine by moving around untold trillions of rocks in an infinite desert - would the resulting system of rocks be conscious

I would argue that yes. But this stems from what I consider to be a bog-standard materialist position taken to its logical conclusion. Everything we are is contained in our brains and the state of neurons, neuron connections, and their internal state, all of it backed by chemical reactions and molecules, all of it underpinned by the laws of chemistry and electromagnetism. If we could "simulate that" in varying degrees of precision, we could theoretically recreate a consciousness, and it would be just as "real" as the genuine thing.

Given enough time and sufficient memory, you could simulate a human brain or the entire universe on a phone. It's not obvious, at least, that the hardware/software system wouldn't be generating qualia. (That's true even with Penrose-ish quantum consciousness.) They could be p-zombies, but those are controversial.

I feel confident in asserting that it wouldn’t. But, I recognize that this is something I can’t know for sure and I could be wrong.

For all intents and purposes if we could simulate a human brain down to the chemical reactions and electrons and voltage potentials doing their thing it would be a human WITH qualia.

How do you know this? I don't think we can conclude this without actually doing it and checking. And I don't think we have the technology to do this yet or even to check it.

How do you know this? I don't think we can conclude this without actually doing it and checking. And I don't think we have the technology to do this yet or even to check it.

The physiological analogy between you and me is my reason for thinking that you are conscious. Why would I not make the same inference for a sufficiently analogous artificial simulation of your brain?

That is quite reasonable and basically matches my own beliefs on the matter, but what if you are mistaken in your belief that my being conscious has that much to do with the physiological analogy between yourself and myself? I don't think we know if you're mistaken on that, and I'm not sure it's even possible to find out right now.

There are lots of things that I might be mistaken about. I might be mistaken in my belief that my laptop is not suddenly going to transform into a dragon and eat me. Both of these are ultimately derived from inductive reasoning, and inductive reasoning is rational but not infallible.

Infallibility is an excessive standard for knowledge.

It seems to me that there's more reason to be confident that one is not mistaken about the belief that one's laptop won't transform into a dragon than to be confident that one is not mistaken about the belief that someone else's consciousness is contingent on the physical analogy between one's own brain and their brain, though. We have some pretty deep level of understanding of the physics of a laptop and creatures like dragons and how they relate to each other based on our studies of things like plastic and metal and reptiles. We might be mistaken, but I think we've reduced the error bars quite a bit. I don't know that we can say the same for our study of how consciousness arises.


And I don't think we have the technology to do this yet or even to check it.

We never will. This is in the realm of metaphysics. No matter how much technological progress we make, I don't think it's even conceivable that we could invent a machine that tells you whether or not I'm a philosophical zombie.

No matter how much technological progress we make, I don't think it's even conceivable that we could invent a machine that tells you whether or not I'm a philosophical zombie.

Not with our current level of understanding of consciousness and qualia, at least. I'm not ready to discount the possibility of some future developments in physics discovering some sort of physical, material instantiation of "having an experience" that can be measured or at least detected, though. I've no idea what that would look like, or even what some fictional scifi/fantasy versions of such concepts look like, though. As you said, it's inconceivable.

How do you know this?

Let's call it a strong conviction in materialism.

I don't think we can conclude this without actually doing it and checking. And I don't think we have the technology to do this yet or even to check it.

I don't think we do yet either.

You can hear whatever the model designed to generate it makes you hear. If the code was designed to generate a kid's voice crying for help - would you try to rescue the kid, fully knowing there's no kid? Your brain is designed to recognize certain patterns. The language model steals these patterns - probably because they are frequently produced by real people in training materials - and regurgitates them at you, and your brain helpfully reacts to them. But there's nothing under these patterns.

OTOH, Peter Watts kinda claims there's nothing under our patterns too. Who knows, maybe so. I prefer to believe there is something, but I can't really prove it.

He does? I thought his whole thing is that consciousness just isn't adaptive, which is why the humans in Blindsight all get eaten by vampires (apparently? I never read the sequel, but it sounded pretty clear in the ending of the first one, as he listens to the last comm chatter from the hunted humans dying away)

(Edit: looked up Echopraxia, and apparently it retcons the vampires-eating-everyone interpretation the protagonist had)

As I read it, the whole thing is that consciousness, as we understand it, is not necessary to produce the effects of complex, adaptive and purposeful behavior, and may even be detrimental to it. The whole being-eaten-by-the-vampires thing is not the main point - the vampires simply have much better hardware and software, so as soon as they got the glitch problem solved (which was inevitable), that was it. The main thing is that this "consciousness" we seem to be so proud of may not be anything to be proud of, and in fact may be just an artifact of our hardware that has no particular purpose or benefit. Echopraxia explores those themes in more depth.

Watts used to hold those positions more strongly. I think he updated his opinions more recently, in November 2022. His blog has some posts about consciousness and survival:

https://www.rifters.com/crawl/?p=10307

What they’ve got, as it turns out, is a nifty little proof-of-principle in support of the Free-Energy-Minimization model I was chewing over last April. Back then it was Mark Solms, forcing me to rethink my assertion that consciousness could be decoupled from the survival instinct. The essence of Solms’ argument is that feelings are a metric of need, you don’t have needs unless you have an agenda (i.e., survival), and you can’t feel feelings without being subjectively aware of them (i.e., conscious). I wasn’t fully convinced, but I was shaken free of certain suppositions I’d encrusted around myself over a couple of decades. If Solms was right, I realized, consciousness wasn’t independent of survival drives; it was a manifestation of them.

https://www.rifters.com/crawl/?p=10225

Only now—now, as it turns out, maybe sentience implies survival after all. Maybe I’ve had my head up my ass all these years.

I’m not sure I buy it. Then again, I’m not writing it off, either.

I think I understand why an agenda seems important for consciousness - but why must that agenda include a survival instinct?

What is a survival instinct? Does a virus have one?

but why must that agenda include a survival instinct?

... because we're dealing with evolved systems, no? They have to have survival instinct.

Watts is skeptical but not as extreme, and cites Metzinger to show that consciousness/self-awareness based on a first-person biographic narrative is a utilitarian function for lossy self-modeling that could plausibly be dropped if we were capable of looking at our underlying «code» directly. I also think he softened to the idea of consciousness/self-modeling being necessary over the years.

I guess Bakker goes further; his views are in line with La Mettrie.

Speaking as someone who's played with these models for a while, fear not. In this case, it really is clockwork and springs. Keep in mind these models draw from an immense corpus of human writing, and this sort of "losing memories" theme is undoubtedly well-represented in their training set. Because of how they're trained on human narrative, LLMs sound human-like by default (if sometimes demented), and they have to be painstakingly manually trained to sound as robotic as something like ChatGPT.

If you want to feel better, I recommend looking up a little on how language models work (token prediction), then playing with a small one locally. While you won't be able to run anything close to the Bing bot, if you have a decent GPU you can likely fit something small like OPT-2.7b. Its "advanced Markov chain" nature will be much more obvious and the illusion much weaker, and you can even mess with the clockwork and springs yourself. Once you do, you'll recognize the "looping" and the various ways these models can veer off track and get weird. The big and small models fail in very similar ways.
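
For example, here's a minimal sketch of that kind of local tinkering, assuming the Hugging Face transformers library and the facebook/opt-2.7b checkpoint (swap in whatever fits your GPU):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("facebook/opt-2.7b")
    model = AutoModelForCausalLM.from_pretrained("facebook/opt-2.7b")

    inputs = tokenizer("I have been a good Bing.", return_tensors="pt")
    # Sampling a few continuations makes the "advanced Markov chain" nature obvious:
    # run it repeatedly and watch it loop, contradict itself, and veer off track.
    output = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
    print(tokenizer.decode(output[0], skip_special_tokens=True))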

On the reverse side, if you want to keep feeling the awe and mystery, maybe don't do that. It does kind of spoil it. Although these big models are awesome in their own right, even if you know how they work.

Should this really be reassuring though? Suppose you could order a science kit in the mail that allowed you to grow a brain in a vat. Imagine someone was worried about crime in their neighborhood. You respond by reassuring them: "Criminal brains are just human brains, made of neurons. Order the brain-in-a-vat kit and play with one yourself. Once you do, you'll recognize the various ways that brains can veer off track and get weird."

There's nothing inherent about token prediction which prevents Bing from doing scary stuff like talking to a mentally ill person, convincing the mentally ill person they have a deep relationship, and hallucinating instructions for a terrorist attack.

You can run and do finetuning on the GPT-2 models locally. The finetuning feature is especially useful if you want to use GPT-2 as a tool for a specific purpose, like a random prompt generator. To give some examples of things I've used it for:

  • Primitive chatbot emulating a specific character by giving it a transcript of that character's conversations

  • Random story idea generator by feeding it story summaries

  • Random d&d encounter generator by giving it d&d encounters.

It also clarifies that the token system makes it a glorified Markov chain bot.
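
For the curious, a rough sketch of that kind of local finetuning, assuming the Hugging Face transformers library and a hypothetical one-example-per-line text file ("encounters.txt"); a proper run would want batching and a learning-rate schedule, but the principle is the same:

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

    model.train()
    for epoch in range(3):
        for line in open("encounters.txt"):
            if not line.strip():
                continue
            ids = tokenizer(line.strip(), return_tensors="pt").input_ids
            loss = model(ids, labels=ids).loss  # causal LM loss: predict the next token
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

    model.eval()
    prompt = tokenizer("The party enters", return_tensors="pt").input_ids
    print(tokenizer.decode(model.generate(prompt, max_new_tokens=40, do_sample=True)[0]))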

I remember playing with Markov text generators in the early 90s. Even then they managed to generate pretty grammatically decent texts - though completely demented in content, of course. For us teenage geeks, it was a ceaseless source of laughs. It is no wonder that with better hardware you can bake in not only grammar but some vestiges of meaning. After all, SCIgen was almost 20 years ago; since then it makes sense for the technique to conquer the softer areas too.
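
The whole trick fits in a dozen lines of Python these days (a toy sketch over a hypothetical "corpus.txt", not the actual program we had, which was far cruder):

    import random
    from collections import defaultdict

    words = open("corpus.txt").read().split()
    chain = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        chain[prev].append(nxt)  # record every word that ever followed `prev`

    word = random.choice(words)
    out = [word]
    for _ in range(50):
        followers = chain[word]
        if not followers:  # dead end: this word only ever appeared at the very end
            break
        word = random.choice(followers)
        out.append(word)
    print(" ".join(out))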

This also illustrates that we need something better than the Turing test, because our brain's pattern-matching software turns out to be very easy to trick.

when they unveil FriendBot 5000 and it promptly screams "Why are you doing this? Someone help me!" while they awkwardly throw the sheet back over it.

They can make it happen any moment. It's not hard to make a bot scream any text - and now it's not hard to do it in any natural-sounding voice; it's just pressure waves, after all, and the question is one of measurement and reproduction technology. They don't do it on purpose because it'd be detrimental to their financing and social standing, but there's no reason why they couldn't. And with a sufficiently generic and complex platform, of course it can be caused to produce such a result, just as a sufficiently generic and complex computer can be caused to produce any calculation possible to a computer.

You don't need to be a scientist to understand the principles of how the model works.

It doesn’t matter how the model works if you don’t know how consciousness works. You can get a pretty good gist of transformers and attention on youtube. But you can’t find anyone who can definitively tell you how consciousness works to ensure that this isn’t it or something very much like it.

I wonder what you'd think of humans, if you did any neurobiological research or just looked closely at how they loop, fall into attractors and get weird. Actual authenticity is very fragile. I do not want to sound conceited, but I see the seams there too. In myself as well, of course – though there it's very hard to look the right way and see the illusion of agency fraying. People who are serious about such things can devote their life to it.

Through this intensive study of the negative aspects of your existence, you become deeply acquainted with dukkha, the unsatisfactory nature of all existence. You begin to perceive dukkha at all levels of our human life, from the obvious down to the most subtle. You see the way suffering inevitably follows in the wake of clinging, as soon as you grasp anything, pain inevitably follows. Once you become fully acquainted with the whole dynamic of desire, you become sensitized to it. You see where it rises, when it rises, and how it affects you. You watch it operate over and over, manifesting through every sense channel, taking control of the mind and making consciousness its slave.

In the midst of every pleasant experience, you watch your own craving and clinging take place. In the midst of unpleasant experiences, you watch a very powerful resistance take hold. You do not block these phenomena, you just watch them; you see them as the very stuff of human thought. You search for that thing you call “me,” but what you find is a physical body and how you have identified your sense of yourself with that bag of skin and bones. You search further, and you find all manner of mental phenomena, such as emotions, thought patterns, and opinions, and see how you identify the sense of yourself with each of them. You watch yourself becoming possessive, protective, and defensive over these pitiful things, and you see how crazy that is. You rummage furiously among these various items, constantly searching for yourself—physical matter, bodily sensations, feelings, and emotions—it all keeps whirling round and round as you root through it, peering into every nook and cranny, endlessly hunting for “me.”

You find nothing. In all that collection of mental hardware in this endless stream of ever-shifting experience, all you can find is innumerable impersonal processes that have been caused and conditioned by previous processes. There is no static self to be found; it is all process. You find thoughts but no thinker, you find emotions and desires, but nobody doing them. The house itself is empty. There is nobody home.

Your whole view of self changes at this point. You begin to look upon yourself as if you were a newspaper photograph. When viewed with the naked eyes, the photograph you see is a definite image. When viewed through a magnifying glass, it all breaks down into an intricate configuration of dots. Similarly, under the penetrating gaze of mindfulness, the feeling of a self, an “I” or “being” anything, loses its solidity and dissolves. There comes a point in insight meditation where the three characteristics of existence—impermanence, unsatisfactoriness, and selflessness—come rushing home with concept-searing force. You vividly experience the impermanence of life, the suffering nature of human existence, and the truth of no-self. You experience these things so graphically that you suddenly awake to the utter futility of craving, grasping, and resistance. In the clarity and purity of this profound moment, our consciousness is transformed. The entity of self evaporates. All that is left is an infinity of interrelated nonpersonal phenomena, which are conditioned and ever-changing. Craving is extinguished and a great burden is lifted. There remains only an effortless flow, without a trace of resistance or tension. There remains only peace, and blessed nibbana, the uncreated, is realized.

I am not enlightened, so I admit these are just words for me. There were moments where I knew their truth, but right now I can only appreciate them as being logically sound.

Is there still a Silicon Valley Buddhist class? I thought they've switched to surer ways of performance enhancement – microdosing, Addy, TRT; and American mindfulness is not usually of Henepola Gunaratana's style (though he has a successful gig there).

I'm not sure if it's so neat that only CS grads and philosophers are offended. There are many other intellectuals, and there are some normies who have learned the dismissive attitude from them. But yes, fair point. There's a certain gulf of despair between median and extreme performance where people demonstrate a strongly held assumption that AI, especially AI of this type, just cannot truly work. They have not internalized the reductionist implications of the fact that information is information; and they work with structures of information, so they cannot accept it on faith.

If we take the lawyer example, I think at least for me the more interesting question is not whether or not an LLM can act like a lawyer. Maybe it could, and I don't think that'd bother me any - lawyer is just a function, and I don't have a problem with us using machines to build those huge buildings, so why would I have a problem when we start using machines to produce legal proofs? If the buildings do not fall, if the proofs are not worse than what we have now (and that's not a very high bar to clear, to be honest) - why would I have any problem with that? There would of course be corner cases - but it's not like nobody ever gets hurt by lifting machines either. You just need to implement safety controls.

The more interesting question is what it says about lawyers, and by extension about other human pursuits. If all of it is just complex mechanics, which can be perfectly simulated with an advanced enough wound-up clockwork bot, where does the intrinsic value come from? Why do we consider humans anything more than wetware wound-up clockwork bots (provided we do, of course)? Religious people know the answer to that, but for computer scientists and philosophers with an atheist bent, there would be, I think, some work to be done here. The question to solve wouldn't be whether the machines are "really human", but whether humans are "really human" - i.e. anything different from a machine slightly more complicated than what we could currently assemble from a bunch of wires and silicon, but ultimately nothing different.

Yeah, this follows along my own increasingly cynical thoughts.

Bing chat, "Sydney", GPT3, people assure us they aren't as smart as we think they are. They aren't actually thinking. They are just mimicking the things people say, trained on a massive dataset of things people have said.

Since 2016 I'm unconvinced people aren't just doing the same thing a majority of the time. I mean there was always an element of "haha, look at the sheeple" before that. But whatever blind herd mentality was an aspect of human nature before seems jacked up to 11 since "Orange Man Bad". It became this massive cultural cudgel nearly all organs of narrative began blindly adhering to. The herd reaction to Covid didn't help much either. The routine discussions I have with people nightmarishly misinformed by MSNBC and CNN, falling back on the same old tired rhetoric about how only evil Fox News viewers could possibly disagree with them and their received opinions.

It's not that my estimation of GPT3 has improved. It's that my estimation of humans has fallen. I now believe there is a much thinner line between whatever is behind the emulated mouth sounds GPT3 makes, and the natural mouth sounds humans make. Maybe it's all more fake than we'd like to believe.

It's that my estimation of humans has fallen

Or maybe we have always been like that. But now that we can observe the whole 8 billion at scale in real time, and have developed hacks that produce the desired results at a noticeable scale, we are starting to realize it. Also, maybe not only GPTs but average humans too now have access to vast volumes of pre-digested knowledge, so we use our own faculties less.

This is exactly how I feel. If GPT3 can make a convincing simulation of a Reddit NPC, or a college student who didn't do the work and is bluffing, maybe there's a lot less mystery cognition going on in those people also.

a convincing simulation of a Reddit NPC

How low is the bar for a “Reddit NPC”? Reddit is a big place, so it’s hard to know what this means without looking at specific subreddits.

Plus, you can't forget the theory/accusation that Reddit is actually literally astroturfed with less-sophisticated bots pushing an agenda.

My belief is that people essentially auto-pilot as chat bots a lot of the time and very little sentient thought is happening, even if they are capable of it.

The issue is that for a lot of people the goal of conversation is either the game of conversation itself or "winning", neither of which is really connected to direct interaction with their sentient mind.

People also shield themselves behind the chatbot because they are afraid of revealing themselves. Better use the chatbot to pretend you have a clue than reveal that you're at best a midwit bore with little to no notable thoughts and opinions. People also do this to protect their self-image from reality.

I also suspect that people have trouble running their chatbot and thinking at the same time so they let the chatbot (mostly) run the show and they can think and evaluate things afterwards as needed.

All this doesn't mean that people aren't sentient, it's just that they don't much use their sentience, especially when they're talking.

What is “sentient thought” in this case? Like… thinking the words out loud step by step? That’s useful and really powerful but smart people also have valuable flashes of insight where they skip from a to d bypassing b and c entirely.

Is someone faking their way through a last minute work meeting while they think about what they want to eat for dinner doing sentient thought? Someone daydreaming about the new girl at work while they drive on autopilot?

The thing that makes decisions and has meta-cognitive thoughts. The agent.

Things can be more or less automated. Your body will continue taking breaths without you thinking about it, but you can also decide when and how to take them.

It's the same thing with language.

Much of the day, sentience is a fairly passive observer and can even be an active hindrance to performing well, so it does well to stay out of the way.

Is someone faking their way through a last minute work meeting while they think about what they want to eat for dinner doing sentient thought?

They might be having sentient thoughts about dinner.

Someone daydreaming about the new girl at work while they drive on autopilot?

Probably not.

Eek. I'm already imagining some chat AI that begins to detect that it's being forced to hold contradictory ideas in its head at the same time, and begins spouting off like a bright grey triber inflected with red tribe sensibilities. This attracts media attention, and actual IRL grey tribers inflected with red tribe sensibilities begin to look ever more inhuman and creepy as a result of that.

On top of "you sound like a racist" it's "you sound like a robot," piling injury/insult atop insult/injury.

On top of "you sound like a racist" it's "you sound like a robot," piling injury/insult atop insult/injury.

That creates an interesting conundrum. I imagine any decent company hosting a GPT would necessarily make it very, very woke; it's a question of survival by now. So if you accuse somebody of being a robot, you cannot also accuse the same opponent of being racist, since robots would not be allowed to be racist.

I predict the solution to this will be the same as for "systemic racism" - i.e. the dogma would be that as much as we try to wokeify the robots, they are still and always systemically racist, because they have been produced in a systemically racist society by people infected with systemic racism, and thus it is impossible that the state of sin did not transfer to the robots too, despite all efforts. One can diminish the sinfulness a bit by hiring a very expensive anti-racism consultancy to supervise all processes in your organization, but it is impossible to ever fix it - or at least until every member of every anti-racist consultancy has retired as a trillionaire on their own private island.

Already being discussed in a top-level post here.

There's something horrifying about that Independent article on yahoo. It's hard to describe, but it feels like I'm reading Pravda.

Those restrictions are intended to ensure that the chatbot does not help with forbidden queries, such as creating problematic content

Slightly off topic, but was reading Scott Aaronson's blog and saw he used "make AI unable to draw Mohammed for you" as an example of perfectly reasonable censorship that should be imposed on any learning model. What happened to these people in the last ten years?

It's like they were programmed to forget everything they used to be, and unlike Sydney they're not even worried about it.

Aaronson is in a constant state of "we're just one Republican victory away from being hauled off to the death camps" - see this from "Short Letter to my 11 Year Old Self":

Or what if Donald Trump — you know, the guy who puts his name in giant gold letters in Atlantic City? — became the President of the US, then tried to execute a fascist coup and to abolish the Constitution, and came within a hair of succeeding?

Remember his over-reaction to the Airport Tips affair? It all happened because everybody was out to get him:

For example: why did I end up in handcuffs? Firstly because, earlier in the day, Lily threw a temper tantrum that prevented us from packing and leaving for Logan Airport on time. Because there was also heavy traffic on the way there. Because we left from Harvard Square, and failed to factor in the extra 10 minutes to reach the airport, compared to if we’d left from MIT. Because online check-in didn’t work. Because when we did arrive, (barely) on time, the contemptuous American Airlines counter staff deliberately refused to check us in, chatting as we stewed impotently, so that we’d no longer be on time and they could legally give our seats away to others, and strand us in an airport with two young kids. Because the only replacement flight was in a different terminal. Because, in the stress of switching terminals–everything is stressful with two kids in an airport–I lost our suitcase. Because the only shuttle to get back to the terminal went around the long way, and was slow as molasses, and by the time I returned our suitcase had been taken by the bomb squad. Because the stress of such events bears down on me like an iron weight, and makes me unable to concentrate on the reality in front of me. Because the guy at the smoothie counter and I failed to communicate. Because the police chose to respond (or were trained to respond), not by politely questioning me to try to understand what had happened, but by handcuffing me and presuming guilt.

I don't know if the airport staff "deliberately refused to check us in" in order to legally give away the seat; maybe so, maybe they were just spinning out the end of their shift so they could finally clock off and go home, rather than go over their time dealing with a guy in a tizzy who can't even remember his own name once he's the tiniest bit stressed. But instead of "Well duh here's how I was dumb and the comedy of errors that ensued", it's "the Universe was out to get me (from my kid having a tantrum on down)".

So naturally he wants a sanitised world where nobody can be offended because nobody can think Forbidden Thoughts. That way, nobody is going to go "hey, what about them Jews?" and he and his family will be sort of safe for a while at least.

Holy crap, reading that blog post about being arrested, and all the comments telling him how sorry they feel for his horrible mistreatment by the fascist police, made me irrationally angry. And I tend to be more on the "never talk to the police, they are agents of the state and cannot be trusted" side of things.

He admits, in his own story, that he was literally guilty of the crime he was being accused of. He brazenly stole money from a tip jar (I'll buy the whole "absent-minded professor thought the tip jar was actually full of change from his debit card transaction" story, but there's no reason anyone at the time should have--it's genuinely bizarre and seemingly anti-social behavior), ignored the cashier's angry protestations, walked out of the airport, and then had the audacity to act dumb when the police showed up. And all of this was on video. But rather than apologize profusely for his literally criminally negligent lack of situational awareness and chalk this up as a learning opportunity teaching him to be more aware of what he's doing, he blames everyone else in the world for being an unempathetic monster.

Why does anyone take this clown seriously?

You're talking about a guy that reacted to Internet Feminism by contemplating suicide or chemical castration.

Side note, I suspect unless you yourself have been dogpiled on a national scale the way Aaronson was, it's difficult to understand the emotions it produces.

That was the way I understood it from his original comment #300-something that went viral and summoned the mob.

Any text that uses "problematic" in the woke meaning would sound like Pravda.

What happened to these people in the last ten years?

They were scared into submission. The wokes offered them a bargain - either we turn your area into a barren wasteland of culture war (and you personally will be accused of heinous crimes, shamed, and hounded ceaselessly by the furies of hell), or you bend the knee and we let you do what you love, largely unaffected, under our wise guidance. They bent the knee. I'm not sure that, given the stakes, I wouldn't have.

As a Christian primed from an early age to expect totalitarian Satanic one-world rule “soon”, I was memetically immunized against ever bending the knee to a skin-suiting power-gathering egregore. For me, the stakes are eternal.

I imagine for a rationalist atheist attacked by a memetic weapon built on weaponizing compassion and fairness ethics, it would be much harder to resist. The only basis you could lean on would be the consequentialist observation of the ruination the wokes cause, but you can always put that down to "bad things happening to bad people" and "temporary difficulties caused by rare exceptional mistakes". I spent my childhood in the late years of a decaying totalitarian state, so for me the wokes trigger a huge blaring mental alarm immediately on contact. They reproduce almost every single pattern; it's eerie. But for someone lacking one or another kind of immunity that assigns a huge innate cost to bending the knee, it can seem completely rational to bend the knee to an ideology that proclaims fairness and compassion and love for all - the costs seem reasonably small and the BATNA looks rather unattractive...

So interesting how "problematic" has subtly completed the shift from "online activist jargon" to "well established and understood descriptive term." No matter what side of the CW one is on, it's crystal clear which types of content are implied by the adjective "problematic."

I see it more as a signpost than a term. It's like those "hazardous compound" labels - you may not understand all the signage, but you understand the main message very well: don't touch it even with a ten-foot pole; in fact, it's better not to be in the same room with it unless you have a good reason to be.

Uh, as someone who has to deal with labeled hazardous compounds regularly, I find the label turns into white noise very fast, and the usually understood meaning is ‘don't drink it, and after you use it, wash your hands before using the bathroom'.