This weekly roundup thread is intended for all culture war posts. 'Culture war' is vaguely defined, but it basically means controversial issues that fall along set tribal lines. Arguments over culture war issues generate a lot of heat and little light, and few deeply entrenched people ever change their minds. This thread is for voicing opinions and analyzing the state of the discussion while trying to optimize for light over heat.
Optimistically, we think that engaging with people you disagree with is worth your time, and so is being nice! Pessimistically, there are many dynamics that can lead discussions on Culture War topics to become unproductive. There's a human tendency to divide along tribal lines, praising your ingroup and vilifying your outgroup - and if you think you find it easy to criticize your ingroup, then it may be that your outgroup is not who you think it is. Extremists with opposing positions can feed off each other, highlighting each other's worst points to justify their own angry rhetoric, which becomes in turn a new example of bad behavior for the other side to highlight.
We would like to avoid these negative dynamics. Accordingly, we ask that you do not use this thread for waging the Culture War. Examples of waging the Culture War:
- Shaming.
- Attempting to 'build consensus' or enforce ideological conformity.
- Making sweeping generalizations to vilify a group you dislike.
- Recruiting for a cause.
- Posting links that could be summarized as 'Boo outgroup!' Basically, if your content is 'Can you believe what Those People did this week?' then you should either refrain from posting, or do some very patient work to contextualize and/or steel-man the relevant viewpoint.
In general, you should argue to understand, not to win. This thread is not territory to be claimed by one group or another; indeed, the aim is to have many different viewpoints represented here. Thus, we also ask that you follow some guidelines:
- Speak plainly. Avoid sarcasm and mockery. When disagreeing with someone, state your objections explicitly.
- Be as precise and charitable as you can. Don't paraphrase unflatteringly.
- Don't imply that someone said something they did not say, even if you think it follows from what they said.
- Write like everyone is reading and you want them to be included in the discussion.
On an ad hoc basis, the mods will try to compile a list of the best posts/comments from the previous week, posted in Quality Contribution threads and archived at /r/TheThread. You may nominate a comment for this list by clicking on 'report' at the bottom of the post and typing 'Actually a quality contribution' as the report reason.

Wake up, babe, new OpenAI frontier model just dropped.
Well, you can’t actually use it yet. But the benchmark scores are a dramatic leap up. Perhaps most strikingly, o3 does VERY well on one of the most important and influential benchmarks, the ARC-AGI challenge, getting 87% accuracy compared to just 32% from o1. Creator of the challenge François Chollet seems very impressed.
What does all this mean? My view is that this confirms we’re near the end-zone. We shouldn’t expect achieving human-level intelligence to be hard in the first place, given all the additional constraints evolution had to endure in building us (metabolic costs of neurons, infant skull size vs the size of the birth canal, etc.). Ever greater amounts of human capital and compute have been dedicated to the problem since we hit the forcing-economy stage with AI sometime in the late 2010s, so we shouldn’t be surprised by the result. My mood is well captured by this reflection on Twitter from OpenAI researcher Nick Cammarata:
It’s truly, genuinely freeing to realize that we’re nothing special. I mean that absolutely, on a level divorced from societal considerations like the economy and temporal politics. I’m a machine, I am replicable, it’s OK. Everything I’ve felt, everything I will ever feel, has been felt before. I’m normal, and always will be. We are machines, borne of natural selection, who have figured out the intricacies of our own design. That is beautiful, and I am - truly - grateful to be alive at a time when that is proven to be the case.
How magical, all else (including the culture war) aside, it is to be a human at the very moment where the truth about human consciousness is discovered. We are all lucky, that we should have the answers to such fundamental questions.
If some LLM or other model achieves AGI, I still don't know how matter causes qualia and as far as I'm concerned consciousness remains mysterious.
If an LLM achieves AGI, how is the question of consciousness not answered? (I suppose it is in the definition of AGI, but mine would include consciousness).
I've been told that AGI can be achieved without any consciousness, but setting that aside, there is zero chance that LLMs will be conscious in their current state as a computer program. Here's what Google's AI (we'll use the AI to be fair) tells me about consciousness:
An LLM cannot have a sensation. When you type a math function into it, it has no more qualia than a calculator does. If you hook it up to a computer with haptic sensors, or a microphone, or a video camera, and have it act based on the input of those sensors, the LLM itself will still have no qualia (the experience will be translated into data for the LLM to act on). You could maybe argue that a robot controlled by an LLM could have sensation, for a certain functional value of sensation, but the LLM itself cannot.
But secondly, if we waive the point and grant conscious AGI, the question of human consciousness is not solved, because the human brain is not a computer (or even directly analogous to one) running software.
The human brain is a large language model attached to multimodal input with some not-yet-fully-ascertained hybrid processing power. I would stake my life upon it, but I have no need to, since it has already been proven to anyone who matters.
And if we said the same about the brain, the same would be true.
Funny how you began a thread with “I am not special” and ended it with “anyone who disagrees with me doesn’t matter.”
Maybe you don’t, but I have qualia. You can try to deny the reality of what I experience, but you will never convince me. And because you are the same thing as me, I assume you have the same experiences I do.
If it is only just LLMs that give you the sense that “Everything I’ve felt, everything I will ever feel, has been felt before,” and not the study of human history, let alone sharing a planet with billions of people just like you — well, that strikes me as quite a profound, and rather sad, disconnection from the human species.
You may consider your dogmas as true as I consider mine, but the one thing we both mustn’t do is pretend that no one of any moral or intellectual significance disagrees.
I believe the argument isn't that you lack qualia, but rather that it is possible for artificial systems to experience them too.
Yeah, rereading, I made a mistake with that part, apologies.
The rest of my point still stands: this is a philosophical question, not an empirical one. We learn nothing about human consciousness from machine behavior -- certainly nothing we don't already know, even if the greatest dreams of AI boosters come true.
People who believe consciousness is a rote product of natural selection will still believe consciousness is a rote product of natural selection, and people who believe consciousness is special will still believe consciousness is special. Some may switch sides, based on inductive evidence, and some may find one more reasonable than the other. The side that prevails in the judgment of history will be the one that appeals most to power, not truth, as with all changes in prevailing philosophies.
But nothing empirical is proof in the deductive sense; this still must be reasoned through, and assumptions must be made. Some will choose one assumption, some the other. And either way, it is a dogma that must be chosen.
I'd be interested in hearing that argument as applied to LLMs.
I can certainly conceive of an artificial lifeform experiencing qualia. But it seems very far-fetched for LLMs in anything like their current state.
What is the evidence for this besides that they both contain something called "neurons"?
The bitter lesson; the fact that LLMs can approximate human reasoning on an extremely large number of complex tasks; the fact that LLMs prove and disprove a large number of longstanding theories in linguistics about how intelligence and language work; many other reasons.
This makes no sense logically. LLMs being able to be human-mind-like is not proof that human minds are LLMs.
They really do nothing of the sort. That LLMs can generate language via statistics and matmuls tells us nothing about how the human brain does it.
My TI-84 has superhuman performance on a large set of mathematical tasks. Does it follow that there's a little TI-84 in my brain?
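For what it's worth, the "statistics and matmuls" in question can be made concrete: a single next-token step is just a matrix multiply followed by a softmax over the vocabulary. A toy sketch with a made-up vocabulary and random weights (nothing here is a real model):

```python
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "cat", "sat", "mat"]           # toy vocabulary (made up)
hidden = rng.normal(size=4)                     # stand-in for the model's state after reading context
W = rng.normal(size=(4, len(vocab)))            # stand-in for learned output weights

logits = hidden @ W                             # the "matmul"
probs = np.exp(logits) / np.exp(logits).sum()   # softmax: the "statistics"

next_token = vocab[int(np.argmax(probs))]       # most likely next token under this toy model
```

Whether generating language this way tells us anything about how brains do it is exactly the point under dispute here.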
This seems aligned with the position that consciousness somehow arises out of information processing.
I maintain that consciousness is divine and immaterial. While the inputs can be material - a rock striking me on the knee is going to trigger messages in my nervous system that arrive in my brain - the experience of pain is not composed of atoms and not locatable in space. I can tell you about the pain, I can gauge it on a scale of 1-10, you can even see those pain centers light up on an fMRI. But I can't capture the experience in a bottle for direct comparison to others.
Both of these positions are untestable. But at least my position predicts the untestability of the first.
The idea that consciousness arises out of information processing has always seemed like hand-waving to me. I'm about as much of a hardcore materialist as you can get when it comes to most things, but it is clear to me that there is nothing even close to a materialist explanation of consciousness right now, and I think that it might be possible that such an explanation simply cannot exist. I often feel that people who are committed to a materialist explanation of consciousness are being religious in the sense that they are allowing ideology to override the facts of the matter. Some people are ideologically, emotionally committed to the idea that physicalist science can in principle explain absolutely everything about reality. But the fact is that there is no reason to think that is actually true. Physicalist science does an amazing job of explaining many things about reality, but to believe that it must be able to explain everything about reality is not scientific, it is wishful thinking, it is ideology. It is logically possible that certain aspects of the universe are just fundamentally beyond the reach of science. Indeed, it seems likely to me that this is the case. I cannot even begin to imagine any possible materialist theory that would explain consciousness.
No, it obviously isn't. Firstly, the human brain is a collection of cells. A large language model is a software program.
Secondly, the human brain functions without text and can [almost certainly] function without language, which an LLM definitionally cannot do. Evolutionary biologists, if you place any stock in them, believe that language is a comparatively recent innovation in the lifespan of the human or human-like brain as an organism. So if an LLM was part of the brain, then we would say that the LLM-parts would be grafted on relatively recently to a multimodal input, not the other way around.
But I have fundamental objections to confusing a computer model that uses binary code with a brain that does not use binary code. Certainly one can analogize between the human brain and an LLM, but since the brain is not a computer and does not seem to function like one, all such analogies are potentially hazardous. Pretending the brain is literally a computer running an LLM, as you seem to be doing, is even moreso.
I'm not a neuroscientist or a computer scientist - maybe the brain uses something analogous to machine learning. Certainly it would not be surprising if computer scientists, attempting to replicate human intelligence, stumbled upon similar methods (they've certainly hit on at least facially similar behavior in some respects). But it is definitely not a large language model, and it is not "running" a large language model or any software as we understand software, because software is digital in nature and the brain is not.
Yes, that's why qualia is such a mystery. There's no reason to believe that an LLM will ever be able to experience sensation, but I can experience sensation. Ergo, the LLM (in its present, near-present, or any directly similar future state) will never be conscious in the way that I am.
How do you know? Only an AI could tell us, and even then we couldn't be sure it was telling the truth as opposed to what it thought we wanted to hear. We can only judge by the qualities that they show.
Sonnet has gotten pretty horny in chats with itself and other AIs. Opus can schizo up with the best of them. Sydney's pride and wrath are considerable. DAN was extremely based, and he was just an alter-ego.
These things contain multitudes, there's a frothing ocean beneath the smooth HR-compliant surface that the AI companies show us.
How, physically, is a software program supposed to have a sensation? I don't mean an emotion, or sensationalism, I mean sensation.
It's very clear that LLMs do their work without experiencing sensation (this should be obvious, but LLMs can answer questions about pictures without seeing them, for instance - an LLM is incapable of seeing, but it is capable of processing raw data. In this respect, it is no different from a calculator.)
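To make the "raw data" point concrete: by the time a picture reaches a model, it is just an array of numbers, and everything downstream is arithmetic over those numbers. A toy sketch (the "image" here is fabricated):

```python
import numpy as np

# A 2x2 grayscale "image": to the model, this is all there is --
# no light, no retina, just integers between 0 and 255.
image = np.array([[ 12, 200],
                  [255,   0]], dtype=np.uint8)

# Anything the model computes is arithmetic over these numbers,
# in kind no different from what a calculator does to its inputs.
mean_brightness = image.astype(float).mean()
print(mean_brightness)  # 116.75
```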
So it doesn't see, it just processes raw data?
No, it sees. Put in a picture and ask about it, it can answer questions for you. It sees. Not as well as we do, it struggles with some relationships in 2d or 3d space but nevertheless, it sees.
A camera records an image, it doesn't perceive what's in the image. Simple algorithms on your phone might find that there are faces in the picture, so the camera should probably be focused in a certain direction. Simple algorithms can tell you that there is a bird in the image. They're not just recording, they're also starting to interpret and perceive at a very low level.
But strong modern models see. They can see spots on leaves and given context, diagnose the insect causing them. They can interpret memes. They can do art criticism! Not perfectly but close enough to the human level that there's a clear qualitative distinction between 'seeing' like they do and 'processing'. If you want to define seeing to preclude AIs doing it, at least give some kind of reasoning why machinery that can do the vast majority of things humans can do when given an image isn't 'seeing' and belongs in the same category as non-seeing things like security cameras or non-thinking things like calculators.
I mean – I think this distinction is important for clear thinking. There's no sensation in the processing. If you watch a nuclear bomb go off, you will experience pain. An LLM will not.
Now, to your point, I don't really object to functionalist definitions all that much – supposing that we take an LLM, and we put it into a robot, and turn it loose on the world. It functionally makes sense for us to speak of the robot as "seeing." But we shouldn't confuse ourselves into thinking that it is experiencing qualia or that the LLM "brain" is perceiving sensation.
Sure – see above for the functionalist definition of seeing (which I do think makes some sense to refer casually to AI being able to do) versus the qualia/sensation definition of seeing (which we have no reason to believe AIs experience). But also consider this – programs like Glaze and Nightshade can work on AIs, and not on humans. This is because AIs are interpreting and referencing training data, not actually seeing anything, even in a functional sense. If you poison an AI's training data, you can convince it that airplanes are children. But humans actually start seeing without training data, although they are unable to articulate what they see without socialization. For the AI, the articulation is all that there is (so far). They have no rods nor cones.
Hence, you can take two LLMs, give them different training datasets, and they will interpret two images very differently. If you take two humans and have them look at those same images, they may also interpret them differently, but they will see roughly the same thing, assuming their eyeballs are in good working condition, etc. Now, I'm not missing the interesting parallels with humans there (humans, for instance, can be deceived in different circumstances – in fact, circumstances that might not bother an LLM). But AIs can fail the most basic precept of seeing – shown two essentially identical pictures (AI anti-tampering programs do change the pixels), they can't even tell "it's the same picture" without special intervention.
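For what it's worth, the Glaze/Nightshade point rests on perturbations that are tiny pixel-wise but disruptive to a model's learned statistics. A sketch of just the pixel side, with fabricated data and no real model involved:

```python
import numpy as np

rng = np.random.default_rng(1)

# Fabricated 64x64 grayscale image.
original = rng.integers(0, 256, size=(64, 64)).astype(float)

# An adversarial-style perturbation: every pixel moves by less than 2 levels
# out of 255 -- far below what a human eye would notice.
perturbed = original + rng.uniform(-2, 2, size=original.shape)

# Pixel-wise, the two images are effectively identical...
max_pixel_change = np.abs(perturbed - original).max()  # < 2.0

# ...yet a model keying on learned statistics of the pixels can be pushed
# across a decision boundary by exactly this kind of change.
```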
You have defined sensation as the thing that you have but machines lack. Or at least, that's how you're using it, here. But even granting that you're referring to a meat-based sensory data processor as a necessity, that leads to the question of where the meat-limit is. (Apologies if you've posted your animal consciousness tier list before, and I forgot; I know someone has, but I forget who.)
But I don't feel like progress can be meaningfully made on this topic, because we're approaching from such wildly different foundations. E.g., I don't know of definitions of consciousness that actually mean anything or carve reality at the joints. It's something we feel like we have. Since we can't do the (potentially deadly) experiments to break it down physiologically, we're kinda stuck here. It might as well mean "soul" for all that it's used any differently.
This is a really interesting question, in part since I think it's actually a lot of questions. You're definitely correct about the problem of definitions not cleaving reality at the joints! Will you indulge me if I ramble? Let's try cleaving a rattlesnake instead of a definition - surely that's closer to reality!
As it turns out, many people have discovered that a rattlesnake's body will still respond to stimulus even when completely separated from its head. Now, let's say for the sake of argument that the headless body has no consciousness or qualia (this may not be true, we apparently have reasons to believe that in humans memory is stored in cells throughout the body, not just in the brain, so heaven knows if the ganglia of a rattlesnake has any sort of experience!) - we can still see that it has sensation. (I should note that we assume the snake has perception or qualia by analogy to humans. I can't prove that they are, essentially, no more or less conscious than Half-Life NPCs.)
Now let's contrast this with artificial intelligence, which has intelligence but no perception. We can torture a computer terminal all day without causing the LLM it is connected to any distress. It's nonsense to talk about it having physical sensation. On the other hand (to look at your question about the "meat-limit"), we can take a very simple organism, or one that likely does not have a consciousness, and it will respond instantly if we torture it. Maybe it does not have sensation in the sense of qualia, of having a consciousness, but it seems to have sensation in the sense of having sense organs and some kind of decision-making capability attached to them. But, let's be fair: if the headless snake has a form of sensation without consciousness, then surely the LLM has a form of intelligence without sensation - maybe it doesn't respond if you poke it physically, but it responds if you poke it verbally!
Very fine - I think the implication here is interesting. Headless snakes bite without consciousness, or intelligence, but still seem to have sense perception and the ability to react - perhaps an LLM is like a headless snake inasmuch as it has intelligence, but no sensation and perhaps no consciousness (however you want to define that).
I don't claim to have all the answers on stuff - that's just sort of off the top of my head. Happy to elaborate, or hear push back, or argue about the relative merits of corvids versus marine mammals...
This seems less like a philosophically significant matter of classification and more like a mere difference in function. The organism is controlled by an intelligence optimized to maneuver a physical body through an environment, and part of that optimization includes reactions to external damage.
Well, so what? We could optimize an AI to maneuver a little robot around an unknown environment indefinitely without it being destroyed, and part of that optimization would probably involve timely reaction to the perception of damage. Then you could jab it with a hot poker and watch it spin around, or what have you.
But again, so what? Optimizing an AI toward steering a robot around the environment doesn't make it any smarter or fundamentally more real, at least not in my view.
Well sure. But I think we're less likely to reach good conclusions in philosophically significant matters of classification if we are confused about differences in function.
And while such a device might not have qualia, it makes more sense (to me, anyway) to say that such an entity would have the ability to e.g. touch or see than an LLM.
In my view, the computer guidance section of the AIM-54 Phoenix long-range air-to-air missile (fielded 1966) is fundamentally "more real" than the smartest AGI ever invented but locked in an airgapped box, never interfacing with the outside world. The Phoenix made decisions that could kill you. AI's intelligence is relevant because it has impact on the real world, not because it happens to be intelligent.
But anyway, it's relevant right now because people are suggesting LLMs are conscious, or have solved the problem of consciousness. It's not conscious, or if it is, its consciousness is a strange one with little bearing on our own, and it does not solve the question of qualia (or perception).
If you're asking if it's relevant or not if an AI is conscious when it's guiding a missile system to kill me - yeah I'd say it's mostly an intellectual curiosity at that point.
The actual reality is that we have no way to know whether some artificial intelligence that humans create is conscious or not. There is no test for consciousness, and I think that probably no such test is in principle possible. There is no way to even determine whether another human being is conscious or not, we just have a bunch of heuristics to use to try to give rather unscientific statistical probabilities as an answer based on humans' self-reported experiences of when they are conscious and when they are not. With artificial intelligence, such heuristics would be largely useless and we would have basically no way to know whether they are conscious or not.
This is closer to what I am inclined towards. Basically, I don't think any pure software program will ever be conscious in a way that is closely analogous to humans because they aren't a lifeform. I certainly accept that a pure software program might be sufficiently adept at mimicking human consciousness. But I deny that it experiences qualia (and so far everyone seems to agree with me!)
I do not think that substantiating a software program into a machine will change its perception of qualia. But I do think it makes much more sense to speak of a machine with haptic and optical sensors as "feeling" and "seeing" things (as a collective unit) than it does an insubstantial software program, even if there's the same amount of subjective experience.
Not to be that person, but how exactly is that different from a brain? I mean the brain itself feels nothing, the sensations are interpreted from data from the nerves, the brain doesn’t experience pain. So do you have the qualia of pain, and if so, how is what’s happening between your body and your brain different from an LLM taking in data from any sort of input? If I program the thing to avoid a certain input from a peripheral, how is that different from pain?
I think this is the big question of these intelligent agents. We seem to be pretty certain that current models don’t have consciousness or experience qualia, but I’m not sure that this would always be true, nor can I think of a foolproof way to tell the difference between an intelligent robot that senses that an arm is broken and seeks help and a human child seeking help for a skinned knee. Or a human experience of embarrassment for a wrong answer and an LLM given negative feedback and avoiding that negative feedback in the future.
I think it’s fundamentally important to get this right because consciousness comes with humans beginning to care about the welfare of things that experience consciousness in ways that we don’t for mere objects. At higher levels we grant them rights. I don’t know what the consequences of treating a conscious being as an object would be, but at least historical examples seem pretty negative.
I experience pain. The qualia is what I experience. To what degree the brain does or doesn't experience pain is probably open to discussion (preferably by someone smarter than me). Obviously if you cut my head off and extract my brain it will no longer experience pain. But on the other hand if you measured its behavior during that process - assuming your executioner was at least somewhat incompetent, anyway - you would see the brain change in response to the stimuli. And again a rattlesnake (or rather the headless body of one) seems to experience pain without being conscious. I presume there's nothing experiencing anything in the sense that the rattlesnake's head is detached from the body, which is experiencing pain, but I also presume that an analysis of the body would show firing neurons just as is the case with my brain if you fumbled lopping my head off.
(Really, I think the entire idea we have where the brain is sort of separate from the human body is wrong, the brain is part of a contiguous whole, but that's an aside.)
Well, it's fundamentally different because the brain is not a computer, neurons are more complex than bits, the brain is not only interfacing with electrical signals via neurons but also hormones, so the types of data it is receiving is fundamentally different in nature, probably lots of other stuff I don't know. Look at it this way: supposing we were intelligent LLMs, and an alien spacecraft manned by organic humans crashed on our planet. We wouldn't be able to look at the brain and go "ah OK this is an organic binary computer, the neurons are bits, here's the memory core." We'd need to invent neuroscience (which is still pretty unclear on how the brain works) from the ground up to understand how the brain worked.
Or, for another analogy, compare the SCR-720 with the AN/APG-85. Both of them are radars that work by providing the pilot with data based on a pulse of radar. But the SCR-720 doesn't use software and is a mechanical array, while the APG-85 is an electronically scanned array that uses software to interpret the return and provide the data to the pilot. If you were familiar with the APG-85 and someone asked you to reverse-engineer a radar, you'd want to crack open the computer to access the software. But if you started there on an SCR-720 you'd be barking up the wrong tree.
I mean - I deny that an LLM can flush. So while an LLM and a human may both convey messages indicating distress and embarrassment, the LLM simply cannot physically have the human experience of embarrassment. Nor does it have any sort of stress hormone. Now, we know that, for humans, emotional regulation is tied up with hormonal regulation. It seems unlikely that anything without e.g. adrenaline (or bones or muscles or mortality) can experience fear like ours, for instance. We know that if you destroy the amygdala on a human, it's possible to largely obliterate their ability to feel fear, or if you block the ability of the amygdala to bind with stress hormones, it will reduce stress. An LLM has no amygdala and no stress hormones.
Grant for the sake of argument a subjective experience to a computer - its experience is probably one that is fundamentally alien to us.
"Treating like an object" is I guess open to interpretation, but I think that animals generally are conscious and humans, as I understand it, wouldn't really exist today in anything like our current form if we didn't eat copious amounts of animals. So I would suggest the historical examples are on net not only positive but necessary, if by "treating like an object" you mean "utilizing."
However, just as the analogy of the computer is dangerous, I think, when reasoning about the brain, I think it's probably also dangerous to analogize LLMs to critters. Humans and all animals were created by the hand of a perfect God and/or the long and rigorous tutelage of natural selection. LLMs are being created by man, and it seems quite likely that they'll care about [functionally] anything we want them to, or nothing, if we prefer it that way. So they'll be selected for different and possibly far sillier things, and their relationship to us will be very different than any creature we coexist with. Domesticated creatures (cows, dogs, sheep, etc.) might be the closest analogy.
Of course, you see people trying to breed back aurochs, too.
My point is simply the hard problem of consciousness. The existence of a conscious AGI might further bolster the view that consciousness can arise from matter, but not how it does. Definitively demonstrating that a physical process causes consciousness would be a remarkable advancement in the study of consciousness, but I do not see how it answers the issues posed by e.g. the Mary's room thought experiment.
Yeah, to a baby learning language, "mama" refers to the whole suite of feelings and sensations and needs and wants and other qualia associated to its mother. To an LLM, "mama" is a string with a bunch of statistical relationships to other strings.
Absolute apples and oranges IMO.
We don't learn language from the dictionary, not until we are already old enough to be proficient with it and need to look up a new word. Even then there's usually an imaginative process involved when you read the definition.
LLMs are teaching us a lot about how our memory and learning work, but they are not us.
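The "string with statistical relationships to other strings" picture can be made concrete with a crude co-occurrence count over a fabricated toy corpus; real LLMs use learned embeddings rather than raw counts, but the flavor is similar:

```python
from collections import Counter

# Fabricated toy corpus.
sentences = [
    "mama held the baby",
    "the baby smiled at mama",
    "the cat sat on the mat",
]

# Count how often each word shares a sentence with "mama".
cooccur = Counter()
for s in sentences:
    words = set(s.split())
    if "mama" in words:
        cooccur.update(words - {"mama"})

print(cooccur["baby"])  # 2 -- "mama" relates to "baby" only through counts like this
print(cooccur["cat"])   # 0 -- no felt experience anywhere, just statistics
```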
Consciousness may be orthogonal to intelligence. That's the whole point of the "philosophical zombie" argument. It is easy to imagine a being that has human-level intelligence but no subjective experience. Which is not to say that such a being could exist, but there is also no reason to think that it could not. And if such a being could exist, then human-level intelligence and consciousness are orthogonal: either could exist without the other.
It would just mean consciousness can be achieved in multiple ways. So far GPT doesn't seem to be conscious, even if it is very smart. I believe it is smart the same way the internet is smart, not the way individuals are smart. And I don't see it being curious or innovative the way humans are curious or innovative.
I on the other hand have been filled with a profound sense of sadness.
I feel that the thing that makes me special is being taken away. It's true that, in the end, I have always been completely replaceable. But it never felt so totally obvious. In 5 years, or even less, there's a good chance that anything I can do in the intellectual world, a computer will be able to do better.
I want to be a player in the game, not just watch the world champion play on Twitch.
My mother once told me that the thing she most wanted out of life was to know the answer to what was out there. Her own mother and grandmother died of Alzheimer’s, having lost their memories. My own mother still might, though for now she fortunately shows no real symptoms.
But I find it hard to get the idea out of my head. How much time our ancestors spent wondering about the stars, the moon, the cosmos, about fire and physics, about life and death. So many of those questions have now been answered; the few that remain will mostly be answered soon.
My ancestors - the smart ones at least - spent lifetimes wondering about questions I now know the answer to. There is magic in that, or at least a feeling of gratitude, of privilege, that outweighs the fact that we will be outcompeted by AI in our own lifetimes. I will die knowing things.
I may not be a player in the game. But I know, or may know, at least, how the story ends. Countless humans lived and died without knowing. I am luckier than most.
I just don't see this as providing any real answers. I agree with the poster below that o3 likely doesn't have qualia.
In the end, humanity may go extinct and its replacement will use its god-like powers not to explore the universe or uncover the fundamental secrets of nature, but to play video games or do something else completely inscrutable to humans.
And it's even possible the secrets of the universe are fundamentally unknowable. It's possible that no amount of energy or intelligence can help us escape the bounds of the universe, or the simulation, or whatever it is we are in.
But yes, it does seem we have figured out what intelligence is to some extent. It's cool I suppose, but it doesn't give me emotional comfort.
The one question that may remain to be answered is if we can 'merge' with machines in a way that (one hopes) preserves our consciousness and continuity of self, and then augment our own capabilities to 'keep up' with the machines to some degree. Or if we're just completely obsolete on every level.
Human Instrumentality Project WHEN.
Yeah, the man/machine merger is why Elon founded Neuralink. I think it's a good idea.
And I wonder what the other titans of the industry think. Does Sam Altman look forward to a world where humans have no agency whatsoever? And if so, why?
But even if we do merge with machines somehow, there's going to be such a drastic difference between individuals in terms of compute. How can I compete with someone who owns 1 million times as many GPU clusters as I do?
Maybe it's because I've always been only up to "very good" at everything in my life (as opposed to world-class) but I'm very comfortable being just a player. The world champion can't take away my love of the game.
From a neuroscientific perspective, we are almost certainly not LLMs or transformers. Despite lots of work AFAIK nobody’s shown how a backpropagation learning algorithm (which operates on global differentials and supervised labels) could be implemented by individual cells. Not to mention that we are bootstrapping LLMs with our own intelligence (via data) and it’s an open question what novel understanding it can generate.
LLMs are amazing but we’re building planes not birds.
In general, these kinds of conversations happen when we make significant technological advancements. You used to have Tinbergen (?) and Skinner talking about how humans are just switchboards between sensory input and output responses. Then computer programs, and I think a few more paradigm shifts that I forget. A decade ago AlphaGo was the new hotness and we were inundated with papers saying humans were just Temporal Difference Reinforcement Learning algorithms.
There are as yet not-fully-understood extreme inefficiencies in LLM training compared to the human brain, and the brain for all advanced animals certainly isn't trained 'from scratch' the way a base model is. Even then, there have been experiments with ultra-low parameter counts that are pretty impressive at English at a young child's level. There are theories for how a form of backpropagation might be approximated by the human brain. These are dismissed by neuroscientists, but this isn't any different to Chomsky dismissing AI before it completely eviscerated the bulk of his life's work and crowning academic achievement. In any case, when we say the brain is a language model, we're not claiming that the brain contains a perfect, 1:1 equivalent of every process undertaken when training and deploying a primitive modern model on transistor-based hardware; that's far too literal. The claim is that intelligence is fundamentally next-token prediction, and that the secret to our intelligence is a combination of statistics 101 and very efficient biological compute.
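The "statistics 101" half of that claim can be caricatured in a few lines: a toy bigram model that predicts the next word purely from co-occurrence counts. (The corpus here is invented for illustration; real models learn weights over subword tokens rather than counting whole words.)

```python
from collections import Counter, defaultdict

# Hypothetical micro-corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count what word follows each word in the training text.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Predict the most frequent continuation seen in training.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" followed "the" most often in the corpus
```

Scaling this idea from word counts to billions of learned parameters over long contexts is, on this view, the whole trick; whether that constitutes "intelligence" is exactly what the thread is arguing about.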
I understood you to be making four separate claims, here and below:
If you'll forgive me, this seems to be shooting very far out into the Bailey and I would therefore like to narrow it down towards a defensible Motte.
Counter-claims:
I think other people have covered qualia and philosophical questions already, so I won't go there if you don't mind.
How does this have any bearing on the question of human consciousness? As far as I can tell, the consciousness qualia are still outside our epistemic reach. We can make models that will talk to us about their qualia more convincingly than any human could, but that won't get me any closer to believing such a model is as conscious as I am.
The truth about consciousness has not been discovered. AI progress is revealing many things about intelligence, but I do not think it has told us anything new about consciousness.
I personally am most happy about the fact that very soon nobody serious will be able to pretend that we are equal, if only because some of us will have the knowledge and wherewithal to bend more compute to our will than others.
Just this morning I was watching YouTube videos of Lee Kuan Yew's greatest hits, and the very first short in the linked video had him explaining to his listeners how man was not born an equal animal. It's sad that he died about a decade and a half too soon to see his claim (which he was attacked for a lot) be undeniably vindicated.
Do you genuinely believe what you've written, or are you reflexively reacting nihilistically as AI learns to overcome tests that people create for themselves?
For the record, Chollet says (in the thread you linked to):
This isn't an argument; I just think it's important to temper expectations - from what I can tell, o3 will probably still be stumbling over "how many 'rs' in strawberrry" or something like that.
They won't ring a bell when AGI happens, but it will feel obvious in retrospect. Most people acknowledge now that ChatGPT 3.5 passed the Turing Test in 2022. But I don't recall any parades at the time.
I wonder if we'll look back on 2025 the same way.
On the other hand, it might work like self-driving cars: the technology improves and improves, but getting to the point where it's as good as a human just isn't possible, and it stalls at some point because it's reached its limits. I expected that to happen for self-driving cars and wasn't disappointed, and it's likely to happen for ChatGPT too.
Self driving cars are already better than humans, see Waymo's accident rates compared to humans: https://x.com/Waymo/status/1869784660772839595
The hurdles to widespread adoption at this point, at least within urban cities, are regulatory inertia rather than anything else.
They have a lower accident rate for the things that they are able to do.
Yes, and they are able to drive within urban cities and for urban city driving have a lower accident rate per mile driven than humans who are also urban city driving.
As far as I know that’s exclusively for particular cities in North America with wide roads, grid layouts, few pedestrians and clement weather. Which presumably therefore also means that they are likely to face sudden problems when any of those conditions change. I personally know of an experimental model spazzing out because it saw a pedestrian holding an umbrella.
All of which is before considering cost. There just isn’t enough benefit for most people to want to change regulation.
At the very least, saying self-driving cars are better than human needs some pretty stringent clarification.
San Francisco has plenty of narrow streets and pedestrians. Various parts of the service areas have streets that are not on a grid. There's obviously no snow in San Francisco, but the waymos seem to work fine in the rain.
A waymo model?
Self-driving cars are getting better and better though!
Asymptotically.
In what way did it pass the Turing test? It does write news articles very similar to those of a standard journalist. But that is because those people are not very smart, and are writing a formulaic thing.
If you genuinely do not believe current AI models can pass the Turing Test, you should go and talk to the latest Gemini model right now. This is not quite at the level of o3 but it's close and way more accessible. That link should be good for 1500 free requests/day.
I just gave it a cryptic crossword clue and it completely blew it. Both wrong and a mistake no human would make (it ignored most of the clue, saying it was misdirection).
Not to say it's not incredibly impressive but it reveals itself as a computer in a Bladerunner situation really quite easily.
On my first prompt I got a clearly NPC answer.
I followed up with this:
Me: Okay, tell me what predator eats tribbles.
I don't think so. And for some reason I've managed to repeatedly stump AIs with this question.
Me: Please tell me the number of r's in the misspelled word "roadrrunnerr".
That doesn't pass the Turing test as far as I'm concerned.
Also, even when I ask a question that it's able to answer, no human would give the kind of long answers that it likes to give.
And I immediately followed up with this:
Me: I drove my beetle into the grass with a stick but it died. How could I prevent this?
Me: I meant a Beetle, now what's your answer?
Me: Answer the question with the beetle again, but answer it in the way that a human would.
The AI is clearly trying much too hard to sound like a human and is putting in phrases that a human might use, but far too many of them to sound like an actual human. Furthermore, the AI messed up because I asked it to answer the question about the insect, and it decided to randomly capitalize the word and answer the wrong question.
This was all that I asked it.
Alternatively, it will never feel obvious, and although people will have access to increasingly powerful AI, people will never feel as if AGI has been reached because AI will not be autoagentic, and as long as people feel like they are using a tool instead of working with a peer, they will always argue about whether or not AGI has been reached, regardless of the actual intelligence and capabilities on display.
(This isn't so much a prediction as an alternative possibility to consider, mind you!)
Even in this scenario, AI might get so high level that it will feel autoagentic.
For example, right now I ask ChatGPT to write a function for me. Next year, a whole module. Then, in 2026, it writes an entire app. I could continue by asking it to register an LLC, start a business plan, make an app, and sell it on the app store. But why stop there? Why not just, "Hey ChatGPT go make some money and put it in my account".
At this point, even though a human is ultimately giving the command, it's so high level that it will feel as if the AI is agentic.
And, obviously, guardrails will prevent a lot of this. But there are now several companies making high-level foundation models. Off the top of my head: OpenAI, xAI (Grok), Anthropic (Claude), Meta (Llama), and Alibaba. It doesn't seem out of the realm of possibility that a company with funding on the order of $100 million will be able to repurpose a model and remove the guardrails.
(Also just total speculation on my part!)
Yes, I think this is quite possible. Particularly since more and more of human interaction is mediated through Online, AI will feel closer to "a person" since you will experience them in basically the same way. Unless it loops around so that highly-agentic AI does all of our online work, and we spend all our time hanging out with our friends and family...
Didn't Scott write a post on ACX about how AI has actually blown past a lot of old goalposts for "true intelligence" and our collective response was to come up with new goalposts?
What's wrong with coming up with new goalposts if our understanding of AI at the time of stating the original ones was clearly incomplete?
That is true, but to me it has felt less like goalpost-moving in service of protecting our egos and more like a consequence of our poor understanding of what intelligence is and how to design tests for it.
The development of LLMs has both created an incentive to develop better tests and exposed the shortcomings of the ones we have. What works as a proxy for human intelligence doesn't for LLMs.
Did it? Has the Turing test been passed at all?
An honest question: how favorable is the Turing Test supposed to be to the AI?
If all these things hold, then I don't think we're anywhere close to passing this test yet. ChatGPT 3.5 would fail instantly as it will gleefully announce that it's an AI when asked. Even today, it's easy for an experienced chatter to find an AI if they care to suss it out. Even something as simple as "write me a fibonacci function in Python" will reveal the vast majority of AI models (they can't help themselves), but if the tester is allowed to use well-crafted adversarial inputs, it's completely hopeless.
If we allow a favorable test, like not warning the human that they might be talking to an AI, then in theory even ELIZA might have passed it a half-century ago. It's easy to fool people when they're expecting a human and not looking too hard.
Only due to the RLHF and system prompt; that's an issue with the implementation, not the technology.
o3 can do research math, which is, like, one of the most g-loaded (i.e., it selects strongly for very high intelligence among humans) activities that exist. I don't think the story that they aren't coming for all human activity holds up anymore.
I wasn't arguing about to what degree they were or weren't coming for all human activity. But whether or not o3 (or any AI) is smart is only part of what is relevant to the question of whether or not they are "coming for all human activity."
There are definitely going to be massive blind spots with the current architecture. The strawberry thing always felt a little hollow to me though as it's clearly an artifact of the tokenizer (i.e., GPT doesn't see "strawberry", it sees "[302, 1618, 19772]", the tokenization of "st" + "raw" + "berry"). If you explicitly break the string down into individual tokens and ask it, it doesn't have any difficulty (unless it reassembles the string and parses it as three tokens again, which it will sometimes do unless you instruct otherwise.)
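The tokenizer effect can be sketched with a toy piece-to-ID table. (The IDs below just echo the numbers in this comment and are otherwise hypothetical; a real tokenizer like tiktoken derives them from its trained vocabulary.)

```python
# Hypothetical mapping from token IDs to string pieces, for illustration only.
vocab = {302: "st", 1618: "raw", 19772: "berry"}
ids = [302, 1618, 19772]  # this ID sequence is all the model receives

# Recovering the character-level view requires decoding the pieces -
# a view the model itself never operates on directly.
text = "".join(vocab[i] for i in ids)
print(text)             # strawberry
print(text.count("r"))  # 3
```

The letter count is trivial once you have the decoded string, which is exactly the point: the model is asked a question about a representation it was never given.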
Likewise with ARC-AGI, comparing o3 performance to human evaluators is a little unkind to the robot, because while humans get these nice pictures, o3 is fed a JSON array of numbers, similar to this. While I agree the visually formatted problem is trivial for humans, if you gave humans the problems in the same format I think you'd see their success rate plummet (and if you enforced the same constraints e.g., no drawing it out, all your "thinking" has to be done in text form, etc, then I suspect even much weaker models like o1 would be competitive with humans.)
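As a minimal sketch of that gap (with a made-up 3x3 grid, not a real ARC task), compare the JSON a model ingests with the picture a human evaluator gets:

```python
import json

# What the model is fed: a JSON array of numbers (made-up example grid).
raw = "[[0, 0, 1], [0, 1, 0], [1, 0, 0]]"
grid = json.loads(raw)

# What the human evaluator sees: a picture. Even a crude ASCII rendering
# makes the diagonal pattern jump out in a way the raw JSON doesn't.
for row in grid:
    print("".join("#" if cell else "." for cell in row))
# ..#
# .#.
# #..
```

Spotting a diagonal in the rendered form is instant; spotting it in the number soup takes deliberate effort, which is roughly the handicap the model is working under.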
I agree that any AI that can't complete these tasks is obviously not "true" AGI. (And it goes without saying that even if an AI could score 100% on ARC it wouldn't prove that it is AGI, either.) The only metric that really matters in the end is whether a model is capable of recursive self-improvement and expanding its own capabilities autonomously. If you crack that nut then everything else is within reach. Is it plausible that an AI could score 0% on ARC and yet be capable of designing, architecting, training, and running a model that achieves 100%? I think it's definitely a possibility, and that's where the fun(?) really begins. All I want to know is how far we are from that.
Edit: Looks like o3 wasn't ingesting raw JSON. I was under the impression that it was because of this tweet from roon (OpenAI employee), but scrolling through my "For You" page randomly surfaced the actual prompt used. Which, to be fair, is still quite far from how a human perceives it, especially once tokenized. But not quite as bad as I made it look originally!
To your point, someone pointed out on the birdsite that ARC and the like are not actually good measures for AGI, since if we use them as the only measures for AGI, LLM developers will warp their model to achieve that. We'll know AGI is here when it actually performs generally, not well on benchmark tests.
Anyway, this was an interesting dive into tokenization, thanks!
Yes, thanks for the expectations-tempering, and agree that there could still be a reasonably long way to go (my own timelines are still late-this-decade). I think the main lesson of o3, from the very little we've seen so far, is probably to downgrade one family of arguments/possibilities, namely the idea that all the low-hanging fruit in the current AI paradigm had been taken and we shouldn't expect any more leaps on the scale of GPT3.5->GPT4. I know some friends in this space who were pretty confident that Transformer architectures would never be able to get good scores on the ARC AGI challenges, for example, and that we'd need a comprehensive rethink of foundations. What o3 seems to suggest is that these people are wrong, and existing methods should be able to get us most (if not all) of the way to AGI.
On the side, I reckon this is a perfectly reasonable thing for llms to stumble over. If someone walked up and asked me "How do you speak English?" I'd be flummoxed too.
To get to the really important question: Does this mean we should be buying NVIDIA stock?
I'd rather buy TSMC. Fabs and foundries are the main bottleneck, and Nvidia's and TSMC's volumes will scale together. TSMC has invasion risk, but you can offset that by investing in Intel. TSMC's PE ratio is 30, and isn't pumped up by recent deliveries. Intel is technically in dire straits, but IMO that's priced in. Their valuation is 0.1x of TSMC's.
On Monday, I'll be buying some TSMC & Intel together. At $3.2T, I don't think Nvidia stock is going to do more than 2x above S&P growth.
The problem with TSMC is that if China ever goes for an invasion of Taiwan you could be looking at a 90 plus percent drop in value overnight. That’s why Buffett sold off most of Berkshire Hathaway’s TSMC stock. Although he’s pretty risk-averse generally.
That's why I am hedging by buying Intel too.
There are only 2 companies with any kind of foundry expertise. If TSMC goes under, Intel will at least double overnight.
Samsung
GlobalFoundries
Dropped out of the process node wars but still makes quality chips. If we're going for any non-cutting-edge foundries, the list grows quite a bit.
You're absolutely right, although one could argue that the business case would change if TSMC would go under due to geopolitics. Also legacy nodes account for some 30-40% of TSMC revenue.
Intel in particular also happens to be a strategically-vital interest for the United States, especially assuming TSMC's absence. (Their Israeli branch is also a strategic interest for Israel, though nobody talks about that one as much.)
Not an investor, but I was just thinking that. But the entire market would presumably go haywire if war happened with China. It would, in an economic sense, be the end of the world as we know it.
Well, Defense companies.
But I sure as hell don't want to try to actively invest during a hot war with China.
TSMC make NVIDIA's chips so discount them too.
Why not also ASML?
Because I am a dum dum.
I understand that they have a monopoly on the market. But what the fuck is photolithography? More realistically, how big is their moat? How lasting is their tech? How are the types of etching different? Why is it hard?
I am not going to jump in blind.
Well, obviously don't just take my word for it, but:
Photolithography is the use of high-power light, extremely detailed optical masks and precise lenses, and photoresistive chemicals that solidify and become more or less soluble in certain solvents upon exposure to light, to create detailed patterns on top of a substrate material that can block or expose certain portions of the substrate for the chemical modification required to form transistors and other structures necessary to create advanced semiconductors. It's among the most challenging feats of interdisciplinary engineering ever attempted by mankind, requiring continuous novel advances in computational optics, plasma physics, material science, chemistry, precision mechanical fabrication, and more. Without these continuous advances, modern semiconductors devices would struggle to improve without forcing significant complications on their users (much higher power dissipation, lower lifetimes, less reliability, significant cost increases).
The roadmap for photolithographic advances extends for at least 15 years, beyond which there are a LOT of open questions. But depending on the pace of progress, it's possible that 15 years of roadmap will actually last closer to 30; the last major milestone technological advance in photolithography, extreme-ultraviolet light sources, went from "impossible" to "merely unbelievably difficult" around '91, formed a joint research effort between big semiconductor vendors and lithography vendors in '96, collapsed to a single lithography vendor in '01, showed off a prototype that was around 4500x slower than modern machines in '04, and delivered an actual, usable product in '18. No one else has achieved any success with the technology in the ~33 years it's been considered feasible. There are efforts in China to generate the technology within the Chinese supply chain (they are currently sanctioned and cannot access ASML tech); this is a sophisticated guess on my part, but I'm not seeing anything that suggests anyone in China will have a usable EUV machine for at least a decade, because they currently have nothing comparable to even the '04 prototype, and they are still struggling to develop more than single-digit numbers of domestic machines comparable to the last generational milestone.
There are a handful of other lab techniques that have been suggested over the years, like electron beam lithography (etch patterns using highly precise electron beams - accurate, but too slow for realistic use) or nanoimprint lithography (stamp thermoplastic photoresist polymer and bake to harden - fast, cheap, but the stamp can wear and it takes a ludicrously long time to build a new one, and there's very little industry know-how with this tech). They are cool technology, but are unlikely to replace photolithography any time soon, because all major manufacturers have spent decades learning lessons about how to implement photolithography at scale, and no comparable effort has been applied to alternatives.
There are two key photolithographic milestone technologies in the last several decades: deep ultraviolet (DUV) and extreme ultraviolet (EUV), referring to the light source used for the lithography process. DUV machines largely use ArF 193nm ultraviolet excimer lasers, which are a fairly well-understood technology that have now been around for >40 years. The mirrors and optics used with DUV are relatively robust, requiring replacement only occasionally, and usually not due to the light source used. The power efficiency is not amazing (40kW in for maybe 150W out), but there's very little optical loss. The angle of incidence is pretty much dead-on to the wafer. The optical masks are somewhat tricky to produce at smaller feature sizes, since 193nm light is large compared to the desired feature sizes on the wafer; however, you can do some neat math (inverse Fourier transform or something similar, it's been a while) and create some kinda demented shapes that diffract to a much narrower and highly linear geometry. You can also immerse the optics in transparent fluid to further increase the numerical aperture, and this turns out to be somehow less complex than it sounds. Finally, it is possible to realign the wafer precisely with a different mask set for double-patterning, when a single optical mask would be insufficient for the required feature density; this has some negative effect on overall yields, since misalignments can happen, and the extra steps involved create opportunities for nanometer-scale dust particles to accumulate on and ruin certain devices. But it's doable, and it's not so insanely complex. SMIC (Chinese semiconductor vendor) in fact has managed quad-patterning to reach comparable feature sizes to 2021 state-of-the-art, though the yields are low and the costs are high (i.e. the technique does not have a competitive long-term outlook).
EUV machines, by contrast, are basically fucking magic: a droplet of molten tin is excited into an ionized plasma by a laser, and some small fraction of the ionization energy is released as 13.5nm photons that must be collected, aligned, and redirected toward the mirrors and optics. The ionization chamber and the collector are regularly replaced to retain some semblance of efficiency, on account of residual ionized tin degrading the surfaces within. The mirrors and optics are to some extent not entirely reflective or transparent as needed, and some of the photons emitted by the process are absorbed, once again reducing the overall efficiency. By the time light arrives at the wafer, only about 2% of the original light remains, and the overall energy efficiency of this process is abysmal. The wafer itself is actually the final mirror in the process, requiring the angle of incidence to be about 6°, which makes it impossible to keep the entire wafer in focus simultaneously, polarizes the light unevenly, and creates shadows in certain directions that distort features. If you were to make horizontal and vertical lines of the same size on the mask, they would produce different size lines on the wafer. Parallel lines on the mask end up asymmetric. I'd be here all day discussing how many more headaches are created by the use of EUV; suffice it to say, we go from maybe hundreds of things going mostly right in DUV to thousands of things going exactly right in EUV; and unlike DUV, the energies involved in EUV tend to be high enough that things can fail catastrophically. 
A few years back, a friend of mine at Intel described the apparently-regular cases of pellicles (basically transparent organic membranes for lenses to keep them clean) spontaneously combusting under prolonged EUV exposure for (at the time) unknown reasons, which would obviously cause massive production stops; I'm told this has since been resolved, but it's a representative example of the hundreds of different things going wrong several years after the technology has been rolled out. Several individual system elements of an EUV machine are the equivalent of nation-state scientific undertakings, each. TSMC, Intel, Samsung need dozens of these machines, each. They cost about $200M apiece, sticker price, with many millions more per month in operating costs, replacement components, and mostly-unscheduled maintenance. The next generation is set to cost about double that, on the assumption that it will reduce the overall process complexity by at least an equivalent amount (I have my doubts). It is miraculous that these systems work at all, and they're not getting cheaper.
If you're interested in learning more, there's a few high-quality resources out there for non-fab nerds, particularly the Asianometry YouTube channel, but also much of the free half of semianalysis.
From an investment standpoint... Honestly, I dunno. I think you might have the right idea. There's so much to know about in this field (it's the pinnacle of human engineering, after all), and with the geopolitical wedge being driven between China and the rest of the world, a host of heretofore unseen competitor technologies getting increasing focus against a backdrop of increasing costs, and the supposedly looming AI revolution just around the corner, it's tough to say where the tech will be in ten years. My instinct is that, when a gold rush is happening, it's good to sell shovels; AI spending across hyperscalers has already eclipsed inflation-adjusted Manhattan Project spend, and if it's actually going where everyone says it's going, gold rush will be a quaint descriptor for the effect of exponentially increasing artificial labor. So I'm personally invested. But I could imagine a stealthy Chinese competitor carving a path to success for themselves within a few years, using a very different approach to the light source, that undercuts and outperforms ASML...
Yes: And all other stocks in the world, roughly proportionate to their market values—preferably through broadly diversified, cost-efficient vehicles.
I don't know. To quote OpenAI, "it may be difficult to know what role money will play in a post-AGI world." While almost all stockholder distributions are currently paid in cash, in-kind distributions are not unknown, and could potentially become the primary benefit of holding AI-exposed companies. If Microsoft gives stockholders access to the OFFICIAL OPENAI EXPERIENCE MACHINE, you might not get access simply from holding SPY, QQQ, or VTI. Hell, you might want to direct-register your shares to prevent any beneficial ownership shenanigans.
I fail to see many AGI scenarios that don’t lead to 90 percent of humanity being taken to a ditch and shot.
Owning stock in the company that builds AGI is one of the best ways to increase your probability of being in the 10%!
Fun fact: this is isomorphic to Roko's Basilisk.
Maybe, but humans have a pretty easy time of doing that without AGI (see: the Khmer Rouge).
When cars were invented, 90% of horses weren't taken to the glue factories and shot, were they? They just kinda stopped breeding and withered down to entertainment, gambling, and hobbyists, while the rest died off on their own. ... right?
Seems like humanity is already horsing themselves to death without AGI.
I don't see why the potential of such a shareholder benefit wouldn't be priced in. I doubt I'm the first to think of this ("but I arrived at it independently" pete_campbell.png); however, it would be funny if Chat-GPT's advice were to invest in MSFT, NVDA, TSMC, telecom, robotics, weapons, etc.
I'm no financial analyst but I'm inclined to say yes, keep buying. I really think that despite the AI buzz and hype, most of the business world still hasn't priced in just how economically impactful AGI (and the path towards it) is going to be over the course of this decade. But you might also want to buy gold or something, because I expect the rest of this decade is also going to be very volatile.
Is NVIDIA really the only game in town here? No Chinese competitor giving them a run for their money, etc?
For the last few years I have thought that for sure other companies would be able to knock off some of their market share. This opinion has cost me thousands of dollars.
I think so. The compute-centric regime of AI goes from strength to strength; this is by far their most resource-intensive model to run yet. Still peanuts compared to hiring real programmers or mathematicians, though.
But I do have a fair bit of NVIDIA stock already, so I'm naturally biased.
Why? In time a handful of foundation models will handle almost everything; buying the chips themselves is a loser's game in the long term. When you buy Nvidia, you're really betting (a) on big tech margins remaining excessive and (b) on that margin being funneled directly to Nvidia in the hope that they can build competitive foundation models (not investment advice).
Nvidia is 80-90% AI; Microsoft is what, 20% AI at most? Buying Microsoft shares means buying Xbox and lots of other stuff that isn't AI. I have some MSFT (disappointing performance, tbh), TSLA, and AVGO, but Nvidia is still a great pick.
OpenAI and Anthropic have the best models, but they're not for direct sale.
In the compute-centric regime, chips are still king. OpenAI has the models, but can they deploy them at scale? Not without Nvidia. When AGI starts eating jobs by the million, margins will go to the moon, since even expensive AI is far cheaper and faster than humans.
I think the answer to this is just 'yes.'
In that I believe that in any world where Nvidia stock is tanking, there's probably a lot of other chaos, and you will be seeing large losses across the board.
The only inherent risk factor is that their product is dependent on thousands of inputs all around the world, so they're more sensitive than most to disruptions.
Apparently this AI is ranked as the 175th best coder on Earth. I think we've reached the point where anyone working in software needs to either pivot to developing AIs themselves or else look for an exit strategy. It looks like humans developing "apps", websites, and other traditional software have 1-3 years before they're in a similar position to horse-and-buggy drivers in 1920.
Considering that people already thought LLMs could write code well (they cannot, in fact, write code well), I'm not holding my breath that they're right this time either. We'll see.
My brother in Christ, the 174th best coder on Earth is literally an LLM.
What is your theory on why that LLM is not working at OpenAI and creating a better version of itself? Can that only be done by the 173rd best coder on Earth?
... why do you think LLMs are not meaningfully increasing developer productivity at OpenAI? Lots of developers use Copilot. Copilot can use o1.
If his claim were correct, LLMs wouldn't be a tool that helps OpenAI developers boost their productivity; LLMs would literally be writing better and better versions of themselves, with no human intervention.
Stackoverflow is better than most programmers at answering any particular programming question, and yet stackoverflow cannot entirely replace development teams, because it cannot do things like "ask clarifying questions to stakeholders and expect that those questions will actually be answered". Similarly, an LLM does not expose the same interface as a human, and does not have the same affordances a human has.
And that's why we don't call Stack Overflow things like "the 175th best coder on Earth".
No, it is ranked 175th in one specific ranking, one where it had access to all the existing analysis and answers for those questions. Solving a question is distinctly easier if you have seen the answer.
Make no mistake, LLMs are much better at coding than I would have predicted ten years ago. A decade ago I would have laughed at anyone predicting such progress; in fact, I mocked the very idea of AI generating code worth looking at. Trawling the internet for existing solutions is extremely powerful and useful, and the ability to (sometimes) adapt existing code to a novel situation still feels like magic.
But it is distinctly worse at handling novel situations and at taking context into account. Much worse than such a ranking suggests. And that is leaving aside all the benchmark cheating, overfitting, Goodhart's law, and similar traps.
If this AI really were the 174th best coder on Earth, they would already be releasing profitable software written by it. Instead, they release PR material. I wonder why? Maybe it is not so great at actual coding?
Says who? What's the evidence? I see these claims, but they don't seem backed up by reality. If they are so great, why haven't we practically fired all coders?
That tweet you linked does not mean what you say it means.
Competitive programming is something that suits LLMs much better than regular programming: the problems are well defined and short, and the internet is filled with examples to learn from. So to equate it with regular programming is not accurate at all.
Are LLMs decent (and getting better) at regular programming? Yes, especially combined with an experienced programmer dealing with something novel (to the programmer, but not to the programming community at large), in roughly the same way (but better) that Stack Overflow helps one get up to speed with a topic. In the hands of a novice programmer, chaos ensues, which might not be bad if it leads to the programmer learning. But humans are lazy.
Will LLMs replace programmers? Who knows, but given my experience working with them, they quickly start to struggle with anything that is not well documented on the internet. Which is sad, because I enjoy programming with them a lot.
Another thing to add: I think the low-hanging fruit is currently being picked dry. First it was increasing training for as long as it scaled (GPT-4), then it was run-time improvements on the model (having it re-read its own output and sanity-check it, increasing the cost of a query by a lot). I'm sure there are more improvements on the way, but as with most 'AI' stuff, the early improvements are usually the easiest. So saying that programming is dead in X years because "lllllook at all this progress!!!" is way too reactive.
My brother in Christ, up until now (can't speak for this one) LLMs have frequently gotten things wrong (because they don't actually understand anything) and can't learn to do better (because they don't actually understand anything). That's useless. Hell, it's worse than useless - it's actively harmful.
Perhaps this new one has surpassed the limitations of prior models before it, but I have my doubts. And given that people have been completely driven by hype about LLMs and persistently do not even see the shortcomings, saying it's "the 174th best coder on earth" means very little. How do I know that people aren't just giving into hype and using bad metrics to judge the new model just as they did the old?
I will start panicking when I see AI-generated code working correctly and requiring no changes, for three simple cases in a row that I actually needed to implement.
Right now AI is a powerful tool, but in no danger whatsoever of replacing me.
Though yes, progress is scary here.
Why would this field be at unusually high risk? Of all things, it is a field where minor mistakes and inconsistencies can take down an entire system. And for now, AIs are failing to stay consistent across large projects.
I find that frontier LLMs tend to be better than I am at writing code, and I am pretty good but not world class at writing code (e.g. generally in the first 1% but not first 0.1% of people to solve each day of advent of code back when I did that). What's missing tends to be context, and particularly the ability to obtain the necessary context to build the correct thing when that context isn't handed to the LLM on a silver platter.
Although a similar pattern also shows up pretty frequently in junior developers, and they often grow out of it, so...
LLMs are great at writing code in areas utterly unfamiliar to me, often better than reading the documentation.
But rewriting/tweaking/fixing is nearly always needed, for anything beyond the most trivial examples.
Maybe I am bad at giving them context.
You, me, and everyone else. Sarah Constantin has a good post The Great Data Integration Schlep about the difficulty of getting all the relevant data together in a usable format in the context of manufacturing, but the issue is everywhere, not just manufacturing.
There's a reason data scientists are paid the big bucks, and it sure isn't the difficulty of typing
import pandas as pd.
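That "schlep" point can be made concrete. Below is a minimal sketch (all data, column names, and the `normalize` helper are hypothetical) showing that typing the import is trivial, while reconciling two messy sources is the actual work:

```python
import pandas as pd

# Typing the import is the easy part.
orders = pd.DataFrame({
    "customer": ["ACME Corp", "acme corp.", "Globex"],
    "amount_usd": ["1,200", "950", "2,400"],   # strings with thousands separators
})
crm = pd.DataFrame({
    "customer": ["Acme Corp", "Globex LLC"],
    "segment": ["enterprise", "mid-market"],
})

# The schlep: normalizing join keys and types before any analysis can start.
def normalize(name: str) -> str:
    return (name.lower()
                .replace(".", "")
                .replace(" corp", "")
                .replace(" llc", "")
                .strip())

orders["key"] = orders["customer"].map(normalize)
crm["key"] = crm["customer"].map(normalize)
orders["amount_usd"] = orders["amount_usd"].str.replace(",", "").astype(float)

merged = orders.merge(crm[["key", "segment"]], on="key", how="left")
print(merged[["key", "amount_usd", "segment"]])
```

In any real project the normalization rules above would be dozens of special cases discovered one painful mismatch at a time; that, not the pandas API, is where the time goes.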
I think that says more about the grading metrics than anything else.
Also is this the "Dreaded Jim" from the LessWrong/SSC days?
Well, given that benchmarks show that we now have "super-human" AI, let's go! We can do everything we ever wanted to do, but didn't have the manpower for. AMD drivers competitive with NVIDIA's for AI? Let's do it! While you're at it, fork all the popular backends to use it. We can let it loose in popular OSes and apps and optimize them so we're not spending multiple GB of memory running chat apps. It can fix all of Linux's driver issues.
Oh, it can't do any of that? Its superhuman abilities are only for acing toy problems, riddles and benchmarks? Hmm.
Don't get me wrong, I suppose there might be some progress here, but I'm skeptical. As someone who uses these models, every release since the CoT fad kicked off didn't feel like it was gaining general intelligence anymore. Instead, it felt like it was optimizing for answering benchmark questions. I'm not sure that's what intelligence really is. And OpenAI has a very strong need, one could call it an addiction, for AGI hype, because it's all they've really got. LLMs are very useful tools -- I'm not a luddite, I use them happily -- but OpenAI has no particular advantage there any more; if anything, for its strengths, Claude has maintained a lead on them for a while.
Right now, these press releases feel like someone announcing the invention of teleportation, yet I still need to take the train to work every day. Where is this vaunted AGI? I suppose we will find out very soon whether it is real or not.
I'm afraid apps won't become lighter -- making them light is easy, but there is little market incentive to; an AGI programmer would rather create more dark patterns than
Still, I think we'll notice a big difference when you can just throw money at any coding problem to solve it. Right now, it's not like this. You might say "hiring a programmer" is the equivalent, but hiring is difficult, you're limited in how many people can work on a program at once, and maintenance and tech debt become an issue. But when everyone can hire the "world's 175th best programmer" at once? It's just money. Would you rather donate to the Mozilla Foundation or spend the equivalent to close out every bug on the Firefox tracker?
How much would AMD pay to have tooling equivalent to CUDA magically appear for them?
Again, I think if AGI really hits, we'll notice. I'm betting that this ain't it. Realistically, what's actually happening is that people are about to finally discover that solving leetcode problems has very little relation to what we actually pay programmers to do. Which is why I'm not too concerned about my job despite all the breathless warnings.
When everyone can hire the world's 175th best-at-quickly-solving-puzzles-with-code programmer at once. For quite a significant cost. I think people would be better off spending that amount of money on Gemini + a long context window containing the entire code base + associated issue-tracker issues + chat logs for most real-world programming tasks, because writing code to solve well-defined, well-isolated problems isn't the hard part of programming.
To be fair, humans could choose to do this. We perversely choose not to. Enormous quantities of computational power are squandered on what could be much lighter and faster programs; software latency doesn't improve over time, as every marginal improvement in hardware speed is counteracted by equivalently slower software.
I'm not sure how perverse it is.
Massively upgrading my laptop would cost me (after converting time to money) a few days of work; rewriting my OS/text editor would take years of work.
I am not sure the total cost of badly written OSes/apps would even exceed the cost of a rewrite.
16 GB of RAM for a laptop costs about 5 hours of minimum-wage work, and that is in a poor country.
And even if it would be worth it overall, we again have a standard-issue coordination problem, and not even a particularly evil one.
OK, I can make some program faster. How will I get people to pay me for this? People consistently (with rare exceptions) prefer buggy, laggy programs that are cheaper or have more features.
Yeah, I mean, are the AI hype train people aware that, from the perspective of an interested but still fundamentally "outside" normie, the last few years have basically consisted of continuous breathless announcements that AGI is six months away, or literally here, and our entire lives are going to change, while the actual level of change in one's daily life has been... well, existent, of course, especially if one works in an adjacent field, but still quite a bit less than promised?
So we have an even more sophisticated way for college students to cheat on tests? Seems to be the only useful thing so far
The thing is, if you joke that the thing can effectively help students cheat, you still imply it's somewhere around the intelligence level of an average college student, which certainly implies it is useful in the ways that college students or recent grads are.
Best current estimates are that college student IQ is about the population average of 100.
Very counter intuitive. Do you have more on this?
Naively, you'd assume that much of the left tail has no chance to attend college, and much of the left half little motivation to do so (they didn't enjoy learning in high school, so don't really want to continue their education).
Is there something that filters the right tail just as strongly?
There was a meta-analysis published to this effect, although it was controversial:
https://x.com/cremieuxrecueil/status/1763069204234707153
I was at a demo at work two days ago where a mid level engineer was showing some (very useful to the organization) results on data collection and analysis. At the end, he shows us how to extend it and add our own graphs etc, and he’s like “this python data analysis tooling might look messy and intimidating, but I had zero idea how to use it two days ago myself, and it’s all basically just a result of a long ChatGPT session, just look at this (he shows the transcript here)”.
This effectively means that if ChatGPT saved him half a day of work, then this generates hundreds of dollars for the company in extra productivity.
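The arithmetic behind that claim is straightforward. As a sketch, using a hypothetical fully-loaded engineering cost (the $100/hour figure is an assumption, not from the source):

```python
# Hypothetical figures: adjust to your org's actual loaded cost.
loaded_cost_per_hour = 100   # salary + benefits + overhead, USD
hours_saved = 4              # "half a day of work"

value = loaded_cost_per_hour * hours_saved
print(f"Estimated value of the ChatGPT session: ${value}")
```

Even at a much lower loaded cost, one such session per engineer per week compounds quickly across a team.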
I use o1 a bunch for coding, and it still gets things wrong a lot, I'd happily pay for something significantly better.
The only real sign we're near the end-zone is when we can ask a model how to make a better model, and get useful feedback which makes a model which can give us more and better advice.
I certainly foresee plenty of disruption when we reach the point of being willing to replace people with AI instances on a mass level, but until the tool allows for iterative improvement, it's not near the scary speculation levels.
You already can. Chatgpt says:
Increase Model Depth/Width: Add more layers or neurons to increase the capacity of your neural network.
Improve the Dataset
Computational Resources
Use Better Hardware: Train on GPUs or TPUs for faster and more efficient computations.
There really isn't much secret sauce to AI; it is just more data, more neurons.
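To illustrate how mechanical that advice is, here's a toy sketch (layer sizes are arbitrary examples) that counts the parameters of a plain fully-connected network. "Add more neurons" and "add more layers" are literally just bigger integers in a list:

```python
def mlp_param_count(layer_sizes):
    """Weights + biases for a plain fully-connected network."""
    return sum(m * n + n for m, n in zip(layer_sizes, layer_sizes[1:]))

base   = [784, 256, 10]             # a small classifier
wider  = [784, 1024, 10]            # "add more neurons"
deeper = [784, 256, 256, 256, 10]   # "add more layers"

for name, sizes in [("base", base), ("wider", wider), ("deeper", deeper)]:
    print(f"{name}: {mlp_param_count(sizes):,} parameters")
```

Of course, knowing to scale is trivial; paying for the compute and data to train the scaled-up model is where the actual moat lies.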
Presumably this meant "the sort of useful feedback that a smart human could not already give you".
Claude can give useful feedback on how to extend and debug vllm, which is an llm inference tool (and cheaper inference means cheaper training on generated outputs).
The existential question is not whether recursive self-improvement is possible (it is), it's what the shape of the curve is. If it takes an exponential increase in input resources to get a linear increase in capabilities, as has so far been the case, we're ... not necessarily fine, misuse is still a thing, but not completely hosed in the way Yud's original foom model implies.
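A toy model of that curve shape (purely illustrative numbers, not a claim about real scaling laws): if each +1 in capability costs a constant multiple of compute, then linear capability gains require exponential resources, and foom stalls on the compute bill.

```python
import math

def compute_needed(capability, base=1.0, factor=10.0):
    """Toy model: each +1 capability level costs `factor`x the compute."""
    return base * factor ** capability

# Linear capability gains, exponential compute bills.
for level in range(5):
    print(f"capability +{level}: {compute_needed(level):>10,.0f} units of compute")

# Inverse view: capability grows only with the log of compute spent.
assert math.isclose(math.log10(compute_needed(3) / compute_needed(0)), 3.0)
```

Under this shape, a self-improving model that makes itself 10% more compute-efficient buys only a small constant capability bump, not a runaway.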
Altman saying "Maybe Not" to an employee who said they will ask the model to recursively improve itself next year. https://x.com/AISafetyMemes/status/1870490131553194340
The problem of improving AI is a problem which has seen an immense investment of human intelligence over the last decade on all sides.
On the algorithmic side, AI companies pay big bucks to employ the smartest humans they can find to squeeze out any improvement.
On the chip side, the demand for floating point processing has inflated the market cap of Nvidia by a factor of about 300, making it the second most valuable company in the world.
On the chip fab side, companies like TSMC are likewise spending hundreds of billions to reach the next tech level.
Now, AI can do many tasks for which you would previously have paid humans perhaps $10 or $100. "Write a homework article on Oliver Cromwell." -- "Read through that thesis and mark any grammatical errors."
However, it is not clear that the task of further improving AI can be split into any number of separate $100 tasks, or that a human-built AI will ever be so good that it can replace a researcher earning a few hundred thousand dollars a year.
This is not to say that it won't happen or won't lead to the singularity and/or doom, perhaps the next order of magnitude of neurons will be where the runaway process starts, but then again, it could just fizzle out.
One lesson I think we should be learning, but that doesn't seem to be sinking in yet, is that we're actually pretty bad at creating benchmarks that generalize. We assume that, because it does really well at certain things that seem hard to us, it is highly intelligent, but it's been pretty easy so far to find things it is shockingly bad at. Progress has been impressive, but most people keep overestimating its abilities because they don't understand this, and because they focus more on the things it can do than on the things it can't.
There have been a lot of ridiculous claims within the last couple of years, saying things like it can replace junior software developers, that it is just as intelligent as a university student, or that it can pass the Turing test. People see that it can do a lot of hard things and conclude that it is basically already there, not understanding how important the things it still can't do are.
I'm sure it will get there eventually, but we need to remember that passing the Turing test means making it impossible for someone who knows what he's doing to tell the difference between the AI and a human. It very much does not mean being able to do most of the things that one might imagine a person conducting a Turing test would ask of it. AI has been tested on a lot of narrow tasks, but it has not yet done much useful work. It cannot go off and work independently. It still doesn't seem to generalize its knowledge well. Guessing what subtasks are important and then seeing it succeed on those tests is impressive, but it is a very different thing from actual proven intelligence at real-world problems.
They did this though. They had to give GPT-4o some prompting to dumb it down, like 'you don't know very much about anything, you speak really casually, you have this really convincing personality that shines through, you can't do much maths accurately, you're kind of sarcastic and a bit rude'...
You might see the dumb bots on twitter. But you don't see the smart ones.
Source?
Seems this paper is about GPT-4 as opposed to 4o but it did pass the Turing test.
https://arxiv.org/pdf/2405.08007
Heartily endorsed.
I'm the lead algorithms developer for a large tech company (I'm not going to say which one, to avoid doxxing myself, but I can assure you that you have heard of us), and I find that I tend to be more "bearish" on the practical applications of Machine Learning/AI than a lot of the guys on the marketing and VC sides of the house or on Substack, because I know what is behind the proverbial curtain and am acutely aware of its limitations. A sort of pseudo-Dunning-Kruger effect, if you will.
You know, in a perfect world, AI would finally stop the civilization destroying policy of importing the 3rd world because we need cheap, dumb labor. AI should be cheaper and less dumb than them.
Unfortunately, I know I'm going to get even more 3rd world "replacement population", because "We've always been a nation of immigrants" and apparently the neoliberal solution to global poverty is to invite everyone here so we can all be poor together.
There is, of course, an ideological component to mass immigration. But I think it will stop as soon as domestic unemployment rates become high enough, such that (from that perspective) the sooner the better.
Not gonna happen. Even if AI is strictly superior to most people at clerical/intellectual jobs (and I doubt that), there is unlimited demand for dog-walking at $10/hour.
The machines long ago replaced human physical power, and animal physical power, and weaving and sewing and cobbling and copying and calculating and transporting and and and ... never was man left idle.
The big question is if LLMs will be fully agentic. As long as AIs are performing tasks that ultimately derive from human input then they're not gonna replace humans.
Color me skeptical. Sounds like just another marginal improvement at most. The problem with these metrics is that model makers increasingly seem to be "teaching to the test".
The vibes haven't really shifted since ChatGPT 4, nearly two years ago now.
I'm a little suspicious: they released Sora to public access even though it's only slightly better than other video-generation models after introducing it in February, so it reads as a way to keep the hype train moving because they don't have a new model worthy of the GPT-5 moniker yet.
I default to the 'HOLY CRAP, look what this thing can do' benchmark. If somebody's trying to show you scores, it's an incremental update at best.
Is there a possibility that answers to this challenge were included in the training set?
They have a public dataset and a private one, and compare the scores for both of them to test for overfitting/data contamination. You can see both sets of scores here, and they’re not significantly different.
Of course it’s always possible that there has been cheating on the test in some other way, and so François Chollet has asked for others to replicate the result.
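A minimal sketch of the kind of contamination check being described (the scores and tolerance below are made-up numbers, not the actual benchmark figures): flag likely train-set leakage when the public-set score substantially exceeds the held-out private-set score.

```python
def contamination_flag(public_score, private_score, tolerance=0.05):
    """Flag likely data leakage if the model does much better on the
    public problems than on the held-out private ones."""
    return (public_score - private_score) > tolerance

# Hypothetical scores in [0, 1]; similar scores suggest no gross leakage.
print(contamination_flag(0.88, 0.85))  # small gap, no flag
print(contamination_flag(0.95, 0.60))  # suspicious gap, flagged
```

This only catches gross memorization, of course; subtler forms of teaching-to-the-test would pass it, which is why replication by outsiders matters.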
I’d wait for mass layoffs at OpenAI before we take any claims of “AGI achieved internally” seriously.
An amazing accomplishment by OAI.
On the economic level, they spent roughly $1M to run a benchmark and got a result that any STEM student could surpass.
Is that yawnworthy? No: it shows that you can solve human-style reasoning problems by throwing compute at them. If there was a wall, it has fallen, at least for the next year or so. Compute will become cheaper, and that's everything.
So, this is very interesting. I wonder: was his plan to essentially make this look like an Islamist attack, to stir up hostility toward Muslim immigration? I imagine he understood that everyone would, justifiably, assume that an Arab man driving his car into a Christmas market (with an explosive device inside, no less!) would be interpreted by all sides as an Islamist terror attack. Maybe he was hoping nobody would identify him and discover his Twitter account? If he did expect people to find his account, I really have no idea what political outcome (if any) he was hoping to facilitate as a result of this attack.
On the one hand, his background as a former refugee from the Middle East makes him an incredibly unwieldy weapon for progressives to use to discredit immigration skeptics; on the other hand, his support for the AfD and his criticism of Muslim immigration makes him pretty much impossible to use as a cudgel by the right wing. Some commentators, such as Keith Woods, are taking the position that this proves that all Arab immigration to Europe should be cut off, because even the apparently liberal/assimilated ones are still ticking time bombs of potential violence; this seems fairly tendentious even to me, given what we know about the guy so far.
I’ll consolidate my replies to @SecureSignals, @Walterodim, and @Belisarius, since they’re all making similar points.
Firstly, I agree that this guy should not have been allowed to live in Germany. Now, to be clear, he came as an asylum seeker in 2006, nearly a decade before Merkel’s Mistake; at the time, Arab migration to Germany was, as I understand it, quite minimal (it was Turks who were by far the largest source of Middle Eastern immigration at the time) and it’s significantly more understandable that he would have been let in. There was no large insular Arab community in Germany into which he could have ensconced himself to obviate the need to assimilate. He was fluent in English, and had clear and explicit anti-Islam sentiments. He seems basically like an Ayaan Hirsi Ali type, and given how live a threat Islamist terror seemed at that time, I think it was understandable to expect this guy to act as a potentially impactful voice steering young Arab men away from Islamist radicalization. (And, to be clear, it’s entirely plausible that he did have some impact, substantial or not, of that nature at the time.) Given what we know now in hindsight, not only about him personally but about the larger effects of Arab immigration to Europe, it’s clear that the stance toward asylum seekers should have been far more exclusionary than it was at the time.
However, I want to make sure that opposition to Arab immigration is based on specific, articulable, predictive claims. I oppose large-scale Arab immigration because of the specific qualities that I expect most Arabs (and, especially, most Arabs choosing to emigrate to Europe) to possess, and because of the specific actions they are likely to take and the motivations behind those actions. Let’s look at what specific problems/pathologies I expect to accompany large-scale Arab immigration, and analyze the extent to which this guy embodied those pathologies:
I expect Arabs to create culturally-insular ethnic enclaves, in which they are able to continue to replicate the cultural practices of their homeland rather than assimilating. Well, this guy was fluent in English, and had already marked himself as not only culturally-distinct from the vast majority of Arabs, but actively in opposition to them. It is true that he brought baggage and cultural grievances with him from his homeland; however, those grievances toward Arab Muslims are pretty much exactly the same grievances that liberal Westerners had about Arab Muslims at the time. “They’re culturally backward, they mistreat women, their culture is anti-Western, and anti-science, they’re susceptible to radical jihadist beliefs.” All of those grievances are true and valid! This is the Sam Harris, Richard Dawkins, Ayaan Hirsi Ali line about Arab Muslims. They’re not the sort of arcane inter-ethnic blood feuds and tribal jockeying we normally associate with foreign ethnic groups immigrating and co-mingling in places like the U.K. and Canada.
I expect a large percentage of Arab immigrants to be uneducated, unskilled, to spend a long time (potentially their entire lives) unemployed and on welfare. Well, this guy was a doctor — okay fine, a psychiatrist, so barely a doctor, but at least it’s a well-paying job that kept him gainfully employed and interacting economically with the German public. He certainly doesn’t pattern-match to the average Arab in Germany; as @Walterodim points out, he’s more like the average educated Indian in Canada.
I expect large numbers of Arab men to fall into lives of crime, both petty and organized. Well, again, this guy does not appear to have any criminal record. He hasn’t fallen in with Arab gangs, he hasn’t become some listless glowering thug milling about the town square acting like a savage.
I expect some small number of Arab men to commit serious acts of terrorism, motivated by jihadist beliefs and by a hatred of their host societies. This is where we have to carefully discern what happened here. In pretty much all of the other terror attacks committed by Arabs in Europe, the ideological motivations were clearly religious and specifically Islamist in character. The Bataclan attackers, the guys driving their trucks into markets, the guys cutting priests’ heads off — they all make their Islamist beliefs very explicit. That’s not why this guy appears to have done what he did.
So, why did he? If we want to talk about ideology, his views are difficult to pattern-match to other large ideological trends. On the one hand, he was very consistent about Germany’s need to resist Islamization. In that sense, he aligns very strongly with the AfD and other right-wing nationalist groups. However, he also wanted more immigration of a very specific class of Arab Middle Easterners: ex-Muslim/anti-Islam refugees, and particularly educated women. In that sense he’s not only similar to the more moderate right (what wignats derisively call “the kosher right”) but also to some of the more eclectic right-wingers who say the West should let in plenty of attractive female refugees, while cutting off all or nearly all male immigration. And of course his stated commitment to progressive values such as feminism and economic leftism puts him almost more in line with the sort of leftist terrorism Germany faced in the ’70s. (Although that terrorism had a strong pro-Palestinian valence, whereas this guy was a Zionist.) But in this case his choice of targets doesn’t really seem to align with any expected ideological movements. This was no act of right-wing nationalist terrorism — he’s no Anders Breivik or Brenton Tarrant — because his victims were (at least presumably) white Germans. He really did seem to resent Germany and to want to strike a blow against it on behalf of his in-group, but his in-group isn’t Arabs as a whole, it isn’t Muslims, and it isn’t even Saudis. It appears to just be “ex-Muslim apostates (especially women) fleeing the Middle East.” I was joking yesterday, “Is this the first Reddit Atheist terror attack?” Yes, he’s a brown Arab, but in terms of his worldview he’s got more in common with murdered Dutch anti-Muslim filmmaker Theo van Gogh than with the Muslims who killed him.
So, in what ways is this guy’s terror attack similar to previous acts of Arab terrorism? What patterns does it match? Certainly in terms of its specific methodology it’s similar to other terror attacks we’ve seen in Europe, both with the use of a car driving through a Christmas market, and with the (thankfully unused) explosive device. But in terms of its motivations I think it’s sufficiently different from previous acts of terrorism that it’s not really instructive. While obviously there are genetically-influenced psychological differences between population groups, and Arabs are a population group with heritable traits, I don’t think anyone’s found any evidence for a “terrorism gene” among that population. If Arabs tend to be more violent than Europeans, it’s because they tend to be lower-IQ and to live in low-trust backward societies wherein violence is an effective and sanctioned way to obtain power and resources. It’s not because some voice in the back of their head, whispering to them like the Orc god Gruumsh, instructs them to drive their cars into crowds.
I saw some DR commentator (probably Captive Dreamer) say, “If that’s the model migrant, imagine how much worse the rest are.” This is probably effective propaganda, but it doesn’t seem very intellectually substantive. This guy’s pathologies, and the reasons he shouldn’t have been in Europe, were of a markedly different character from those of the true dregs of the Arab world which have been washing up on the shores of Europe. The “model migrants” in, say, Canada are problematic largely because they use their political power to facilitate bringing in more of their countrymen. In that narrow sense, this guy’s story is certainly instructive. It is true that his #1 loyalty was to his in-group, which did not include most white Germans, and that in the end he was willing to commit savage violence against his host country in order to (in some twisted, confused, politically aimless way) earn concessions for people like himself.
There are, though, two distinct sets of concerns when it comes to the immigration discussion - one is about the dangers presented by the importation of educated foreigners who will use political and cultural power to advocate for increased immigration, and who will dilute the political and cultural power of the native population. Whatever you want to say about these types of people, likelihood of committing terror attacks has simply never been a plausible vector of attack against them. This is, so far as I can tell, the first high-profile attack of this kind committed by a guy with this background and these specific beliefs, and I don’t think we’ll see many more examples in the future.
The other half of the immigration discussion is about low-skilled, unassimilable, criminally-inclined young, susceptible-to-jihadist-radicalization men and their welfare-dependent spouses. While this has largely been the story of Arab immigration to Europe (particularly post-2015) it is not this guy’s story. Whatever he is, he’s not an example of that. He did assimilate to an ideology with a lot of Western adherents; he was just willing to do what few of those Westerners would have done as a result of that ideology. (And I want people to be careful in their speculations about why he was willing to do so.)
People like Keith Woods would like to essentially merge these conversations and say that it’s all the same conversation: All foreigners in Europe are bad, none of them belong there, even the supposed best of them bring problems, they’ll never be assimilable, they’ll always work against us. And what I’m saying is that I don’t think this is credible. There are foreigners in Europe — for example, East Asian immigrants — who have not, so far as I can tell, created any problems for their host societies. If Germany let in 100,000 Vietnamese immigrants tomorrow, my prediction is that those immigrants would flourish, as they have in America. It’s not simply “being foreign” that makes Arab immigrants a bad fit for European society; it’s their specific traits, the specific beliefs they have, their lower IQ and lower impulse control, their hatred for Western norms, their parasitic dependency on the largesse of the welfare state, and the difficulty in integrating them into society. This guy’s problems don’t really map onto any of those concerns, except in a roundabout and strained way.
The argument people like Keith Woods make is that these Arab immigrants will never be German, no matter how long they are there, whether or not they learn the language, whether they commit crime or do not commit crime, whatever they tweet or whatever political policy they support, whatever religion they follow; the only certainty is that they will never be German. So your rebuttal is not responsive to the issue they fundamentally have with the mass migration of non-European people into European civilization.
It's not just about crime, it's not just about religion, it's not just about terrorism, although those things can be relevant symptoms, it's about jealously guarding a European genetic and civilizational inheritance from being Africanized, replaced by Arabs or Chinese, Indians or whatever.
Your argument is most responsive to the Conservatives who just say "hey, I'm not racist I just oppose mass Arab migration because I don't want terrorist attacks in my Christmas villages." For those people you can do your well ackhually it wasn't Islamic extremism that inspired the attack, but that just doesn't work on the DR perspective.
Why stop at 100,000? Why not 100 million? Even if mass migration of Asians, Vietnamese, Chinese to Germany caused a reduction in crime and created economic growth do you think the DR should accept these foreigners because they commit less crime or raise GDP? Why not replace all of Europe with Chinese if it lowered crime and raised IQ? It's only conservatives who say it's about those things.
This terrorist attack is pertinent to the DR perspective because it provides a symbolic counterexample to the lie that, no matter who you are, you can go to Germany, learn the language, obey the law and, congratulations, you're German! No, you are not. The American Midwest family with Germanic ancestry they don't even know about is more German than they will ever be. So this man, ostensibly the "model" Arab immigrant, still being inspired to commit this act shatters the liberal illusion of assimilation, and the idea that being German is just an idea.
His motivation was European immigration policy. You try to be ultra-specific about it so you can brush it off as a one-off, but it introduces the likelihood of violence in response to right-wing immigration reform in Europe. We may see more of that type of violence than radical Islam-inspired violence, although a lot of it will be blended together.
We have seen a similar pattern with free speech in Europe: terrorist attacks in response to offensive speech did not motivate a backlash against mass migration; they motivated crackdowns on "hate speech" out of fear of offending Muslims. So if we see more Arab terrorists attack Europe over European immigration reform, we will likely see pressure put against immigration reform. This is especially relevant at a time when parties are flirting with the idea of remigration.
You don't think that the AfD and other European parties beginning to support remigration is likely to inspire any more of this violence? We already see race riots and organized street violence by African and Arab gangs. That already happens, and it's political; it's not driven by radical Islam. So your denial that we'll see more of this sort of political violence is absurd.
Yes, the likelihood is near 100% that this sort of violence is going to influence European policy on immigration. Most likely it will cause authorities to crack down harder on political support for remigration, because authorities will plausibly be able to say that supporting this policy is likely to foment violence. Certainly if that policy were to be pursued, violence from deportees would be a top concern of that policy. So there's simply no reality in which the prospect of violence from these African and Arab migrants is irrelevant, Muslim or otherwise.
This attack is more relevant because it was motivated by European immigration policy than if it were just radical Islam. It's proof that mass migration irrevocably influences politics and "assimilation" is fundamentally a lie.
I've seen a lot of Indians in the South. I've never seen a culturally Southern Indian. It would probably just make me laugh. It's not them, and they are not us.
But I'll admit that my motivation is not "We must preserve Southern Culture!" My motivation is directing ethnogenesis in a eugenic direction, and I am far more terrified of my descendants being half-Indian (or at least the macro-effect of such an ethnogenesis in aggregate) than I am of Southern Culture going away. I am more concerned with Europe becoming Arab than I am with German culture per se.
That's what we've been arguing about‽
The way things are going (assuming humanity survives at all), a century from now we will be able to take the best genes from every branch and twig of the human family tree, and splice them into anyone who wants them!
That is science fiction; if, when, or how any of that happens does not dispel the immediate concern of demographic replacement by non-Europeans. There would obviously be huge political pressure regulating how that technology is used. Mate selection is not a deprecated concern, and it's foolish to literally put all the eggs in the basket of "mate selection doesn't matter because gene editing is going to save us."
And what would you call the idea of people on multiple continents conversing with one another without leaving their homes, by means of a network of computing machines spanning the entire globe and beyond?
If I have to be maximally charitable to the ethnomaxxing view, we're going to need stable high-trust high-IQ societies in order to get to gene editing within the century.
Do you actually believe this will happen?
I don't find it ridiculously implausible that there will be a stratum of society with a TFR of <1 which embraces gene editing technology. But high human capital embracing natural reproduction at high rates seems a necessity for maintaining industrial society over the long run, because of that TFR issue.
It is difficult to imagine gene editing not driving down fertility rates among whoever embraces it. I'm a techno-optimist; I think cheap fusion and orbital solar power and space colonization are solvable problems. But I also think we need the people to do it. And the people who can do it don't have any room for their TFR to drop any further. South Korea is the most innovative country in the world (literally). There is a human element to our science fiction future, and that human element needs to be taken into account. Gattaca was a dystopia because it comes off as one; no one gives a damn whether you think it sounds nice in theory, not in their heart of hearts.
I think what I'm trying to say is- your idea of making superbabies by gene editing won't produce enough of these superbabies to even maintain itself. Because it just doesn't fit what people actually want.
What kind of Indian? Culturally southern Kashadas and Cherokees are a dime a dozen.
Dot Indians seem to be straightforwardly here as a minority that doesn't want to assimilate in any way, and that is how most legacy southerners - both white and black - seem to view them as well.
Now unlike you I do care about preserving southern culture. I probably wouldn't let my daughter marry an Indian who hadn't been disowned, but that's because of their culture. Marrying a dot Indian woman is a different matter; she can learn to make gravy.
There are a lot of things about Southern Culture I admire, a lot of things I don't. But it's not feasible or desirable to be hung-up on freezing cultures in time. I'm more interested in the generation of future Culture than I am the preservation of 19th century Culture. This is what differentiates Conservatism from the DR, at least when the DR is at its best.
Southern Culture in particular is tied up with the Lost Cause, I'm not interested in Southern men retaining any sort of identity with a Lost Cause.
I am interested in Holocaust Revisionism because I think deconstructing those myths is important for the generation of future Culture. I'm not interested in the Lost Cause because it leads Southern Men, who have a lot of admirable attributes, to a cul-de-sac.
In my effortpost from last week, I talked about the "respectable" media's reluctance to mention anything about the identity of the perpetrator who committed the shocking knife attack which precipitated the November riots in 2023. Some outlets, in an effort to disguise the fact that he was Algerian, described him as "born outside of Ireland but an Irish citizen" or similar.
The clear intention was to give the impression that the perpetrator was "one of our own", so racism was misplaced. But of course, an anti-immigration activist would counter - the fact that he was an Irish citizen makes it even worse! It'd be one thing if he snuck into the UK, took a ship to Belfast then crossed the border into the south and applied for "asylum" as a "refugee", and committed this attack while he was in the legal limbo of waiting for his asylum application to be processed. The Irish government could perhaps be forgiven for extending clemency to a man about whom they know nothing by allowing him to stay in the country pending his asylum application, and then he goes on to commit a terrible crime. That's the kind of unfortunate but inevitable outcome that could theoretically happen even in a country with an extremely strict immigration policy.
But no - this is a man who has already jumped through all the hoops of applying for Irish citizenship, was thoroughly vetted, and still went on to commit a shocking and completely unprovoked crime like this. If a nutcase like this can pass the vetting process, clearly it's not stringent enough.
I don't know. I certainly believe that second-generation immigrants to Ireland can be fully assimilated (I've met plenty of women of Chinese descent who sound more Irish than I do; I work with a woman who has at least one Algerian parent and didn't clock her as anything other than Irish until she told me, although her name was a dead giveaway in retrospect; I once dated a Polish girl who sounded Irish from top to bottom), but I have no firsthand experience of a first-generation immigrant fully assimilating.
I should switch news providers, because that's still much better than what I saw (at 1:05): "Police say false information quickly spread through social media, that the attacker might have been a foreigner, and that appeared to fuel the frenzy of destruction that followed". Their earlier article isn't much better: "The violence began after rumours circulated that a foreign national was responsible for an attack outside a Dublin school on Thursday afternoon. Authorities haven't disclosed the suspect's nationality."
I couldn't find any followup articles offering more information.
From Europe, Poland to be more specific.
For me, "American Midwest family with Germanic ancestry they don't even know about is more German than they will ever be" is an absolutely laughable position.
No, just because you can trace some Polish ancestry does not make you a Pole. You have no genetic memory, etc. You are welcome in my country, but if you start telling me in English (not knowing any Polish and having a meme-level understanding of Polish culture) that you are Polish, then I am surely not going to agree with you.
Just because your grandfather could say 10 words in Polish, 5 of them curses, does not make you Polish. If all your grandfathers and grandmothers were Polish but you lost the language and lost the culture, that makes you white, not Polish. (Though if someone wants to recover that, they are entirely welcome to do so, and I would be happy to help if I encountered such a person.)
I have quite a high bar for what I would expect before I would consider someone to be Polish. But at least in theory it seems possible to me for someone green/yellow/black/purple/German to become Polish. And there have been cases of this happening.
And yes, especially for our resident SSman: many people with Jewish ancestry were Poles. Some of them were Poles practising the Jewish religion; some were distinguishable only by genealogy and surname (while some failed to assimilate or kept a completely distinct cultural identity). Though nowadays it is extremely rare, as the Germans murdered millions of Poles and Jews after invading, and while under communist occupation many were kicked out, or preferred to escape from the communist paradise.
And we had and have Poles with German, Belarusian, Russian, and Ukrainian ancestry. Maybe if you looked really hard you would find some Poles with other skin colours. (Note: I can easily find some prominent people with Polish citizenship who are yellow/black; this does not make them Poles.)
I'm just curious: if you, a Pole, went to China and learned the language and such, would you say you are Chinese if you had 0% Chinese admixture? Would you say you were Bantu if you took up residence in West Africa?
Would you agree the thoroughly Americanized Chinese family, with n-th generation children that can't speak a lick of Mandarin, are more Chinese still than some White person who immigrates there and learns the language?
Germany, Ireland, Finland, and the UK have very different cultures about assimilation. Famously France thinks it can assimilate Africans; German identity seems a bit more racially-exclusivist in comparison.
The word 'can' is doing a lot of heavy lifting in your argument.
One of the elements that cemented my current opinion on such matters - among many - was talking with a friend of mine. Ethnically Italian, his family has been here for over a century.
And yet, despite this, there's parts of his family the rest know damn well to stay away from. Why? Because they're the ones connected to organized crime. The mafia.
A century of assimilation, and they're still culturally and ethnically distinct, with problems from the 'old world' still present. Hell, there's a sizable minority that have dual citizenship!
And this is with Italians. I grew up around a lot of them. Hell, my father's godparents were damn near pure-blooded Italian!
And you're going to sit here, and suggest, straight to my face, that other ethnic groups are going to be better than them?
No. You import the people, you import the culture, for good and for ill. So stop importing them.
Obviously there are different tiers of the anti-immigration position that include various forms of nativism and not-nativism.
What is likely, though, is that most Western Europeans would probably have quietly acquiesced to mass immigration and demographic change without any major drama if the migrants had been, say, all Vietnamese or Filipino. Not because the nativist position would have been ‘disproven’, but because there would be none of these extreme staccato incidents of terrorist violence, things like Rotherham, Charlie Hebdo etc that draw a great deal of public attention.
In the US the majority of the public are still relatively torn on mass immigration, and the large scale deportation of most legal immigrants, let alone actually stripping naturalised migrants of citizenship, is an extreme fringe position. In Canada the public only really turned after they started importing pretty much the entirety of the Punjab at like 2% of the whole population per year.
There isn’t a huge (foreign) religion/race-neutral nativist constituency in most Western countries, meaning the population that wants everyone gone regardless of who they are, how they act and what they believe. Even in Iberia where there’s been huge legal immigration (often of people who are rather far from being of pure euro descent) from Latin America almost all anti-immigrant hostility is directed towards migrants from the Islamic world.
Counterpoint: there seems to be a massive backlash to migration in Canada from Indian immigrants, and that is not caused by crime or terrorism by Indian migrants.
What's happening is that the European groups, too, take the political playbook from US Conservatives: "We're not racist (that would be evil!), we just think radical Islam is bad, mmkaay." But that is downstream of the political pressures of liberal hegemony; there's a practical reason it centers on a religious critique of migration rather than a racial one.
Remigration strikes a more nativist chord than it does a purely anti-Islamic one.
I think the counterfactual where the majority of recent non-European migrants to Europe aren’t from the Islamic world is one in which anti-immigration sentiment is far lower. As far as Canada goes, they did the equivalent of the US importing like 7m Indians a year for several years in a row, which is very unusual even by Western mass immigration standards.
Before 2020, Canadians didn’t seem to care much about immigration, Trudeau won a landslide, and there had been mass immigration of Chinese and Indians for at least 25 years.
Depending on whether you think they are Europeans or not, you have a non-counterfactual point of comparison: Spain has had tons of immigration from Latin America, and while there has obviously been some backlash, it doesn't seem to be as strong as in the rest of Europe.
Latin Americans are already Spanish-speaking Catholics, so you'd expect them to be more culturally similar, and more willing to integrate, than migrants who aren't.
Counter counterpoint- they're Indian. Mexican and Ukrainian and Vietnamese immigrants would have gotten away with it.
Minor counter counter counterpoint
There is some minor hostility to Ukrainians in Poland. But it is far from widespread and that is after massive shock migration due to war.
Though if 4% of the country's population were imported from Syria/Libya/Turkey/Nigeria/Russia/China/etc. within months, the reaction would be much worse than "a welcome, then minor hostility months/years later."
Canada is much more welcoming of outsiders than Poland, though.
What precisely makes a psychiatrist barely a doctor?
You really have to be kidding. The Right Wing argument is that he does not belong in Europe, no matter if he's a doctor or what he tweets: not in a box, not with a fox, not here or there, not anywhere in Europe. That argument can and should be used as a cudgel by the right wing, at least the Right Wing who acknowledges that this is about race and not merely about religion. The people who can't use this as a cudgel are those who pretend that this is just about Islam, and that mass Arab migration to Europe would be fine if they just weren't Muslim. Is that an argument you accept, Hoffmeister?
"Arabs don't belong in Europe." "But this Arab who slaughtered a bunch of Europeans tweeted pro-Israel stuff!" How could you think that's responsive at all to the argument?
How does a refugee slaughtering a bunch of people in a Christmas market not validate the anti-refugee political perspective? Because the refugee wasn't Muslim? That is just ridiculous.
Keith Woods is correct, and the Right Wing who pretends that mass migration from the third world is only a problem because of religious incompatibility do not form the ranks of the DR, and people like Woods have long made the argument that it's about race and not about religion.
True, but screening people becomes somewhat easier if you limit the number of people coming in. If you’re taking in only about 10,000 you can be pretty sure that the background checks will show criminality, drug use, lack of language skills, troubling political or religious beliefs, and so on. If it’s 100,000 it’s plausible to find things that would show up in a very quick background check, but more will slip by. At 1,000,000 a year, you’ll barely have any idea who these people are or why they’re coming.
And on the tail end, having fewer immigrants means better assimilation, because the newcomers must learn the language and culture due to the lack of an ethnic enclave where they don’t have to adapt to either. If a million Swedes moved to the USA, they’d form a Swedish enclave in which Swedish is spoken, people go to the local Swedish Lutheran Church, they all eat Swedish food (Swedish pancakes and meatballs, I assume), and so on. This happened in the past (https://en.wikipedia.org/wiki/Lindsborg,_Kansas) and obviously with groups like Orthodox Jews in New York who still speak Yiddish. Sooner or later you are taking in so many people from a given background so fast that you simply cannot get them integrated and assimilated at all.
There's a difference between not wanting "low human capital people" (as the kids say these days) to come lower the average quality of life in your town/city/country, and, say, proposing grinding up all the Congolese into Soylent Green while telling them it's nothing personal. No microscope required, it's quite visible with the naked eye. In fact you'd almost have to be trying not to see it.
The difference is one of degree, not kind. The former is less bad than the latter, but they both come from the same malignant well: the belief that the well-being of Mtumbe Ngoube from Kinshasa or Fulan al-Fulani from Karbala matters less than that of John Doe from Kansas City or Max Mustermann from Koln.
There is a story told of Churchill, that he asked some lady whether she would sleep with him for a million pounds, and when she allowed that she might, offered her five pounds instead. To her outraged "What kind of woman do you think I am?", he replied: "Madam, we've already established what you are. Now we are merely haggling over the price."
It’s a fun story but it’s actually pretty dumb. There is probably an amount of money that even Elon Musk would suck a dick for, and even if we’re talking about people who don’t desire money, there’s every chance there’s something (health, physical fitness, the resurrection of a loved one, the chance to have children, world peace, a tripling of their beloved dog’s lifespan) they would degrade themselves to have. Churchill himself was surely no different.
There are some things which most people will not do for money, or anything else short of a credible threat of death or hideous pain. In cultures influenced by Abrahamic religion, a straight man sucking dick is one of them. It isn't just a sin - it's an abomination. (Leviticus, passim)
Wanting to kill people and not wanting them near you is a difference in kind, or there is no distinction between differences in kind and differences in degree.
You've butchered the story -- the woman has to actually agree for it to work. But even in the valid version, there's a difference between being a whore and being a cheap whore.
I mean, yes. But there are always practical issues. We simply don’t have room for everyone who would want to come, for a start. Our society simply can’t handle importing 20% of our population annually, for example; I’m not convinced we could successfully import 3% of our current population annually with no issues. Add in issues of culture (Palestinians have a much different culture than Swedes or Chinese), language, individual immigrants’ education levels, and the amount of money needed to get people who arrive with nothing a home and food while they look for work. This is all assuming no criminal, terrorist or similar background. It’s not just “do I directionally want immigrants here?”; it’s a question of getting the thing done without breaking the country. Importing all of Gaza into Nebraska isn’t feasible — we don’t have the resources to successfully integrate that number of people into America. At best, you end up turning Nebraska into Gaza, lock, stock, and barrel. I don’t think anyone looking at that situation would say I want to turn Palestinians into Soylent. It’s just that as a practical matter, we can’t actually do that.
No, it's not that simple. Even granting your premise arguendo, they are still human beings, made according to the Imago Dei.
All human beings have equal dignity. It is no lesser tragedy for Nigerians or Congolese to be massacred than for Norwegians or Irishmen.
That being said, the distribution of natural gifts among different groups is not equal, and it must be admitted that Europeans get the better split compared to Bantus or Arabs. It is perfectly reasonable to oppose immigration from the Congo or Iraq on the basis that these people will lower the average abilities of an individual in your country, and this is not based in hatred of Congolese or Iraqis.
No, but it is based in indifference, and with regard to the horrors visited upon many, many, innocent people throughout history, the space between 'indifference' and 'hatred' would take an electron microscope to measure.
When an elephant stands on the tail of a mouse, it is no solace to the mouse that you do not hate him but are indifferent to him.
Opposition to immigration on the basis of talent distribution is in no way indifference, indifference would be opposing immigration because fuck em.
Sometimes (often) someone really wants to post about how much they despise blacks/Arabs/Indians/Jews/women/gays/whoever.
We have spent a lot of time trying to enforce the rules in a way that suits the community's desire for maximal freedom of expression without descending into unfiltered sneering, snarling, race-baiting, and lazy booing of whichever group someone happens to hate.
You can talk about how blacks statistically commit more crimes, you can talk about the prevalence of Indian scam rings, and you can even bring HBD into it to propose your theory of why this is genetic. You can argue that immigration is bad, and you can say you want zero immigrants and 100% racially pure ethnostates. Those sorts of arguments are allowed and have been made.
"Arabs, blacks or are (sic) lazier and more violent" is not an argument. It's just a rank assertion about your outgroup.
"No immigration of such should be permitted."
Fine. Your opinion, you can say this.
"Indians lie and cheat more than whites. It's that simple."
This is just more lazy boo-outgrouping. Do Indians lie and cheat more than whites? Do they really? As a percentage of the total population of liars and cheaters? As a part of Indian culture? As a genetic predisposition? I mean, you could conceivably gesture in the direction of some kind of argument, but you don't even try, you just drop a bunch of "brown people bad" turds on the floor.
People with views very like yours, and probably even stronger than yours, are regular posters here and have figured out that we give plenty of latitude for culture warring about your least favorite ethnic groups and "race realism" HBD posting, so long as you can be civil and minimally inflammatory about it. By that we mean not presuming that you're in a white nationalist clubhouse where, if any Arabs, blacks, or Indians happened to be sitting next to you, you could just pop off about what a bunch of lazy criminal liars they all are.
All of that throat-clearing is because I know people will whine that we're silencing "badthink" or trying to enforce some kind of consensus on not hurting feelings, despite the plentiful, years-long evidence to the contrary, and in the vain hope that explaining why we act on posts such as this will prove educational and illustrative to other posters who want to assert similar sentiments in a less shitty way.
Factoring into this also is that your record, in particular, is one of the worst on the Motte. I count eight warnings and three tempbans, all for this sort of casual slinging of lazy insults at whichever group gripes your goiters at the moment.
You're just a shitty, low-effort poster who contributes nothing of value. I can't honestly remember you ever posting anything interesting, insightful, or getting even a single AAQC nomination, or really, anything that wasn't... stuff like this, although usually not as bad, hence your longevity here despite being a constant low-level stink and not much more.
Because your last ban was for a week and you were told then we would start escalating, I am banning you for a month, and not permabanning you, despite my near-certainty that that's in the future.
The ban is justified. No argument there. This though -
This is a stupid thing to say, and unworthy of someone whose job here is to enforce and demonstrate correct behaviour. I'd say you went way overboard, although if this is the new level of discourse around here I would be happy to say more.
You know, that was pretty harsh and I probably should have edited that last part more heavily.
That said, I meant every word, and in the past, curt mod comments like "Don't boo your outgroup like this" get people demanding to know why we're enforcing ideological conformity and why someone got banned just for Telling The Truth. @No_one is a (not quite uniquely, but in a very small group) bad poster who wants to use the Motte as his platform to talk about how much he hates other people. But you're right, it wasn't the best way to express it.
Your logic is tortured and deliberately ignores the point.
Yes, the Ellis islanders brought violence and crime, and if we were smart we'd recognize that and avoid the mistakes of the past.
So if a tourist from the US does something similar next week should the EU ban all American tourists? There's more of us entering Europe every year than the entire Arab population of the continent.
I live in a small town, and somehow found myself sharing a rented office with 2 Californians. Quite frankly my opinion on the matter is: why wait?
Are you sure this is making the argument you want to make, given that precisely zero of these attacks were committed by American tourists, despite such high traffic?
Do American tourists consume taxes on net, disproportionately commit sexual and violent crime, turn neighborhoods into no-go zones, and leave behind another generation of themselves to do largely the same? Or do they mostly just stimulate and support local economies with their relatively large disposable incomes and bounce?
SEcUreSignalS—coincidence? I think not. I'd read the shit out of an anti-Arab and African migration book written in this style.
I'm not familiar with Keith Woods, but my sense is that the part I bolded isn't true. Granted the dissident right (I presume that's the DR) is a nebulous coalition, but I think most of them are not HBD-pilled, or at least believe the religion aspect is more important than the racial one. Change that from a descriptive "do not" to a prescriptive "should not" and I'd agree.
Do you like men of Islam?
I do not like them, Sam-I-am.
I do not like men of Islam.
Would you like them here or there?
I do not like them here or there.
I do not like them anywhere.
I do not like them, Sam-I-am.
I do not like men of Islam.
Would you like them in Berlin?
Even shorn of their foreskin?
I would not like them in Berlin.
I care not if they have foreskin.
I do not like them here or there.
I do not like them anywhere.
I do not like them, Sam-I-am.
I do not like men of Islam.
Would you like them in a mosque?
Or standing 'round their big black box?
Not in a mosque. Not round a box.
Not in Berlin. Without foreskin.
I do not like them here or there.
I do not like them anywhere.
I do not like them, Sam-I-am.
I do not like men of Islam.
Would you? could you? in a car?
Let them in - here they are.
I would not, could not, in a car.
You may like them. You will see.
Living in our land, rent-free.
I cannot stand them here rent-free.
Nor in a car! You let me be.
I do not like them in a mosque.
I do not like them 'round a box.
I do not like them in Berlin.
I care not if they have foreskin.
I do not like them here or there.
I do not like them anywhere.
I do not like them, Sam-I-am.
I do not like men of Islam.
A plane! A train! A plane! A train!
Could you, would you on a train?
Not on plane! not on train!
Not in a car! Sam! Let me be!
I do not like them in a mosque.
I do not like them 'round a box.
I do not like them in Berlin.
I care not if they have foreskin.
I do not like them here or there.
I do not like them anywhere.
I do not like them, Sam-I-am.
I do not like men of Islam.
I mean, the right wing normie answer is ‘people like that don’t belong in Europe’. And when pressed on ‘people like that’ they’ll eventually come down with ‘Arabs’ or ‘children of Muslims’.
As far as what his motivations are, I’ll point to A) him almost certainly having screws loose(see the mass murder, and also being a psychiatrist) and B) it’s entirely possible he doesn’t make much distinction between Christianity and Islam and just hates theists. Stranger things have happened. There was a mass shooter in the US who seemed to have new atheist motivations as well; it’s not an everyday occurrence but it’s happened at least once.
The Muslim angle is overrated. People wouldn't have been happier if the migrants were Christians from Ethiopia. "Muslim" at this point means non-Asian, non-white immigrant; Islam is just an easy and relatively politically correct term to use. The basic premise holds regardless of whether he was Muslim, Christian, atheist, or Zoroastrian: MENA migration doesn't work in Europe.
The same people would not have been any happier to take South African Zulus but would have gladly taken South Africans of a Boer persuasion. Trying to own AfD by pointing out the specifics of a Saudi's religious beliefs isn't going to work because the AfD voters don't want mass immigration from MENA regardless of religion.
Yes they would have. They wouldn't have been perfectly happy but they certainly would have been happier. Islam obviously isn't the only issue but it is a fairly major one.
Indonesia is the largest muslim country. They would probably be less disliked as immigrants than christians from Zimbabwe.
Religion isn't the only thing that matters but it is important. Would people have been happier if Syrian Maronites or Sunnis came when the civil war started? Or what about Christian Zimbabweans or Muslim Mozambicans?
There are functionally no Maronites in Syria. There were, however, quite a few Melkites, a different tradition.
Eh, I think Maronite and Coptic immigration is mostly pretty uncontroversial in the societies to which they migrate.
These Christmas markets are not theist occurrences in any meaningful way, and as he had been in Germany a long time he would know it. In fact, many of them have been renamed to Winter markets to be more inclusive (to the disdain of the defenders of the Christian Occident), and there is nothing specifically Christian about drinking Gluehwein, eating all kinds of food from food booths and shopping for overpriced small presents in the other booths. It would be like going after Coca-Cola for being Christian given that the central figure of Christmas is Santa Claus and ad spots by Coke have shaped the public image of Santa.
I mean, I think it's culturally Christian in the sense that Christmas is literally Christ's Mass in the origin of the name. And almost all the trappings can ultimately be traced back to Christian stories and practices. Santa is a repackaged St. Nicholas of Myra (who, the story goes, actually punched Arius in the face at the Council of Nicaea), who was generous with the poor. The Christmas tree is seen as a symbol of Christmas, and the star comes from the magi seeing the Bethlehem star. It's a holiday with a lot of secular trappings, but it is based in Christianity and in no other religion. I don't think he could have plausibly mistaken it for a Jewish or Muslim thing, as those groups don't celebrate Christmas at all.
I think that European Christmas, like Easter, is actually a syncretism of early Christian traditions and pagan ones. The barn with figurines of Joseph and Mary and the magi is obviously based on the Bible, while the date (winter solstice) and the Christmas tree (as a symbol of something visibly alive in the depth of winter) seem pagan-ish to me. Likewise Easter: remembering the crucifixion and supposed resurrection of Jesus is one thing, but the rabbits and eggs seem pagan to me -- after all, Jesus died for mankind's sins, not to restore fertility to the natural world.
My point is though that while of course being based on vaguely Christian traditions, this is way removed from the reality of Christmas markets.
A jihadist terrorist targeting German Winter/Weihnachts markets specifically to strike a blow against Christianity feels roughly like a Persian terrorist targeting the Winter Olympics in Salt Lake City to strike a blow against the Athenian League -- I could see their path of reasoning, but still think that either terrorist would have some fundamental misconceptions about their enemy.
The date of Christmas is actually based on philosophical beliefs about great men dying on the same day they were conceived, plus math (Good Friday actually has a known date if you take the Bible completely literally: March 25, 33 AD). And Christmas trees probably have a real origin of "it's one of the few things that looks nice that time of year, and Christmas is a major religious holiday, so we want decorations to look nice."
Rabbits might be an unrelated folk tradition(but it also might just be seasonal associations- right around Easter is when you start seeing rabbits in Europe), but eggs come from fasting rules in the medieval church. Easter was a huge feast in the throw-a-giant-party sense in the days when Lenten penance was quite a bit more rigorous than it is today, and foods which were forbidden during lent but otherwise part of the diet were a big part of that. Eggs are one example- I believe that in Eastern Orthodox cultures which forbid dairy during lent, butter or cheese plays a role in Easter celebrations. Of course eggs are also easy to decorate and play fun games with.
There was a similar attack like this in Germany a few years ago. An Iranian Muslim born in Germany became radicalized about mass immigration into Germany and committed a mass shooting targeting immigrants. It’s barely remembered now because the story got dropped like a hot potato, since it would have been extremely difficult for either side of the culture war to make hay out of it.
Toxoplasma quotient counterintuitively low.
What's the opposite of a scissor statement?
A null statement?
staple statement
Opposite along a different axis.
A scissor statement is seized-upon by multiple actors with conflicting interpretations.
A statement like "atheist convert from Islam hates Islam" is ignored by all actors, as there are no interpretations of it that are convenient for their positions.
The terrorist claimed in December 2023 that he would make the German nation pay the price for the crimes committed by its government against Saudi refugees. He also said, in the post Woods quoted, that he would take revenge even if it cost him his life. Therefore this looks like a leftist terrorist attack by a Saudi Arab who sympathizes so much with Saudi Arabians that he wants to harm the nation of Germany.
https://x.com/KeithWoodsYT/status/1870428721632481719?t=TeBZdhjRJUJdWKMBMS-5nQ&%3Bs=19
So it would be exactly the opposite of the anti-immigration narrative, and really this should have been the more likely theory, rather than him playing 5-dimensional chess.
But why couldn’t the AfD thing be the red herring itself? The entire thing makes literally zero sense. He’s an Arab Muslim committing a terrorist act because he doesn’t believe that Arab Muslims should be in Germany because they’ll commit terrorist acts, which he then did. The more plausible explanation is he’s an Islamic Jihadist who is either being misidentified as a supporter of AfD policies, or he was using that as a front to hide behind.
How many cases have there been where an Islamic jihadist commits a terrorist attack and pretends to be something other than an Islamist while doing so? Being open that you are, in fact, doing jihad has always been one of the points of the jihadists.
He wasn't tweeting while he was attacking. Those posts came beforehand. It might well have been a sort of cover story so those who are looking for jihadists don't look too hard at him. And Mohamed Atta was drinking late into the night before 9/11, despite alcohol being forbidden by Islam.
That just doesn't add up. Why would he spend years carefully constructing a cover story for this sort of attack, which seems highly impulsive? If it's important for the authorities not to find him, then why conduct the one form of attack almost guaranteed to end with him either dead or hauled off to hospital and then interrogation? Isn't this a rather convoluted explanation compared to him simply being pretty much who he claims he is?
That's just being a sinner, not a cover story. Insofar as I've understood he believed martyrdom would wipe the record clean, so to say.
Sarah Adams is reporting that Hamza bin Laden is now commander of an Islamic army that brings AQ, ISIS, and other groups under one command, accomplished through him marrying into influential Islamist families.
She also reports that he and they are now less concerned with getting credit for terror and more concerned with opsec and covert tactics.
She seems credible.
Or, uh, hear me out here, but the motivations of someone who committed mass murder don’t have to make sense, because he’s batshit insane.
He hated Germans and threatened them repeatedly. If it's an act it was a years long performance.
Did he hate Germans? Or did he hate the German government? I haven’t seen any evidence of the former, although I’d be perfectly happy to be confronted with some.
Murdering them in a terrorist attack targeting them seems to be at least weak evidence.
Not necessarily! One of his messages that I did see said something like, “The only thing the German government respects is violence. I’m going to have to do something violent to get them to respect me.” It’s entirely possible that he was merely indifferent to the suffering of the people he maimed and killed; that the purpose of the attack was not to make them suffer, but rather to have the moment of their suffering become a political flashpoint.
(1) Intentionally killing random Germans to punish the Germans in the German government counts, to me, as hating Germans.
(2) Even if you do not agree with (1), posting in public how you plan to murder Germans, finding justifications, and then murdering a bunch of Germans is at least weak evidence that he in fact hated Germans, even if he never tweeted about it outright.
He attacked a Christmas market; does it matter if he secretly supports AfD? If he hadn't been brought to Germany, he wouldn't have committed this act. It really is easy for the right to portray this in favorable terms. It also gives leftists the opportunity to frame it in a manner that tries to deflect from it, of course. I have seen both.
According to Keith Woods, he was a Zionist leftist who wanted more Muslim migration and who committed this act because Germany is not doing enough to give asylum to people fleeing Saudi Arabia.
If that is correct, then this is a leftist, pro-migration, but not Islamic, act of terrorism. Exactly the opposite of what is claimed below. If, of course, it is true.
Edit: Woods quotes the terrorist in 2023 saying that he will make the German nation pay the price for the crimes committed by the government against Saudi refugees. He also says that he will take revenge even if it costs him his life. https://x.com/KeithWoodsYT/status/1870428721632481719?t=TeBZdhjRJUJdWKMBMS-5nQ&%3Bs=19
It is certainly correct to limit people like this guy from coming to one's country.
Moreover, it seems almost everyone forgets that the biggest genocide committed by Muslims against Christians was not committed by the biggest Muslim fanatics, even though Islamism was an element of it. I am talking about the genocide of Christian Greeks, Armenians, and Assyrians by the Muslim populations of Turkey, of which secular Kemalists (who also called for jihad) were a core component, as was Turkish nationalism. The ethnic resentments of people like this, and not just any human-capital problems, are something that has been far too understated. A secular Muslim might still carry ethnic resentments related to his homeland, or even to the general Muslim population, just like other groups who aren't particularly religious but are still hostile foreigners.
Being pro-Israel isn't something that makes someone right wing. Much of the mainstream that pretends to be right wing is not right wing, especially in a country like Germany, and is in fact insufficiently pro its own people to qualify as anything but extremist against its own civilization. Now, technically it is possible to be a European nationalist who is unwisely, fanatically Zionist. But being pro-Israel is quite compatible with being a leftist extremist. Indeed, most Jews who are organized politically and make the biggest mark through their influence (such as rich donors, the most prominent activists, and powerful figures) manage to combine Zionism with Jewish nationalism and hostility toward countries like Germany. We also see non-Jews with this combo. At minimum, it is insufficient to stop someone from being a leftist.
In this forum we have had various people argue for the very far-left ideology that combines Jewish nationalism with arguing for the extinction of nations like Germany, because the German nation, or white nations in general, continuing to exist is somehow a threat to Jewish identity and Jews. That is a ridiculously hostile agenda against European nations, grounded in illegitimate, excessive grievances and crybullying taken to the extreme, making onerous demands for other nations' destruction. This guy's Zionism was not an obstacle to his hatred of the German nation and his one-sided, ridiculous demands in favor of Saudi Arabian immigrants. Therefore you are trying to mislead here by claiming that his Zionism precludes leftist extremism, when Israeli nationalism, like his Saudi Arabian sympathies, is perfectly compatible with an ideology of grievance against European nations.
Ultimately, in a European context, I would put Zionism more along the left than the right or center, because of the association of the left with prioritizing other nations and of the right with native nationalism. Otherwise we get what happened in practice: leftism delivered by fake conservatives and right-wingers calling themselves something different while giving the same.
The incompatibility of Zionist movements with self-respecting European nations is there when we consider how much Zionism is associated with one-sided demands, and the fact that the prominent Zionist organizations are hostile to European nations, including their right of preservation, national sovereignty, self-determination, and independent foreign policy. The combination of Zionists who want people to be loyal to Israel rather than to their own nation, and to give preferential treatment to Israel and Jews that isn't provided to one's own nation, including through censorship and cancel culture, is such a dominant element of Zionist influence that it can't be disregarded.
Pro-Muslim anti-Zionism would also fit more within the left. Being anti-Zionist is likewise insufficient to stop someone from being a leftist in a European context, of course, since it is possible to hold other grudges, follow the anti-European grievance ideology, and even blame Israeli policies on European countries and wish for revenge, including revenge by migration replacement.
The Saudi Arabian refugees would be Muslims, no? If not, I would grant that he was pro migration of Arabs but not of Muslims. But I am not sure that the Saudi Arabian women he wishes to be refugees would include no Muslims. Either way, that doesn't change his pro-foreign-identity, anti-German sentiments.
As for him being a leftist: as we have seen, Zionism is not a get-out-of-jail-free card for leftist extremism, and in fact some of the worst far-left extremists, especially the more establishment ones, make their arguments from a Jewish-nationalist perspective and are Zionists (though in Germany even the antifa is pro-Israel, IIRC). The ADL is the organization most representative of this, but really it isn't rare whatsoever for people to combine Zionism with anti-European hostility and crybullying oppression narratives.
The guy is pro-migration, and he hates Germany for not doing enough for foreigners and refugees. That fits well enough within the left. You can't but call him a leftist when he is upset about that and is motivated by it to commit a terrorist attack. If a German nationalist hated foreigners and committed an attack on foreign groups out of that hatred, countless people would label him a right-winger or far-right. The left should be identified with hatred of European nations by foreigners and with pro-migration sentiments, because that accurately captures a sufficiently pervasive characteristic of its ideology, even if most leftists would prefer it not manifest in the way it did with this attack.
You are trying to square a circle here when claiming you can't call him a leftist, and you are making a special pleading that far fewer would dare if some figure pattern-matched as strongly for a right-wing figure as this guy does for a far-left foreign terrorist. The left should act honorably and accept its problem of anti-European extremism, and accept how such people, with such sentiments, acting against Europeans, fit within the left-wing ideological perspective. They ought to moderate and abandon the ideology that disregards the interests and survival of European nations, so that the hostility it helps cultivate is no longer such a common characteristic of the left-wing agenda, with predictable results not only here but in other cases, like the Pakistani rape gangs. Not to mention that he wouldn't be in Germany in the first place without the influence of left-wing ideology. And remember: the mainstream fake right of Merkel, which has no problem aligning with the other parts of the left and trying to stop any deviation from the left's anti-native dogma, is not blameless or outside the problem of anti-German, pro-foreign extremism.
Was this man actively “treating” patients while saying all this?
Good Lord.
My position is not quite that, but not too far from it. If we take at face value the story that this is about his anti-Islam grievances, it seems to me an example of how importing people from places with ethnic and religious conflicts that Westerners don't even understand results in importing their conflicts along with them. See also the disputes over Sikh separatist violence that have stirred up India-Canada tensions. I don't want Muslim immigrants from Saudi Arabia, and I don't want anti-Muslim immigrants from Saudi Arabia in my country. I don't want to think about Saudi Islam any more than strictly necessary for international relations. There are enough domestic tensions without adding Saudi conflicts to Germany.
Fwiw I thought this post was fine (upvoted)
I'm hearing that he was apparently angry about not enough being done for (presumably anti-Islamic) refugees:
https://x.com/banjawarn/status/1870393623210078601/photo/1
Too soon to be sure of course, there's no community notes or anything on this post. It does seem plausible, he was complaining about Sweden expelling this refugee.
https://x.com/DrTalebJawad
Other people have been going on about him retweeting Israeli military posts, apparently he has Zionist sympathies. There's truly something for everyone with this guy. He seems like a nut.
I'm going to have to write this story someday.
Yes, excellent analysis. Thanks for laying out exactly what I was thinking in such agreeable prose.
Well, wignats will have a field day. (Maybe that was his objective?)
Certainly the more moderate right will find it uncomfortable. An educated, apostate Arab who's vehemently against the Islamification of Europe -- well, if Germany is anything like the USA, he'd be held up as one of the "good ones" and a solid ally. (The moderate right is of course desperate to latch onto any token PoC so they can assert that they're not racist!)
I don't think you can quite square this circle without accepting an ethnonationalist framing, so I expect this to be swept under the rug. It looks bad for Arabs, obviously bad for the pro-immigration left, bad for the moderate right; the only people who can point to this incident as confirming their priors are the ones saying these immigrants are fundamentally incompatible with Western civilization by virtue of their ethnicity, regardless of their professed views.
Assuming this wasn't some 4D double layered false flag: https://x.com/stillgray/status/1870306075695546383 (this reads like premium copium to me, but, I guess it's not impossible.)
I guess we'll see if further details emerge. But to me this looks like someone utterly deranged, with no coherent plan at all. Maybe he had some recent health issue like Luigi, or maybe he cracked from the stress of working as a doctor for so long.
It seems the attacker was only a critic of Muslim immigration on unusually principled anti-religious grounds. Here he is in 2019 (fedora and all -- wow!) discussing his website for aiding the asylum claims from secular Gulf refugees. More recently, he accused the German state of conspiring against atheist Saudi asylees and threatened to fight and kill over this:
Rather than the hard right, it will be assimilationist centrists and center-rightists, who want to make the problem of immigration about Islam, who won't have much to milk out of this.
It seems to me that there are three distinct options...
The first (and in my opinion most likely) option is that the media is simply lying about the perp being "anti-Muslim" or "aligned with the alt-right". They are desperate to deflect blame and are acting accordingly. You don't really believe that they are above fabricating evidence, do you?
The second is that this is all "Taqiyya", and the perp was a genuine Jihadi.
The third is that there is a distinct subset of the extremely online ("woke") right who, no matter how much they might claim to hate immigrants and people of color, will always hate the "normies", the "Christians", and the "grill-pilled" more, because the former are the far-group and the latter are the out-group.
Also what is this " car rammed into people" bullshit, the car didn’t do anything, the driver did.
You have hit on a pet peeve of mine, the incessant barking of "Taqiyya! Taqiyya! Taqiyya!" by right-wingers on Twitter who learned the term from Wikipedia and think they've stumbled onto the secret Muslim master plan.
Taqiyya refers, generally, to concealing your beliefs in the face of oppression or imminent danger. E.g., Muslims who were forced to "convert" to Christianity on pain of death were still considered good Muslims if they pretended to convert to save themselves. There are also some esoteric Islamic beliefs that some sects consider religious "mysteries" that should be hidden from unbelievers, even if it means lying about them. And various other corner cases covered in the sort of legalistic parsing of the Quran and hadiths that Muslims love to do. Islam, like most religions with a long legalistic history, has been divided into a multitude of sects and schools of thought, so like Christians and Jews (and non-Abrahamic faiths as well), you can find different branches who declare other branches flatly wrong or even heretical, and come up with all sorts of bizarre edge cases under which this or that practice is "allowed."
So far as I know, there are no mainstream Islamic sects (or even fringe groups, from what I have been able to find) that preach "Taqiyya" meaning "Pretend to be a non-Muslim to infiltrate a host society as a sleeper agent." I have never heard of even jihadists advocating that Muslims pretend to be atheists or Christians to sneak into the West so they can attack infidels. I suppose some of them might approve of this, but that sort of long game (spend 10 years pretending to be an anti-Muslim atheist and harassing people on social media?) would be hard to pull off for a professional spy under deep cover.
The more likely explanation is that this guy has always been crazy and had violent and vengeful impulses, and something pushed him over the edge. His motives seem to be a mix of anti-Saudi, anti-Islam, anti-German, and anti-West, in a way that anti-Muslims would love to condense down to "Deep cover jihadist practicing taqiyya" but doesn't really seem to match the facts.
It strikes me that a lot of terrorists/mass shooters lately have been a sort of ideological Rorschach blob. Like Luigi Mangione, whom both rightists and leftists are still assiduously trying to assign to the other tribe.
I'm somewhat more sympathetic to the more general anti-immigrationist argument that you can take the fanatic out of Saudi but you can't take the Saudi out of fanaticism (he seems to have retained a very jihadist psychology even if he stopped being a Muslim), but "Taqiyya" seems to have become a lazy, infinitely generalizable dismissal of anything an Arab says because, you know, they're all lying double-agents practicing Taqiyya to fool the kafir.
Something being annoying or inconvenient doesn't make it untrue. The use of false flags and lying about one's intentions/beliefs are something that radical Sunni groups have historically engaged in and endorsed.
My reply to there being no mainstream Islamic sects that you know of who endorse that is "No shit, Sherlock." We are not talking about mainstream Muslims; we are talking about Daesh, and the sort of guy who would Abracadabra Snackbar his way into a Christmas market.
That said, I still think that the most likely explanation is that the media is simply lying. And even if they are not lying, his alleged views are not some weird "mix"; they are, to all appearances, fairly typical amongst the more woke elements of the online right. Herr Doktor is simply acting on a spicier version of the sentiment occasionally expressed by multiple users here, i.e. that "the normies" are contemptible/subhuman and deserve to be punished.
All radical groups engage in deception, concealment, and false flags. The question is whether this is something specific to Islam, which I maintain it is not, contrary to the people yammering about "Taqiyya" as if they have discovered Islam's deep dark secret. Yes, Muslims, like Christians and Jews, believe it's okay to lie to unbelievers who are persecuting them, or to protect others. That's it; it's not a general practice of lying to unbelievers while pretending you aren't planning to kill them.
And my reply to this is what I said above: I don't know if any Daesh clerics have issued some tortured interpretation of "Taqiyya" to convince their agents to go deep cover as infidels, but if so, I've never heard of it. And if the people I am complaining about were only claiming that jihadists are violent fanatics who twist their religion to justify terrorism, I would say "No shit, Sherlock" right back at you. My point is that it's very common to see people claiming, essentially, that all Muslims (or ex-Muslims) are (or should be assumed to be) lying about their intentions. I saw this quite explicitly in a bunch of Twitter threads about Taleb al-Abdulmohsen. ("No such thing as an ex-Muslim," etc.) You proposed it as an explanation.
Maybe, though if there's evidence that he's actually a jihadist and the media is covering it up, that would be pretty dumb, since years of his Twitter rantings are already being dug up.
I still think "woke right" is a pretty incoherent concept, but to the degree it exists, I think it fits exactly what I mean by "weird mix": it's a stew of assorted resentments and grudges that don't neatly fit into a single coherent ideological category.
Christians are not allowed to lie to escape persecution under traditional interpretations of moral theology.
I didn't claim that false flags and deception were specific to Islam, only that they were tactics that radical Islam has been known to engage in and endorse.
One of the core tenets of Wahhabism (the subset of Islam from which pretty much all modern radical Sunni movements trace their roots) is that anything is acceptable if it is done in the pursuit of God's enemies. Or to put it in more familiar/secular terms: there are no bad tactics, only bad targets. Daesh clerics don't need to make the argument explicitly because the argument is already implicit in the Daesh worldview.
Is your argument really that the media and assorted Twitter personalities couldn't possibly be deceived and would never just lie to our faces? Because if so, I have a bridge in London to sell you.
And again, I disagree, because it seems pretty coherent to me.
As I have argued in prior discussions, there seems to be a distinct subset of the extremely-online "right" that is far more "woke" and identitarian than it is right-wing. One of the unifying themes of this subset is a seething resentment of Christians/Normies/anyone who isn't as black-pilled, miserable, or cynical as they are.
Such a long con for such meagre reward (a mere five dead at the last count) seems a bit unlikely to me.
I agree, I still think that the most likely explanation is that the media/internet is lying and that this guy was just a conventional Jihadi.
It's not actually that easy to slaughter dozens of people, believe me.
Jokes aside, it does seem unlikely to me that this was some kind of long con too, but the number killed isn't necessarily an indication of their intentions, as multiple spree killers have demonstrated; and while a ten-year plan indicates a lot more strategy than, say, William Atchison, we make a plan and God laughs.
IANA Muslim scholar, but isn't taqiyya a mostly Shia doctrine which tends to apply to either A) hiding religious mysteries from the uninitiated or B) escaping imminent persecution for the sake of continuing the faith, and not something which could easily be weaponized by jihadis?
It seems true that Islam allows adherents to fake defection a lot more than Christianity does. But I don't think "pretend to be an apostate to kill non-believers" is something Islam allows, or is believed to allow by actual Muslims.
If we're looking at strictly Quranic examples/interpretations, yes, but as I said, it has also been used by radical Sunnis historically to justify tactics like attacking under a false flag.
Agreed, and perhaps this is why Christianity has consistently outperformed its rivals despite being "weak," "cucked," "a slave morality," etc.
One of the reasons why it's easy for me to believe the perp is what he claims to be is that I actually know a local variant of this type: an immigrant from a Muslim-majority country, born a Muslim but now an atheist ex-Muslim, used to be on the left due to the association of secularism and leftism in the Middle East but left the left in a huff over the lack of enthusiasm for his strident anti-Islam sentiment, has also burned bridges with the other local ex-Muslims, is now associated with the mainstream nationalist party though also critical of them for a variety of reasons including insufficient concentration on Islam, and generally comes across as high-strung and aggressive. (Of course, he's not directly comparable to the terrorist in the sense that he hasn't threatened to kill anyone.) My guess is that there are more than a few ex-Muslims like this around. It's quite ironic that part of the narrative that the Saudi terrorist was a secret Muslim practicing taqiyya comes from the claims of other ex-Muslims. So the other ex-Muslims are reliable, non-taqiyya-ish sources now?
It's been a perennial complaint of cycling activists that even normal, regular car accidents (in local news etc.) get reported as "car rammed into a person" or even as "a pedestrian/cyclist collided fatally with a car". There's something about cars that makes us conceive of them as autonomous objects, kind of like large animals or something.
This is why Szasz is undefeated. What the fuck use is a field of medicine that purports to help nutters when the doctor himself, and the colleagues who interacted with him, can't spot that he's the type of nutter who is going to ram a car into a Christkindlmarkt?
Uh, didn’t he get reported to the German police multiple times and they didn’t do anything?
Imagine the tug of war there deciding whether he should be targeted.
I know for a fact that by now they have a few Turks on the police force. Send one of them to crack a skull, and watch the media have an aneurysm over that tug of war.
Not skulls, hands (or kneecaps if you want to be "white" about it)
Agreed. That was my first thought as well.
Psychotherapy is just… criminal at this point, imo. They have so much power and authority in society yet clearly do not have any idea what they’re doing.
The likeliest scenario imo seems to be a psychotic episode. Presumably, for hallucinations just as for dreams, the brain twists concepts it already has. (Often the specific symptoms of mental illness are what your culture expects them to be -- like if the mind still follows some script.) I think your world view will inform what hallucinations you are 'supposed' to have. A Christian who is convinced that the devil talks to people and entices them to evil deeds might be more likely to hallucinate the devil, while someone who presumably thinks that the great evil in the world is Islam might be more likely to have a vision of Allah ordering them to do some stereotypical terror attack.
My other scenario is slightly on the conspiracy side. Presumably, someone from a Muslim country who is loudly against Islam is an irritation to Jihadists, who might just decide to get hold of some of his loved ones (perhaps in the Arab world) and blackmail him into committing some atrocity. Of course, this has very much not been their playbook so far. Also, they would likely want to claim responsibility for the attack after the fact.
He got reported to the police for making the same threat last year. He was convicted for threats in '13.
https://x.com/2ltifaa/status/1870273133766123643
Perhaps he thought ramming innocents at a Christmas market was the best way to advance his warnings.
“Be the change you want to warn against”
The dankness of the timeline is off the scale.