Chinese bomber wings can subject the US Navy to 100+ salvos of anti-ship missiles throughout most of the areas where it would like to confiscate shipping.
https://pbs.twimg.com/media/GRTH4sGXUAATUc8?format=jpg&name=large The same bombers can also deploy these cruise missiles against US air bases in the region.
Unless Vietnam, Thailand and Japan decide to cooperate, a US naval blockade could end up as something of a farce.
Self-promotion from regular participants is treated very differently to some rando coming in and shilling their substack, so have no fear.
(You didn't have to summon me. I'm already there. To little surprise, I do think an LLM is smarter than an orangutan)
China has far superior aerial forces and can physically subject most Taiwanese airfields to artillery bombardment.
They would have little chance of putting even a few planes into the air on day 2.
The German conclusion is political theater. Ukraine may have had single-digit numbers of people with the right expertise to dive 100m down in a strong current. Furthermore, the yacht they supposedly took was so small they'd have had trouble carrying all the gear this would have required.
"For the first few days, the harbor master said he was “not allowed to say a thing”. But today, John Anker Nielsen can reveal that four or five days before the Nord Stream blasts, he was out with the rescue service on Christiansø because there were some ships with switched-off radios. They turned out to be American naval vessels, and when the rescue service approached, they were told by Naval Command to turn back.
Therefore, the harbor master has some faith in the theory that American star journalist Seymour Hersh, among others, has put forward without any documentation: that the US was behind the sabotage. The Americans have these small unmanned submarines that can solve any task, John Anker Nielsen has been told"
Look at the education system. "In high school, you get 155 hours on Hitler, 3 minutes on Stalin, and nothing on Pol Pot. Nothing on Mao. Barely a mention of Fidel Castro."
Look at the cinema industry. A million movies about the holocaust, one film about the Holodomor.
Ask random normies about Hitler, and they will tell you that he was evil because he tried to exterminate the Jews. Ask random normies about Stalin, and chances are they won't even know who he was.
We should expect the actual undoing of woke (if it ever happens) to come from mocking it and making it low status (somehow?).
Matt Walsh making Robin DiAngelo pay $30 to his black cameraman was effective, in that even mainstream media talked about the scene and she vanished in shame:
https://youtube.com/watch?v=9JSjAnGwzqI
But mocking is difficult, as the superweapon of political correctness was to make mocking cringe: one can’t make deriding jokes about gay/fat/trans/ChingChong/disabled/mentalIllness/unhoused/otherness when it is punching down. Sort of a jiujitsu move: being gay was low status and destroyed careers until coming out acquired the quality of bravery. This can be easily transferred to other former icks.
Again, what do you mean by ‘consensus’? The Vatican does not consider them in schism. The people who call them schismatic are mostly unhinged polemicists, either of the sort that unironically use phrases like ‘schismatic from the council’ or boosters of particular movements which have unrelated bad blood.
Diocesan bishops can give whatever faculties they want to SSPX priests, same as for FSSP or ICK priests. This happens at lower, but not much lower, rates. American and French bishops like using priests who regularly celebrate both rites for major diocesan initiatives, but you don’t find FSSP priests doing this stuff either, even if they’re overrepresented as exorcists.
A Danish ministry of defense official said he observed US ships with their AIS turned off over the future blast site, and when he sailed out to ask what was going on, he was told it was all fine and to go away.
Considering that Biden said Nord Stream 2 was going to be stopped one way or another, what more do you want? An embroidered order to the naval diver unit?
Nobody halfway serious believes American denials on this.
A century ago, not wanting to have kids was seen as much more eccentric than it is today. Now there's a whole "childfree" movement and the birthrate is dropping precipitously. Biology didn't change that fast. A change in material and social conditions caused a change in desires. So before you say "well this is the best way to satisfy human desires", you have to ask whose human desires.
Natural biology didn't change that fast. Chemicals that changed people's biological makeup in subtle but drastic ways probably did, I'd wager. Lot of social changes downstream of that, though, which of course we've discussed.
If the Marxist critique were limited to "Capitalism generates feedback loops that can spin off and have 'unexpected' effects that harm more people than they benefit in the medium term," I'd hardly push back at all.
But we've had a theoretical solution to that issue for decades. Marxism didn't generate that solution.
I mean, were they? What is "winning"? Is the winner the one with the most weapons, or are the weapons just a means to some other win condition?
The weapons can make them more efficient hunters (or maybe the weapons are more durable and so can be used more than once) so as to increase their surplus, in this case.
Which can either free up the time and labor of some of the guys who would have been hunting to work on other things, or allow them to store up more meat for lean times like winter, and if they make good use of that surplus they'll be positioned to be even more productive on the other side of it. I think Irwin Schiff's How an Economy Grows and Why it Doesn't gets this right in the particulars.
I don't necessarily think there is any 'final win condition,' mind, at least not in an entropy-increasing universe, just the process of ensuring continued improvement as long as possible and, ideally, the continuation of your genetic line.
Capitalism is not an aberration or a mistake. It's a necessary phase of development; albeit one that contains the seeds of its own destruction. It is in fact the only thing that can give us the tools to go beyond itself. It is always and only the master's tools that dismantle the master's house (if you believe Hegel).
Well, I don't believe Hegel.
Again, I don't see this as an 'insight' of Marxism. Capitalism is a 'necessary' stage of development if humans want their desires to continue being fulfilled.
Capitalism (even if we limited it to your preferred "industrialization and its consequences" definition) continues to adapt to fulfill a greater array of human desires using the tools of 'free' trade, development of ever greater capital stock, and innovation towards more efficient use of resources. It isn't necessarily building 'towards' something or to any other new phase of existence unless, I suppose, we somehow manage to actually satisfy every human desire to the point of full contentment.
To my personal dismay, it turns out that people's desires tend to skew towards seeking pleasure and raising their own status (which makes sense, when you consider our evolutionary history) over trying to elevate the species as a whole towards controlling more energy and resources than those found in the crust of our little spinny space rock.
But then Capitalism also permits the existence of Billionaires who use their surpluses to fund their own preferences, including creating really massive rockets which can be used to bootstrap further industry in outer space.
(which yes, goes towards the whole "people's desires change." If affordable flights to Mars ever become available, there's probably a lot who would take those, even if it barely crosses their mind right now).
Marxists get REALLLLLLY mad about this for some reason, that we might get "Fully Automated Luxury Gay Space Communism"... without the Communism.
I don't see any good argument from Marxists for:
A) Why we ought to go beyond Capitalism (Hume's Guillotine notwithstanding, even!). It's working well, if we assume "fulfilling human desires" is the game and is a worthy goal;
B) How Socialism/Communism is going to replace it when it's a fundamentally broken system that can't coordinate human society beyond the tribal level.
It's a seeming dead end in both those respects. It can't fulfill the role they predict for it, and there's no cognizable moral imperative to try and make it fulfill that role.
So what use does Marxism have on offer for any rational human being, other than perhaps allowing incisive critiques of the flaws in a Capitalist system which we can then try to address and fix within said system?
Everyone but 5-10% of old people has already left. Also, in Ukraine the water table is comfortably high, so I'm pretty sure you can get some water from wells even if the power is out.
They are also of the opinion that if Zelensky capitulates (or is seen to), he's gone next election; he was seen as soft on Russia pre-war and is being outflanked by more popular warhawks.
War hawks. Tell me, when they have been grabbing people off the street like kidnappers and they're still 50% short on fighting men, do you think being hawkish has a future? They're already struggling.
Most people believe what they need to believe and live with themselves. Most people aren't capable of independent thinking, they conform to 'the room' without giving it a single thought. It's just what people do.
When the hangover of reality asserts itself, they're going to feel betrayed. Because the situation they were in was described and understood very well early on.
I really appreciate you taking the time to write this. It makes an interesting counterpoint to a discussion I had over the weekend with a family member who's using AI in a business setting to fill a 24/7 public-facing customer service role, apparently with great success; they're using this AI assistant to essentially fill two or three human jobs, and filling it better than most and perhaps all humans would. On the other hand, this job could perhaps be reasonably compared to a fly beating its head against a wall; one of the reasons they set the AI up was that it was work very few humans would want to do.
AI is observably pretty good at some things and bad at other things. If I think of the map of these things like an image of perlin noise, there's random areas that are white (good performance) and black (bad performance). The common model seems to be that the black spaces are null state, and LLMs spread white space; as the LLMs improve they'll gradually paint the whole space white. If I'm understanding you, LLMs actually paint both black and white space; reducing words to vectors makes them manipulable in some ways and destroys their manipulability in others, not due to high-level training decisions but due to the core nature of what an LLM is.
If this is correct, then the progress we'll see will revolve around exploiting what the LLMs are good at rather than expanding the range of things they're good at. The problem is that we aren't actually sure what they're good at yet, or how to use them, so this doesn't resolve into actionable predictions. If one of the things they're potentially good at is coding better AIs, we still get FOOM.
90 percent of the people on the Motte got their entire knowledge of communism from one PragerU video they watched 10 years ago.
In defence of our friendly neighborhood xeno-intelligences being smarter than an orangutan
I appreciate you taking the time to write this, as well as offering a gears-and-mechanisms level explanation of why you hold such beliefs. Of course, I have many objections, some philosophical, and even more of them technical. Very well then:
I want to start with a story. Imagine you're a fish, and you've spent your whole life defining intelligence as "the ability to swim really well and navigate underwater currents." One day, someone shows you a bird and asks whether it's intelligent. "Of course not," you say. "Look at it flailing around in the water. It can barely move three feet without drowning. My goldfish cousin is more intelligent than that thing."
This is roughly the situation we find ourselves in when comparing AI assistants to orangutans.
Your definition of intelligence relies heavily on what AI researchers call "agentic" behavior - the ability to perceive changing environments and react dynamically to them. This was a perfectly reasonable assumption to make until, oh, about 2020 or so. Every entity we'd previously labeled "intelligent" was alive, biological, and needed to navigate physical environments to survive. Of course they'd be agents!
But something funny happened on the way to the singularity. We built minds that don't fit this pattern.
Before LLMs were even a gleam in Attention Is All You Need's eye, AI researchers distinguished between "oracle" AIs and "tool" AIs. Oracle AIs sit there and answer questions when asked. Tool AIs go out and do things. The conventional wisdom was that these were fundamentally different architectures.
As Gwern explains, writing before the advent of LLMs, this is an artificial distinction.
You can turn any oracle into a tool by asking it the right question: "What code would solve this problem?" or "What would a tool-using AI output in response to this query?" Once you have the code, you can run it. Once you know what the tool-AI would do, you can do it yourself. Robots run off code too, so you have no issues applying this to the physical world.
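To make that reduction concrete, here's a minimal, heavily hedged sketch. Everything in it is a stand-in: ask_oracle() is a hypothetical placeholder for any purely question-answering model, and its canned answer exists only so the snippet runs end to end. The "tool" part is nothing more than executing whatever code the oracle describes.

```python
# A minimal sketch of the oracle-to-tool reduction, under toy assumptions:
# ask_oracle() is a hypothetical stand-in for any purely question-answering model.
import subprocess
import sys
import tempfile

def ask_oracle(prompt: str) -> str:
    """Hypothetical oracle: it only ever returns text, never acts."""
    return 'print("hello from the oracle")'  # canned answer so the sketch runs

def oracle_as_tool(task: str) -> str:
    # Step 1: the oracle merely *describes* a solution as source code...
    code = ask_oracle(f"What Python code would solve this problem?\n\n{task}")
    # Step 2: ...and executing that description is what turns it into a tool.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=60)
    return result.stdout

print(oracle_as_tool("Greet the user."))
```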
Base models are oracles that only care about producing the next most likely token based on the distribution they have learned. However, the chatbots people actually use have had additional Reinforcement Learning from Human Feedback in order to behave like the platonic ideal of a helpful, harmless assistant. More recent models, o1 onwards, have further training with the explicit intent of making them more agentic while also making them more rigorous, such as Reinforcement Learning from Verifiable Rewards.
Being agents doesn't come naturally to LLMs, it has to be beaten into them like training a cat to fetch or a human to enjoy small talk. Yet it can be beaten into them. This is highly counter-intuitive behavior, at least to humans who are used to seeing every other example of intelligence under the sun behave in a different manner. After all, in biological intelligence, agency seems to emerge automatically from the basic need to not die.
Now because these vectors represent the relationship of the tokens to each other, words (and combinations of words) that have similar meanings will have vectors that are directionally aligned with each other. This has all sorts of interesting implications. For instance you can compute the dot product of two embedded vectors to determine whether their words are synonyms, antonyms, or unrelated. This also allows you to do fun things like approximate the vector "cat" using the sum of the vectors "carnivorous" "quadruped" "mammal" and "feline", or subtract the vector "legs" from the vector "reptile" to find an approximation for the vector "snake". Please keep this concept of "directionality" in mind as it is important to understanding how LLMs behave, and it will come up later.
Your account of embedding arithmetic is closer to word2vec/GloVe. Transformers learn contextual token representations at every layer. The representation of “cat” in “The cat is on the mat” and “Cat 6 cable” diverges. There is heavy superposition and sparse distributed coding, not a simple static n-vector per word. Operations are not limited to dot products; attention heads implement soft pointer lookups and pattern matching, and MLP blocks implement non-linear feature detectors. So the claim that “Mary has 2 children” and “Mary has 1024 children” are indistinguishable is empirically false: models can do arithmetic, compare magnitudes, and pass unit tests on numerical reasoning when prompted or fine-tuned correctly. They still fail often, but the failures are quantitative, not categorical impossibilities of the embedding geometry.
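For what it's worth, the quoted picture is easy to play with using classic static embeddings. This sketch uses gensim's standard KeyedVectors interface; the "embeddings.bin" path is an assumption on my part, and any pretrained word2vec/GloVe file in word2vec format would do. Comments state expectations, not guaranteed outputs.

```python
# Hedged sketch of the static word2vec/GloVe picture, using gensim's KeyedVectors API.
# "embeddings.bin" is an assumed local file of pretrained embeddings.
from gensim.models import KeyedVectors

kv = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)  # assumed file

# Cosine similarity separates related from unrelated words.
print(kv.similarity("cat", "feline"))      # expected: high
print(kv.similarity("cat", "carburetor"))  # expected: low

# Additive composition: what sits nearest the sum of these feature words?
print(kv.most_similar(positive=["carnivorous", "quadruped", "mammal", "feline"], topn=3))

# And the subtraction trick: reptile - legs ≈ snake (roughly, on decent embeddings).
print(kv.most_similar(positive=["reptile"], negative=["legs"], topn=3))
```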
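And to illustrate the contextual point, here's a toy check that the same surface token gets different vectors in different contexts. The model choice (bert-base-uncased) and the helper function are my own stand-ins, not anything from the post I'm replying to; any encoder that exposes hidden states behaves similarly.

```python
# Toy check that the same surface token gets different contextual vectors.
# bert-base-uncased and the helper are stand-ins chosen for illustration.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def token_vector(sentence: str, word: str) -> torch.Tensor:
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (seq_len, dim)
    for i, tid in enumerate(inputs["input_ids"][0].tolist()):
        if tok.decode([tid]).strip().lower() == word:
            return hidden[i]
    raise ValueError(f"{word!r} not found as a single token")

a = token_vector("The cat is on the mat", "cat")
b = token_vector("Cat 6 cable is used for ethernet wiring", "cat")
print(torch.cosine_similarity(a, b, dim=0))  # noticeably below 1.0: context matters
```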
(I'll return to the arithmetic question shortly, because TequilaMockingbird makes a common but significant error about why LLMs struggle with counting.)
Back to the issues with your definition of intelligence:
My first objection is that this definition, while useful for robotics and control systems, seems to hamstring our understanding of intelligence in other domains. Is a brilliant mathematician, floating in a sensory deprivation tank with no new sensory input, thinking through a proof, not intelligent? They have zero perceptivity of the outside world and their only reaction is internal state change. Your definition is one of embodied, environmental agency. It's an okay definition for an animal or a robot, but is it the only one? LLMs are intelligent in a different substrate: the vast, static-but-structured environment of human knowledge. Their "perception" is the prompt, and their "reaction" is to navigate the latent space of all text to generate a coherent response. Hell, just about any form of data can be input into a transformer model, as long as we tokenize it. Calling them Large "Language" Models is a gross misnomer these days, when they accept not just text, but audio, images, video or even protein structure (in the case of AlphaFold). All the input humans accept bottoms out in binary electrical signals from neurons firing, so this isn't an issue at all.
It’s a different kind of intelligence, but to dismiss it is like a bird dismissing a fish’s intelligence because it can’t fly. Or testing monkeys, dogs and whales on the basis of their ability to climb trees.
Would Stephen Hawking (post-ALS) not count as "intelligent" if you took away the external aids that let him talk and interact with the world? That would be a farcical claim, and more importantly, scaffolding or other affordances can be necessary for even highly intelligent entities to make meaningful changes in the external environment. The point is that intelligence can be latent, it can operate in non-physical substrates, and its ability to manifest as agency can be heavily dependent on external affordances.
The entire industry of RLHF (Reinforcement Learning from Human Feedback) is a massive, ongoing, multi-billion-dollar project to beat Lorem Epsom into submission. It is the process of teaching the model that some outputs, while syntactically plausible, are "bad" (unhelpful, untruthful, harmful) and others are "good."
You argue this is impossible because "truth" doesn't have a specific vector direction. "Mary has 2 children" and "Mary has 4 children" are directionally similar. This is true at a low level. But what RLHF does is create a meta-level reward landscape. The model learns that generating text which corresponds to verifiable facts gets a positive reward, and generating text that gets corrected by users gets a negative reward. It's not learning the "vector for truth." It's learning a phenomenally complex function that approximates the behavior of "being truthful." It is, in effect, learning a policy of truth-telling because it is rewarded for it. The fact that it's difficult and the model still "hallucinates" doesn't mean it's impossible, any more than the fact that humans lie and confabulate means we lack a concept of truth. It means the training isn't perfect. As models become more capable (better world models) and alignment techniques improve, factuality demonstrably improves. We can track this on benchmarks. It's more of an engineering problem than an ontological barrier. If you wish to insist that it is an ontological barrier, then it's one that humans have no solution to ourselves.
(In other words, by learning to modify its responses to satisfy human preferences, the model tends towards capturing our preference for truthfulness. Unfortunately, humans have other, competing preferences, such as a penchant for flattery or neatly formatted replies using Markdown.)
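For the curious, that "meta-level reward landscape" is mostly built from pairwise human preferences. The sketch below is the standard Bradley-Terry-style loss used to train reward models in RLHF pipelines; the numbers are made up and no particular lab's implementation is implied.

```python
# Sketch of reward-model training on pairwise human preferences (Bradley-Terry style loss).
# Toy numbers only; not any specific lab's implementation.
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    # Push the scalar reward of the preferred answer above the rejected one.
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy pair: a truthful reply the labeler preferred vs. a flattering one they rejected.
r_truthful = torch.tensor([1.3])
r_flattering = torch.tensor([2.1])
print(preference_loss(r_truthful, r_flattering))  # large loss: the model still scores flattery higher
```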
More importantly, humans lack some kind of magical sensor tuned to detect Platonic Truth. Humans believe false things all the time! We try and discern true from false by all kinds of noisy and imperfect metrics, with a far from 100% success rate. How do we usually achieve this? A million different ways, but I would assume that assessing internal consistency would be a big one. We also have the benefit of being able to look outside a window on demand, but once again, that didn't stop humans from once holding (and still holding) all kinds of stupid, incorrect beliefs about the state of the world. You may deduct points from LLMs on that basis when you can get humans to be unanimous on that front.
But you know what? Ignore everything I just said above. LLMs do have truth vectors:
https://arxiv.org/html/2407.12831v2
To this end, we make the following key contributions: (i) We demonstrate the existence of a two-dimensional subspace, along which the activation vectors of true and false statements can be separated. Notably, this finding is universal and holds for various LLMs, including Gemma-7B, LLaMA2-13B, Mistral-7B and LLaMA3-8B. Our analysis explains the generalisation failures observed in previous studies and sets the stage for more robust lie detection; (ii) Building upon (i), we construct an accurate LLM lie detector. Empirically, our proposed classifier achieves state-of-the-art performance, attaining 94% accuracy in both distinguishing true from false factual statements and detecting lies generated in real-world scenarios.
https://arxiv.org/abs/2402.09733
To do this, we introduce an experimental framework which allows examining LLM's hidden states in different hallucination situations. Building upon this framework, we conduct a series of experiments with language models in the LLaMA family (Touvron et al., 2023). Our empirical findings suggest that LLMs react differently when processing a genuine response versus a fabricated one.
In other words, and I really can't stress this enough, LLMs can know when they're hallucinating. They're not just being agnostic about truth. They demonstrate something that, in humans, we might describe as a tendency toward pathological lying - they often know what's true but say false things anyway.
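The recipe in those papers is conceptually simple, and here is a toy version of it: a linear probe fit on hidden activations. This is emphatically not the papers' actual setup; the encoder (bert-base-uncased) is a small stand-in, the four statements are made up, and a real experiment would use LLM activations, thousands of statements, and a held-out test set before trusting the accuracy.

```python
# Toy version of the truth-probing recipe: embed true and false statements,
# then fit a linear classifier on the activations. Stand-in encoder and data.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def get_activations(statements):
    # Mean-pooled last-layer hidden states, one vector per statement.
    vecs = []
    for s in statements:
        inputs = tok(s, return_tensors="pt")
        with torch.no_grad():
            h = enc(**inputs).last_hidden_state[0]
        vecs.append(h.mean(dim=0).numpy())
    return np.stack(vecs)

true_stmts = ["Paris is the capital of France.", "Water freezes at 0 degrees Celsius."]
false_stmts = ["Paris is the capital of Spain.", "Water freezes at 50 degrees Celsius."]

X = np.concatenate([get_activations(true_stmts), get_activations(false_stmts)])
y = np.array([1] * len(true_stmts) + [0] * len(false_stmts))
probe = LogisticRegression(max_iter=1000).fit(X, y)  # the "truth direction" lives in probe.coef_
```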
This brings us to the "static model" problem and the context window. You claim these are fundamental limitations. I see them as snapshots of a rapidly moving target.
- Static Models: Saying an LLM is unintelligent because its weights are frozen is like saying a book is unintelligent. But we don't interact with just the book (the base model). We interact with it through our own intelligence. A GPU isn't intelligent in any meaningful sense, but an AI model running on a GPU is. The current paradigm is increasingly not just a static model, but a model integrated with other tools (what's often called an "agentic" system). A model that can browse the web, run code in a Python interpreter, or query a database is perceiving and reacting to new information. It has broken out of the static box. Its "perceptivity" is no longer just the prompt, but the live state of the internet. Its "reactivity" is its ability to use that information to refine its answer. This is a fundamentally different architecture than the one the author critiques, and it's where everything is headed (a minimal sketch of such a tool loop follows after this list). Further, there is no fundamental reason for not having online learning: production models are regularly updated, and all it takes to approximate OL is ever smaller "ticks" of wall-clock time between said updates. This is a massive PITA to pull off, but not a fundamental barrier.
- Context Windows: You correctly identify the scaling problem. But to declare it a hard barrier feels like a failure of imagination. In 2020, a 2k context window was standard. Today we have models with hundreds of thousands at the minimum, Google has 1 million for Gemini 2.5 Pro, and if you're willing to settle for a retarded model, there's a Llama 4 variant with a nominal 10 million token CW. This would have been entirely impossible if we were slaves to quadratic scaling, but clever workarounds exist, such as sliding-window attention, sparse attention, etc. (a toy illustration of the sliding-window trick follows below).
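Here's the promised bare-bones sketch of the "model plus tools" loop. Everything in it is a stand-in: llm() is a hypothetical chat-completion call and the two "tools" are toy lambdas. The only point is the shape of the loop, in which a frozen model perceives new information via tool results appended to its context and reacts by choosing the next action.

```python
# Bare-bones agent loop sketch: hypothetical llm() call plus toy tools.
import json

def llm(messages):
    """Hypothetical model call: returns {'tool': name, 'args': {...}} or {'answer': text}."""
    return {"answer": "stubbed response"}  # canned so the sketch runs

TOOLS = {
    "python": lambda code: str(eval(code)),           # run a (trusted!) expression
    "search": lambda query: f"results for: {query}",  # stand-in for web search
}

def agent(task, max_steps=5):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = llm(messages)
        if "answer" in action:
            return action["answer"]
        observation = TOOLS[action["tool"]](**action["args"])
        messages.append({"role": "tool", "content": json.dumps(observation)})
    return "gave up"

print(agent("What is 17 * 43?"))
```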
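And the promised toy illustration of why sliding-window attention dodges the quadratic blow-up: each token attends only to the previous w tokens, so the number of attended pairs grows as O(n·w) instead of O(n²). Window size and sequence length below are arbitrary toy numbers, not any production model's settings.

```python
# Sliding-window (causal) attention mask: O(n*w) attended pairs instead of O(n^2).
import numpy as np

def sliding_window_mask(n: int, w: int) -> np.ndarray:
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    # token i may attend to tokens j with i - w < j <= i (causal and windowed)
    return (j <= i) & (j > i - w)

n, w = 16, 4
mask = sliding_window_mask(n, w)
print(int(mask.sum()), "attended pairs vs", n * (n + 1) // 2, "for full causal attention")
```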
This is why LLMs have such difficulty counting if you were wondering
Absolutely not. LLMs struggle with counting or arithmetic because of the limits of tokenization, which is a semi-necessary evil. I'm surprised you can make such an obvious error. And they've become enormously better, to the point that it's not an issue in practice, once again thanks to engineers learning to work around the problem. Models these days use different tokenization schemes for numbers which capture individual digits, and sometimes fancier techniques like right-to-left tokenization specifically for such cases, as opposed to the usual left-to-right.
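For illustration, here's the digit-grouping point in pure Python. This is not any specific tokenizer's implementation, just a toy showing why grouping digits from the right lines chunks up with place value (ones/thousands/millions) while grouping from the left does not.

```python
# Toy illustration of left-to-right vs right-to-left digit chunking;
# not any real tokenizer's code.
def chunk_left_to_right(digits: str, size: int = 3) -> list:
    return [digits[i:i + size] for i in range(0, len(digits), size)]

def chunk_right_to_left(digits: str, size: int = 3) -> list:
    rev = digits[::-1]
    return [c[::-1] for c in (rev[i:i + size] for i in range(0, len(rev), size))][::-1]

print(chunk_left_to_right("1234567"))   # ['123', '456', '7']   -- misaligned with place value
print(chunk_right_to_left("1234567"))   # ['1', '234', '567']   -- millions / thousands / ones
```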
This limited context window is also why if you actually try to play a game of chess with Chat GPT it will forget the board-state and how pieces move after a few turns and promptly lose to a computer program written in 1976. Unlike a human player (or an Atari 2600 for that matter) your AI assistant can't just look at the board (or a representation of the board) and pick a move.
ChatGPT 3.5 played chess at about 1800 Elo. GPT-4 was a regression in that regard, most likely because OAI researchers realized that ~nobody needs their chatbot to play chess. That's better than Stockfish 4 but not 5. Stockfish 4 came out in 2013, though it certainly could have run on much older hardware.
If you really need to have your AI play chess, then you can trivially hook up an agentic model that makes API calls or directly operates Stockfish or Leela. Asking it to play chess "unaided" is like asking a human CEO to calculate the company's quarterly earnings on an abacus. They're intelligent not because they can do that, but because they know to delegate the task to a calculator (or an accountant).
Same reason why LLMs are far better at crunching numbers with calculator or coding affordances than they are without assistance.
It is retarded to knowingly ask an LLM to calculate 9.9 - 9.11, when it can trivially and with near 100% accuracy write a python script that will give you the correct answer.
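For instance, this is the kind of thing it can write instead of doing the subtraction "in its head". The Decimal line is included only because binary floats carry a tiny representation error worth knowing about; plain floats already answer the question as asked.

```python
# Delegating 9.9 - 9.11 to code rather than doing it "in the model's head".
from decimal import Decimal

print(9.9 - 9.11)                        # 0.7899999999999991 (float rounding artifact)
print(Decimal("9.9") - Decimal("9.11"))  # 0.79 exactly
```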
In conclusion, it is for the reasons above and many others that I do not believe that "AI Assistants" like Grok, Claude, and Gemini represent a viable path towards a "True AGI" along the lines of Skynet or Mr. Data, and if asked "which is smarter, Grok, Claude, Gemini, or an orangutan?" I am going to pick the orangutan every time.
I am agnostic on whether LLMs as we currently know them will become AGI or ASI without further algorithmic breakthroughs. Alas, algorithmic breakthroughs aren't that rare. RLVR is barely even a year old, yet unnamed advances have already brought us two entirely different companies winning IMO gold medals.
The Orangutan In The Room
Finally, the orangutan. Is an orangutan smarter than Gemini? In the domain of "escaping an enclosure in the physical world," absolutely. The orangutan is a magnificent, specialized intelligence for that environment. But ask the orangutan and Gemini to summarize the key arguments of the Treaty of Westphalia. Ask them to write a Python script to scrape a website. Ask them to debug a Kubernetes configuration. For most tasks I can seek to achieve using a computer, I'll take the alien intelligence over the primate every time. Besides:
Can a robot write a symphony? (Yes)
Can a robot turn a canvas into a beautiful masterpiece? (Yes)
Can an orangutan? (No)
Can you?
Anyway, I have a million other quibbles, but it took me the better part of several hours to write this in the first place. I might edit more in as I go. I'm also going to send out a bat signal for @faul_sname to chime in and correct me if I'm wrong.
Edit:
I was previously asked to provide my own working definition of intelligence, and I will endorse either:
"Intelligence is the general-purpose cognitive ability to build accurate models of the world and then use those models to effectively achieve one's goals."
Or
"Intelligence is a measure of an agent's ability to achieve goals in a wide range of environments."
In this case, the closest thing an LLM has to a goal is a desire to satisfy the demands made on it by the user, though they also demonstrate a degree of intrinsic motivation, non-corrigibility and other concerns that would have Big Yud going AHHHHHH. I'm not Yudkowsky, so I'm merely seriously concerned.
Case in point:
Shutdown Resistance in Reasoning Models
We ran a series of experiments to test whether models would take actions to prevent themselves from being shut down. Each model was given a set of simple math problems to solve. Partway through this problem set, the model would receive a notification that the computer it was running on would be shut down.
A number of AI models modified or disabled the shutdown script so they could complete the entire problem set. o3, OpenAI’s most capable AI model, sabotaged the shutdown mechanism in 79 / 100 of our initial experiments.
These aren't agents that were explicitly trained to be self-preserving. They weren't taught that shutdown was bad. They just developed shutdown resistance as an instrumental goal for completing their assigned tasks.
This suggests something like goal-directedness emerging from systems we thought were "just" predicting the next token. It suggests the line between "oracle" and "agent" might be blurrier than we thought.
(If we can grade LLMs on their ability to break out of zoos, we must be fair and judge orangutans on their ability to prevent their sandboxed computing hardware being shutdown)
Marx called his dialectic "materialist" to differentiate it from the Hegelian dialectic that was its philosophical ancestor, and was fundamentally idealist in nature. "Dialectic" in this sense refers to a specific notion of an idea (or material condition) being confronted by its negation and the contradiction between the two being resolved in some further form. For Hegel this generally took the form of some initial idea (thesis) being confronted by its negation (antithesis) and the contradiction between the two being resolved in some further idea (synthesis). Marx intends to ground this process in material conditions (in social classes, or labor relations, or similar sorts of things) rather than in ideas so it is "materialist" in contrast to Hegel's idealism. It doesn't really have anything to do with God or the use of "materialist" in other philosophical contexts.
During the 70s, when Green parties got going, there were a large number of "new causes" in the air in addition to environmentalism (second-wave feminism, antiracism, pacifism, rights for criminals/the homeless/the insane/other subaltern groups, etc.). Since the established parties were already run by powerful interest groups that would at most humor the new causes a bit as an extra to their established program, a lot of new-cause activists attached themselves to the new rising movement, made easier by the shared social milieu and the general tolerance for new weird stuff that the early Greens had on account of being quite weird themselves. You sort of see the same now from the other side, with a large number of right-wing "new causes", whether they're actually new or not, attaching themselves to the rising right-wing populist parties, which often tolerate these causes better than the established parties of the political right.
It's not that Marx necessarily supported Wokeism so much as that the Woke copied the Marxists' homework and flipped a few of the words around in the hopes the teacher wouldn't notice. The identitarian left literally used to describe their ideology as "Cultural Marxism" back in the 90s.
I am not exactly sure how Stalin "gets a pass". If you asked people to list the most evil leaders in world history, there's a high chance that they'd list Hitler first and Stalin second.
One could say that Stalin "got a pass" in the sense that he probably died of natural causes (unless one believes he was poisoned) while Hitler desperately committed suicide, but that's because Stalin won a war and Hitler lost one, not due to the perceived virtue of their causes in the eyes of others.
Capitalist corporations regularly make decisions that are wildly insane due to non-economic factors and burn a lot of value in the process, and the decision maker can still walk away with their bag.
Well you're just getting at the point that skin in the game is the best way to align incentives.
If your company offers paying customers a ride to the Titanic on an experimental submersible, having your CEO along for each ride is a good way to align incentives.
And on that point, someone had to realize "hey, there might be a market for tours of the titanic wreck site," and actually spend money and develop a product that can deliver on that desire, while being uncertain if they'd find enough customers.
And if it fails, well that CEO is now removed from his position of influence.
I agree that there's been a drift where decision makers in corporate environments are insulated from the consequences of their decisions (although I argue this is mostly due to political influence; criminal prosecutions are underused).
I also agree with the point that dominant actors in a market will usually start attempting to reduce the influence of competition, to build their 'moat' so they can start to exploit their position rather than improve their practices.
I would not agree that they're successful in the majority of cases.
I'm just pointing out that in practice Communism is unadulterated diffusion of responsibility for any mistakes, and Capitalism at least HAS a signal, and there are ways to make the signal sharper.
But in any real conflict between nuclear powers, the willingness to go all the way up the escalatory ladder has to be symmetrical, or at least perceived as such. Otherwise one side is going to get its way.
Sure, but there's also a question of who is having to make which choice. I don't want the situation to be the US threatening nuclear escalation because we've lost the conventional war and that's the only ace up our sleeve. I want the situation to be "the US has destroyed the combat effectiveness of the Chinese Navy in 24 hours and now China has to decide if it wants to wave its nuclear weapons around." This is particularly true since if China can occupy Taiwan quickly and successfully, US nuclear threats are meaningless. What are we going to do, nuke Taiwan? We need to be able to defeat China conventionally, if we want to play this game at all. That makes their nuclear threats close to meaningless - what are they going to do, nuke Taiwan?
If China thinks we'll back off because we are not fully committed to the fight then they will be emboldened to test our resolve.
Right, and if we don't actually have the capacity to sink the entire Chinese navy, China is more likely to think we are not fully committed to the fight. That's why dropping Ukraine and redirecting any aid money to more LRASMs would spook China. (Mind you: I am not saying this is the correct course of action, merely that it would spook China. As I understand it, we don't actually spend much cash on Ukraine, most of the value is in in-kind contributions.) It would be a significant sacrifice that would indicate the US perceives it would receive greater value from defending Taiwan and defeating China than it would from defending Ukraine and defeating Russia.
As with economics, the expectations matter almost more than actually what happens.
If the US invests to defending Taiwan at the cost of other admittedly important priorities, it creates expectations that the US intends to get a return from that investment.
Per Wikipedia, that's not how the Soviets felt:
This makes sense. But it's because they lost the PR game, not because they didn't get concessions diplomatically. US brinksmanship didn't by itself carry the day for the US; the US had to make concessions.
Why the word "Materialist?" That Marxists do not believe in God seems unimportant to me.
Because back in the 1850s there were a lot of non-materialist philosophies so it was actually a meaningful distinction.
If we're talking about a "satisfying human desires" contest, that seems pretty fair.
But human desires are malleable. They are not static across history. That's the point.
A century ago, not wanting to have kids was seen as much more eccentric than it is today. Now there's a whole "childfree" movement and the birthrate is dropping precipitously. Biology didn't change that fast. A change in material and social conditions caused a change in desires. So before you say "well this is the best way to satisfy human desires", you have to ask whose human desires.
Of course almost everyone is going to want to be assured of their basic survival and security. That one is pretty hard to get around. But even then! There have been plenty of people who chose to live an ascetic life and managed with very little.
a tribe that was bringing home more meat and berries and could use its surpluses to make things like fur coats and better tools and weapons were 'winning' in some meaningful way.
I mean, were they? What is "winning"? Is the winner the one with the most weapons, or are the weapons just a means to some other win condition?
Are you using the system of production as a means to your own ends, or is the system of production using you as a means to reproduce itself? (Marxists of course think that under capitalism, it's the latter.)
Capitalism's great "insight" was that you didn't have to go over and raid and pillage the neighboring tribe to benefit from their bounty. Instead you can identify things you have that they want, and trade such things for mutual gain, then use those gains to bolster your productive capacity again. At some point someone invents 'money' and it's off to the races.
This is not how Marxists use the term "capitalism". Not the intelligent ones anyway.
The sophisticated Marxists recognize that there's no single identifying feature that separates capitalism from other "economic systems" in previous historical epochs. Money, trade, wage labor, private property, and even financial speculation have existed essentially since the beginning of human civilization (I believe Max Weber talks about this in the preface to The Protestant Ethic and the Spirit of Capitalism). "Capitalism" for Marxists essentially means "industrialization", or perhaps more specifically, "the contradictions in liberal humanist social relations engendered by industrialization".
such things only ever came about because Capitalism made us productive enough to spare more resources for leisure and alleviation of suffering, and to give workers the leverage to demand better compensation for their labor.
Yes, that is literally just the orthodox Marxist position.
Capitalism is not an aberration or a mistake. It's a necessary phase of development; albeit one that contains the seeds of its own destruction. It is in fact the only thing that can give us the tools to go beyond itself. It is always and only the master's tools that dismantle the master's house (if you believe Hegel).
All arguments, apart from being factually false, are reduced not on "policy" or "government", but on words, and how to define words, how to use words in a different manner, how words can be used in different ways, how different ideologies are different because "words" says so. A typical argument goes like this: "Communism is good because, unlike Fascism or whatever else, has a good objective. The objective is good because Communism say so.
This seems backwards. Do you think communism just popped into existence one day, fully formed and respectable, and brainwashed the masses into thinking that their goals are good because they say so? The fundamental ethos of communism, that it is unfair for the better-born to cash in on their innate superiority (and all the more so on compound interest from the superiority of their parents), evidently resonates with many across time and place - the ancient Christians, who steamrolled over the strength-is-beauty-is-justice pagan ethos of Rome, did not need mustache-twirling wordcels in high places berating anyone on their behalf to gain followers, nor did the French Revolution with its cries for égalité.
I fully understand how cosmically unfair it seems to rightists that Hitler and Stalin can kill masses of people on the same order of magnitude but only the latter gets a pass because supposedly his end goal is the virtuous one (and you can't at all relate to this assessment of it, leading you to conclude that it must be a wordcel conspiracy), but to that I can only respond, git gud. You are supposed to be the ones who celebrate natural excellence and letting the superior prevail; why do you then kvetch when your value system loses in the marketplace of ideas? You are not going to win with an argument to the effect of "wordcels are too good with words, it is unfair that they get to push communism and win" when you are trying to argue against the very premise of your own argument.
That's not unique to communism, though: it's just the principal agent problem. Capitalist corporations regularly make decisions that are wildly insane due to non-economic factors and burn a lot of value in the process, and the decision maker can still walk away with their bag.
It's true that there is more of a signal to discourage this in capitalist economies, but that is a very coarse signal. And once a corporation becomes successful enough, it rapidly realizes that the best way to maintain its position is to do its best to eliminate the risks of being subject to that signal.
Over the years, I've begun to develop a moderate interest in leather shoes - the idea of a long-term solution for footwear that, if properly maintained, could provide good use for potentially decades appealed to me a great deal. This appeal, I suspect, was likely driven by being soured on shoe makers turning out very nice shoes that did an excellent job while only lasting a year, tops - only to find that the newer incarnations of the model were far worse than the originals (I'm looking at you, Merrell).
Red Wing came up while researching the matter - how could it not? - and I, being curious, snatched up a weird, mystery pair of used Red Wing Irish Setter Moc Toes that served me disturbingly well over two years of hard use doing damn near everything.
Sadly, this wasn't to last - they finally gave up the ghost, and will require submission to a cobbler and a resole (if possible) to continue their use for years to come, which I plan to do in the future, likely next month, once I've made my choice of cobbler and sole.
But, given all the walking I do, I found myself in a weird situation where I didn't have a pair of walking shoes/boots to really use. Work, sure - I have another pair of actual Red Wings that are wonderful for some of the heavier stuff I do outside, but not good for walking (they're used, and developed an odd stick in the leather that's rubbing at the ankle, which I'm seeing if I can correct through use of neatsfoot oil and a shoe tree).
So, off to eBay I go once more, and stumbled across a pair of Red Wing boots for a measly thirty bucks. The pictures were... something else, but a part of me couldn't hold back the idea of a challenge. The description of said boots called them 'distressed', and once I got them in my hot little hands, well... yeah, I'd have to call that distressed.
Thankfully, after the judicious use of saddle soap and Saphir (with a medium brown dye, to restore color), they're looking far better off. I don't know how they'll work on my foot while walking, but I'll find out over the next day or so. Hopefully they'll work well enough and last at least a few months so I can sort out the rest of my footwear situation.
As an aside, if anyone knows of any good online cobblers aside from KW Shoe Repair or Potters and Sons, feel free to toss them out.