P-Necromancer
'Socialism' is a word with multiple contradictory meanings -- in that sense it's even worse than 'fascism,' which people generally agree means one thing, even if they can't agree on what counts. Marx used 'socialism' and 'communism' interchangeably to refer to the stateless, classless society that would emerge after the old order was torn down completely. Needless to say, this socialism isn't extant anywhere and never has been. Lenin used the term 'socialism' freely to describe his own form of ultra-authoritarian Vanguardism, a form today embodied mainly by North Korea, which does describe itself as 'socialist.' And, yes, in much of Europe the word 'socialism' is used today to describe center-left welfare capitalism.
But it didn't always mean this. There was a time when socialist parties did actually intend to implement real socialism; the term just got watered down to virtually nothing through many cycles of moderation and compromise (and attempts to distance themselves from the USSR). Socialism as per Marx is impossible and socialism as per Lenin is transparently awful, so if you want to win elections rather than achieve your ends through force, you'll quickly find that some ideas play better than others. Repeat for many election cycles and all you've got left is the name.
('Communism' isn't really any better: China is the largest and most influential self-described communist nation today, and they practice state capitalism. And, actually, they also describe themselves as 'socialist.')
I'm not sure this reasoning works; tariffs were actually in place for more than a year. Are you, hypothetical Republican senator who doesn't want tariffs but also doesn't want to offend Trump or his loyalists, satisfied with how things played out? You get to preserve the appearance of loyalty... but you also get tariffs. How would you have been worse off holding your nose and voting for tariffs, if you get them either way? Or, if tariffs are truly unacceptable to you, wouldn't you want to be able to vote them down?
I can maybe see a way to thread this needle -- the tariffs were eventually struck down, after all... only to be immediately replaced with 'new' tariffs under a different legal theory. Theoretically this one has a time limit, but who at this point believes such technicalities will stop Trump from doing whatever he wants? He'll just come up with some other excuse.
And, actually, this way you get the worst of both worlds; if the tariffs stood, you'd at least get the revenue. This way you get all the economic damage -- the true harm done by tariffs is in the transactions that don't take place, not in the ones that do -- and a $200 billion hole in the budget. And it's not even as though consumers will be made whole. Refunds will go to the people who filed the paperwork, because that's easy, and not to the people who actually bore the incidence of the tax, which has essentially no relation to those accounting details.
I think the actual reason the Senate doesn't want to govern is simpler and more cynical: if voters can't recall a single thing their senator did, they re-elect them. With a baseline 90%+ incumbent re-election rate, there's just nothing to be gained by rocking the boat. (That is, if maintaining their position is the only thing they care about, which is a model I've been given little reason to doubt.)
But the Fremen Mirage claims that it is civilizational softness that is the primary factor, producing military softness as a consequence
There's something salvageable here, I think.
Simo Häyhä -- the White Death, the deadliest sniper ever -- was an avid hunter all his life. It wasn't his profession, per se; he was a farmer's son. But surely it was in part the desire for meat to supplement a farmer's diet that prompted his hunting and the hunting culture of rural Finland more generally. From there, it turned into a hobby; during his compulsory term in the militia -- which, importantly, does not train snipers -- he won a number of marksmanship competitions. I think it's fair to say that by the time he eventually did receive formal training as a sniper a decade later, he was already an exceptionally skilled shooter. And while he was the best, he was far from the only one: Finnish snipers were unusually effective all through the war.
But would that have been the case had Finland not had a hunting culture? It still does to this day, but one could imagine a number of ways it might not: concerns over gun violence, ecological worries, meat getting cheap while other hobbies got more tempting.
Perhaps a more pointed example: Britain's success in the Napoleonic Wars largely came down to its navy, and its navy's success came down to the competence of its sailors. Officers' commissions were bought and sold in the British army of the period, but not so in the Navy: in that branch, they wanted competent officers. To have any chance of achieving the rank of captain, you were expected to start at the age of 12, and there were multiple stages of reportedly quite difficult exams (in that many failed, including boys from very privileged backgrounds) to rise in the ranks. It was by all accounts a fairly miserable and dangerous experience, and one that lasted not the thirteen weeks of bootcamp but thirteen years before one might be offered command over a small sloop. Why did English gentlemen -- and it was generally gentlemen -- subject their sons to that?
And, naturally, it was worse for the seamen: very difficult, high-skill labor for terrible wages (as often as not months late) under brutal discipline and with all the dangers of combat. How did they train these men? This was not an easy job. Well, they largely didn't; they just conscripted civilian sailors. The 'merchant navy' was broadly acknowledged as the source of Britain's naval dominance, the core of their national security strategy despite being a civilian institution.
There are other examples I could mention -- English longbowmen, or the horse archers that have already come up a couple times -- but I think the point is clear: not every skill of military relevance can be learned in a few months at the outbreak of war. A society that encourages the development of those skills in civilians has a real advantage in acquiring competent soldiers. Not an unbeatable edge, but there's no such thing; it's substantial enough to consider, at least.
Of course, these skills include things like literacy and math, which aren't central examples of martial virtue. These days, they might well include video game skills as preparation for drone piloting. But others are exactly that: declines in gun culture, fitness, self-reliance, patience, wilderness survival, and the capacity to persevere in the face of adversity despite bad food and little sleep are just the sort of thing grouchy old veterans are talking about when they say society has gotten soft.
To stake out the boundaries of this motte: soldiers often have to perform difficult tasks in harsh conditions, some with very high skill ceilings, and a nation with a well of civilians who've spent years and years performing similar tasks in similarly harsh conditions has a leg up on nations without one. This is much narrower than (some) claims about the corrosive effects of ill-defined decadence (I'm not even going to try to steelman the focus on sexual morality; I wouldn't know where to start), but I think the core concept is preserved. It's not 'any and all privation is good, because it makes people tougher,' but it's also not as trivial as 'fighting makes people better fighters.' There's a region where, demonstrably, improving (some) people's circumstances would make them worse soldiers.
I actually see a fair bit of Chinese in longer conversations - not enough to make it unreadable, but enough for me to notice.
Huh, are you giving it any Chinese characters in the prompt? Which model(s)? I think I've seen this from a commercial model exactly once (Gemini 2 Pro), when I was asking some pretty in-the-weeds questions about Shinto and Japanese Buddhism and it gave me quotes in Japanese without translating them, and even there, its own words were in English. The Deepseek R1 paper mentions language confusion in reasoning blocks was a problem before post-training, but I never encountered it with the final model. I have seen it from some small open weights models, but they're kind of dumb all around.
Take a look at the attached image. That's about a week old. Once you've looked at it, go look up that ticker. (Thanks to @ToaKraka for pointing out the image feature, BTW). That one was a pretty big shock to me from Gemini 3 fast. It doesn't do it every time, but it's done it more than once for that exact ticker.
Yeah, that doesn't shock me. Not quite the case I meant. The reason code specifically is special is that they can use this process:
- Get a bunch of function docstrings and testing code for those functions. This sounds like a lot of work, but if you're Google, I imagine you already have a lot of well-documented, well-tested code. (If you're not Google, you can try scraping Github, though pruning low quality data would be a pain.) Not a lot of it is self-contained, but you can just include documentation or source for everything called by your existing implementation in the context.
- Give the model the docstring for the target function and the other documentation/source but not the original function or its testing code, then have it try to write the target function from scratch some huge number of times
- For each attempt, if the code it provides compiles, meets your style guidelines, and passes all tests, mark it as 'good,' and otherwise as 'bad.'
- Give it the same input, but ask it to write the tests. If the tests it gives you compile and meet the style guidelines, confirm that exactly the same implementations pass all the tests as for the known-good set of tests. If so, mark this generation as 'good' and otherwise as 'bad.'
- Now that you have a large set of good and bad responses for both code and tests for that code, you can use that for DPO (or GRPO or whatever), which trains the model to be more likely to produce good responses and less likely to produce bad ones.
Which works very well. The reason normal prose hasn't seen nearly as much improvement is that judging prose takes skilled human labor to do well, and these huge models are so data-hungry it's just not feasible to get enough of it. (I also suspect a lot of these companies like their models bland and obsequious -- customer support scripts have the same qualities, and those at least were written by real people.) So you only really see these big gains for code and math (for which a similar process can be developed).
This specific example is kind of borderline. It's a dynamic table, right? Something the model made up to answer your prompt? While it got things objectively wrong in a manner that's in principle possible to automatically check, setting up automatic checking for any claim of fact is not as easy as running pylint, which really will catch any syntax error. I imagine they do try to DPO for cases like this, but it's a lot harder.
Models are prone to just making stupid errors occasionally on even the most basic tasks, and I don't know if we're going to be able to find a real solution to that. Something that does help (and is often used on benchmarks) is taking the consensus result of several runs, but that massively inflates inference costs for a relatively small reduction in error rate. It does seem to be a hard problem, in that it's only gotten a bit better over the past year or so. (There was more improvement in 2024, which I take as a bad sign; they've already tried the easy stuff.)
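The consensus trick is just majority voting over independent samples; `ask_model` here is a hypothetical stand-in for one stochastic model run:

```python
from collections import Counter

def consensus_answer(ask_model, prompt: str, n: int = 5):
    """Sample the model n times and return the most common answer.

    ask_model is a hypothetical callable wrapping a single (stochastic)
    model run. Cost scales linearly with n, which is why this shows up
    on benchmark leaderboards more often than in production.
    """
    answers = [ask_model(prompt) for _ in range(n)]
    answer, count = Counter(answers).most_common(1)[0]
    return answer, count / n  # the answer plus its vote share
```

If the model is right 80% of the time per run and its errors are scattered, the majority of 5 runs is right far more often than 80% -- but you've also paid for 5x the inference.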
Imagine the full range of legal opinions that exist on the internet, intelligent, retarded, and everything in between. Now imagine what the average of that mass of opinions would look like. That's effectively what you're getting when you ask an LLM for legal advice.
This just isn't a good model of how LLMs work. If it were doing some naive averaging of all the text it was trained on for a subject, shouldn't it randomly insert words in Spanish or Chinese? But it doesn't. If you ask an LLM whether it's a man or a woman (one without "as an AI language model" post-training), it doesn't present itself as the hermaphroditic average of the people described in its training set; it chooses one and at least tries to stick to its answer. Now, either way it's incorrect, obviously, but it's clearly not an average -- a mode, perhaps. But it doesn't naively take the mode either: if you ask it whether Harry Potter is a real person it will correctly tell you he's fictional, despite the overwhelming majority of the text concerning Harry Potter -- how many billions of words of Harry Potter fanfiction are there? -- treating him as real.
A lot of people argue that LLMs are incapable of understanding context or judging the quality of sources, but that's just... obviously untrue? Ask Gemini whether magic is real, and it'll tell you about sleight of hand and historical beliefs about witchcraft, but conclude the answer is very likely 'no.' Ask it what the spell Create or Destroy Water does and it'll quote the 5th edition rulebook. It understands what was meant by each question perfectly. And it does understand: respond to the second with 'But magic isn't real, right?' and it'll explain the implied category error as well as you could wish.
It's not that it doesn't learn the incorrect ideas in its training set -- tell it to emulate a Yahoo Answers poster and it can do so -- it just also learns contextual information about those ideas (such as that they're false) much as we do. Tell it you want a good answer (which is largely what post-training does) and it'll know to discount those sources. It doesn't do so perfectly, but the notion they lack the capacity altogether is not credible.
Regarding @dr_analog's point:
You can paste in a screenshot of a math problem that 99%+ of adults would fail, calculus, linear algebra, probability, geometry and it will solve it step by step, showing its work.
This is true so far as I know; did you actually try it? LLMs are bad at tasks requiring strict precision, accuracy and rigor that can't be objectively and automatically judged. There's a huge disconnect between performance on math/coding, where it's trivial to generate good/bad responses for DPO etc. post-training, and subjects like law, where it isn't. @dr_analog is right: LLMs are currently much better at exactly math/coding than they are at essay writing, purely due to the ease of generating high-quality synthetic data.
I don't believe we have unlimited energy resources like you seem to, but this is an argument for another time.
Well, this is core to the disagreement. The chemical details of how precisely e.g. food is produced and waste is handled are important, but energy is the ultimate constraint on growth. Plentiful energy enables all sorts of tricks like desalination for fresh water or electrolysis to produce the hydrogen required for the Haber–Bosch process. But if you run out of energy, no trick is going to save you.
To clarify: the energy resources I'm describing are finite; they're just very, very large. The 100X number comes from employing mature technology (breeder reactors, developed in the US in the 50s and currently in commercial use in China (CFR-600) and Russia (BN-800)) on proven deposits -- there's not really much room to doubt the potential there. Hell, the US could get centuries of current usage just burning its accumulated 'waste' -- our existing light water reactors only extract a tiny fraction of the nuclear potential energy in the uranium, while a breeder can get much more out of their 'spent' fuel.
It's comparatively expensive energy, but it's a guaranteed backstop if cheaper sources dry up for whatever reason. There's a working process for seawater extraction, too; it's just not economical when it's still so easy to dig uranium out of the ground. With essentially no required additional R&D, that's already enough to get you to the point where running out of energy is just not going to be a concern in the near future, though the price of energy could be.
Speculative technologies ranging from molten salt thorium reactors (probably not that hard, but no one bothers because the uranium process is easier and cheaper) to D-T fusion (there are moderately promising prototypes, but it's a very hard problem) to D-D fusion (science fiction at present) would increase those reserves massively, but they're not necessary to e.g. completely replace ground water usage with desalination.
I'm not suggesting we do that -- it's almost certainly way harder than just exercising reasonable ground water stewardship -- but the option exists if we screw everything else up and billions are going to die.
You're missing one. Pollution! The most obvious aspect of this is climate change, where we are wrecking the climatic conditions that allow stable agriculture, but there are many other aspects of pollution including microplastics which I mentioned, and heavy metals that will heavily impact our fertility rate.
I'll admit upfront I'm not too knowledgeable about microplastics, but on other sorts of pollution: we're not yeast. Yeast arguably has a pollution-related carrying capacity, in that in a sealed container it will eventually poison itself with the alcohol that is the byproduct of its anaerobic respiration. Fortunately, our pollutants are technological, not physiological.
Only CO2 has proven both 1. genuinely dangerous and 2. truly hard to mitigate. Heavy metals are dangerous, for sure... but exposure to lead peaked decades ago. Mercury in practice is only an issue if you eat a lot of certain species of fish -- we could stop. It turned out to just not be that hard to limit human exposure. Ozone layer depletion was a real problem... with a very easy solution of banning a couple aerosols; I understand that modern refrigerants are as good as freon ever was. A century ago people made a lot of noise about smog, the price of industrialization; much less so these days. Even China's about past that stage now, if you think the first world got out of it just by offshoring manufacturing.
But that does leave climate change as a more stubborn problem. Not because solving it is impossible -- we've always had fission to fall back on, as soon as we decide it's actually important -- but because it's expensive to fix and presents a difficult international coordination problem... and because most people agree it's not that important. Ecological collapse rendering stable agriculture impossible is wildly out of line with even the most dire warnings offered by the IPCC out to the year 2100. The full-chud 'it's all fake' prediction is much closer to the scientific consensus than that scenario.
We are doing a lot to fight climate change -- CO2 emissions per capita peaked around 2000 in the developed world -- and there's a lot more we could and probably should do. (I'm a fan of nuclear energy, you might have gathered.) But that's because the problems it will create are cheaper to mitigate now than they will be in the future, not because it poses a genuine existential threat. And if it turns out that, in spite of all predictions, it really is that bad, there's always stratospheric aerosol injection. I just don't see this being a serious impediment to population growth (once the demographic factors sort themselves out).
In terms of space, we already use the vast majority of arable land on this planet.
True, but we 1. don't use it very efficiently and 2. could supplement with hydroponics (at much greater expense) if necessary. There are also largely untapped options like aquaculture and mesopelagic fishing, though I can't say I'm terribly excited for either.
I don't think this is true for a number of reasons. Firstly, declines in fertility are somewhat due to endocrine disruptors from microplastic pollution we've caused. That isn't going away for anyone any time soon. Secondly, there seems to be a deeper link between modernity and fertility than most want to admit.
The only evolutionary pressure on humanity at the moment is to have more kids. We evolved our whole endocrine system; merely adapting to the presence of microplastics in the environment is utterly trivial in comparison. Similarly, our sexual instincts evolved; obviously the small tweaks necessary to encourage reproduction in spite of modernity can evolve. And it's not as though those adaptations aren't already latent in the population: there are plenty of high-fertility families. Population will drop until those alleles predominate, of course, but that's just the nature of the evolutionary process. That can (and likely will) cause a lot of short-term pain, but it certainly doesn't represent an extinction risk, and only extinction could prevent the population from eventually rebounding.
Finally, as many on this forum are loath to admit, we have actually outrun the carrying capacity of this planet. There won't be another fertility explosion in this culture because the planet literally will not support it for much longer.
How can you tell? Exceeding carrying capacity generally manifests as mass death, not reduced fertility. What resource is the planet no longer able to supply?
Not space: there remain enormous tracts of undeveloped land, and far more underdeveloped land; people can live comfortably -- by revealed preference, prefer to live -- in cities with orders of magnitude higher population density than the world as a whole.
Not energy: known uranium reserves alone contain 100X the energy of all the fossil fuels humanity has ever burned, and that's the most conservative possible estimate. Extracting uranium from seawater, for example, is another factor of 100X, and D-D fusion would outlast the sun at current consumption rates. And desalination makes water a question of energy. (Sea water actually contains enough dissolved uranium to power its own desalination ten times over.)
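The seawater parenthetical is easy to sanity-check with round numbers. All three figures below are my own ballpark assumptions (roughly 3.3 ppb uranium in seawater, full burn-up in a breeder, reverse-osmosis desalination), not from the thread:

```python
# Back-of-envelope: can the uranium dissolved in a cubic meter of
# seawater pay for desalinating that same cubic meter?
URANIUM_PER_M3 = 3.3e-6           # kg uranium per m^3 of seawater (~3.3 ppb)
ENERGY_PER_KG = 8.0e13            # J/kg, full fission burn-up in a breeder
DESAL_ENERGY_PER_M3 = 4 * 3.6e6   # J/m^3, ~4 kWh/m^3 for reverse osmosis

energy_available = URANIUM_PER_M3 * ENERGY_PER_KG  # ~2.6e8 J per m^3
ratio = energy_available / DESAL_ENERGY_PER_M3
print(round(ratio))  # -> 18: comfortably over 10x with these assumptions
```

Extraction and conversion losses would eat into that, but there's an order of magnitude of headroom.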
Not food: never in history has acquiring food taken a smaller fraction of human labor or a smaller amount of arable land per capita, and we're not particularly optimized for the latter -- substituting grains for meat would boost calories-per-acre by a factor of 10-30. And most 'sustainability' issues (nitrogen fertilizer production, water use) are trivially solved with sufficient energy too, and the rest with hydroponics and recycling.
I can't see any factor that dictates global carrying capacity is 8 billion -- I can hardly see any that suggests it's 80 billion.
Sure, this is a sane position, whether or not it's true. But someone who believed this wouldn't (truthfully) say
there cannot be legitimate reasons why a meritocratic test would show a racial or gender skew, therefore showing that the outcome of a process is racially or gender-skewed proves that it wasn't actually fair and meritocratic
Where, in context, the 'process' is a means of assessing applicants for a job, e.g. blind auditions to an orchestra. This argument admits that until the interest gap is closed, there will in fact be a skew in qualified applicants. It argues that you should hire the less meritorious applicants from certain groups anyway, but it doesn't claim that a test saying members of those groups are less meritorious is proof per se the test is biased. Which I agree with @WandererintheWilderness is something people sometimes say.
start from the unassailable premise that there cannot be legitimate reasons why a meritocratic test would show a racial or gender skew, therefore showing that the outcome of a process is racially or gender-skewed proves that it wasn't actually fair and meritocratic.
I've heard people say this and I'm hesitant to claim they don't really believe it... but do they really believe it?
The same people absolutely believe discrimination occurs in culture/education/training/etc. If I cited an article showing that e.g. childhood participation in private music lessons for orchestral instruments had a race skew (which it does), would they be surprised? I really don't think so. Do they believe that lessons and practice just... don't matter? That's the only way I can think of to justify the notion that a fair meritocratic test of orchestra applicants wouldn't show a similar skew even assuming uniform innate capacity and interest.
How about in a different context? It's no secret that, say, chess grandmasters are not uniformly distributed across race and gender. Would this hypothetical DEI proponent truly claim that chess is not a fair and meritocratic measure of chess ability? It's also no secret that just about everyone who gets to that level has played a lot of chess, almost always from a young age, and that that's not uniformly distributed across race and gender. Would they claim that that experience doesn't actually make someone better at chess?
The (slightly different, much rarer) explanation I sometimes encounter is that, while applicants from disadvantaged groups are in fact less capable at the time of application, they'll quickly 'catch up' once placed in a position congruent with their innate ability. I find that claim dubious -- if nothing else, it suggests that a 30-year-old who's played chess four hours a day since the age of five shouldn't be expected to be better than a 30-year-old who's done so since the age of 29 and a half -- but I have no trouble believing someone actually could believe it.
The thing is, I wouldn't want to be the person in the meeting saying 'applicants from disadvantaged groups are in fact less capable at the time of application' regardless of how I follow up on it. I could see myself choosing to say the former, clearly untrue statement instead to avoid the possibility of hostile misinterpretation. But maybe I'm just being cynical and people actually do believe it?
By definition it was an extrajudicial summary execution, as it was a killing that was not sanctioned by the court and he was killed without the benefit of a free and fair trial. ... Government agents killing people in "panicked split-second decisions" does not make it not an execution
This is not in fact how the word 'execution' is used in any other context. Your definition would include killing in justified self defense too. If a man kills a home invader rushing at him with a knife, do you think his defense attorney would call that killing an 'execution?' If you look up how many Germans were executed by the allies during and after WW2, you will get a number in the thousands, not the millions; the allies -- allied soldiers tasked with violence on behalf of the state -- killed millions of German soldiers in the war, but absolutely no one calls those deaths 'executions.'
'Execution' implies deliberation and, most critically, control over the situation. Killing in the course of an altercation can be (and is in this instance, I think!) manslaughter or murder, but it is never an execution. If your definition of 'execution' is co-extensive with 'killing,' why insist on the former? Is it because 'execution' sounds worse because no one else uses your definition?
(ETA: After considering it a little more, I think 'execution' particularly requires that you kill because you believe the victim deserves to die (as a necessary but not sufficient condition). Killing out of confusion or fear of someone's current behavior can't qualify. 'Extrajudicial summary execution' refers to cases like occupying soldiers hanging or shooting civilians on suspicion of sabotage, not those same soldiers firing into a crowd of rioting partisans.)
... does not engender the levels of competency that should/is required by agents of the state. If ICE agents cannot act competently in high stress split second situations then they shouldn't have guns and the power to exercise the state's monopoly on violence.
This sounds good, sure. Have you actually considered the implications? US (non-ICE) police have acted incompetently in high stress split second situations before -- I expect you're familiar with at least a few examples -- so should we abolish the police? US soldiers have absolutely made mistakes like this before; do we need to disband the military? Unfortunately, while 'no lethal mistakes, ever' is a laudable standard, it's one that no group tasked with exercising the state's monopoly on violence has ever met or ever will.
I certainly agree Pretti's shooter, specifically, shouldn't have a gun or the power to exercise the state's monopoly on violence, and in fact should be tried for homicide. The shooting is cause to update in the direction of ICE being incompetent thugs... but update how much?
Out of 50,000(? Organizers claim, anyway) protestors in Minneapolis, ICE has only actually killed two of them. I happen to think that Good's shooter would have been easily acquitted had it gone to trial, but allow that that was murder too: is the failure rate per violent encounter here actually worse than average? I'm not sure, but you haven't even tried to make the argument that it is.
Ah, that is part of what makes it appealing, for sure. But communes have strong Exit rights too, so I don't think it's obviously biasing the comparison. Is the argument that Exit is more important for anarcho-capitalism-lite given reduced Voice? I'm not actually sure Voice is reduced in ancap societies -- you have all the tools of persuasion and politics to get your way and you can pay to get it too, whereas in ancom you only have the former.
This might seem superficially counterintuitive: Sure, in ancom you don't have money, but you also don't have to work for money. Doesn't it cancel out? But in fact theory predicts this result: You are rewarded for your service to the interests of others with influence over others' actions in turn. This has the natural effect of maximizing total utility, as each individual makes money doing what most efficiently enables them to help others and spends money on the help they most want. In other terms, trade is positive sum: both parties are better off for it, both get more value out than they put in. So forbidding (or failing to adequately facilitate) trade reduces total utility.
The communist ethos may be 'from each according to his ability, to each according to his need,' but getting one's needs (and wants) satisfied is the whole incentive for providing one's ability. If you remove that incentive, people just aren't going to try as hard. Auth-left can force them (which is just slavery, and is inefficient and destabilizing in the same ways, but isn't completely unworkable), but lib-left can't, really. So the only way it can ever really work is if people want it to.
It might not be truly autonomous and stateless, but you can join a commune today and get 80% of the way there. Many people have... and almost all of them gave it up when it proved unworkable and dumb, and not because of the compromises they had to make to exist within a capitalist society. I have no issue with voluntary communities living in (just about) any way they please, and I'm happy for the people who can find happiness there... but the evidence is that it just doesn't work for the vast majority of people even after self-selecting for the people who want to make it work the most.
(The community that's similarly 80% of the way to anarcho-capitalism is... pre-Civil War America, I think? Not no public spending or government action, but much, much less. Though it looks so good in comparison I'm afraid I've put my thumb on the scale somehow.)
Just like a gun control advocate can advocate for changing the laws with regards to who can own which gun, he can obviously also advocate for actually forcing the cops to protect people.
I mean, he can. Does he? Could be my own ignorance talking here, but I don't think I've ever heard this point from gun control advocates. It ought to be a lot easier to get passed than gun control, since the committed opposition is... the police union, I guess? Not half the country; you can see the rightists in this thread agree cops should have that duty. And by doing so first they'd make gun control more likely by neutering this argument against it. So where is the advocacy?
I would argue that the IRBO only really emerged during the cold war.
The Atlantic Charter, signed 1941, called for:
[N]o territorial changes that do not accord with the freely expressed wishes of the peoples concerned
(It called for several things that didn't happen, actually, but if we're looking for the emergence of the principle, I think this counts.)
Now, the USSR didn't sign it (not that version, anyway), but America did (before actually entering the war, even). Most of the involved parties decided they didn't really mean it within the next few years (it didn't go over terribly well in Britain's colonies, for one), but that kind of makes my point that such promises were hard to take seriously. The UN Charter, 1945, expressed the same sentiment, but of course was careful to grandfather in the allies' recent conquests, and in fact would compel the signatories to condemn e.g. Finland trying to take back the territory the USSR stole in the unprovoked Winter War.
Contrary to common belief, nukes are not the "I win" button. Japan's war had gone very badly and they were facing an invasion; getting nuked was simply the last straw. "They killed 100k Russians when they nuked Leningrad, better make peace before they kill another 100k of my poor countrymen" would not have persuaded Stalin of all people.
This is true, especially for fission bombs, especially given the very tight production bottlenecks they had at the time. It would still have been a huge advantage -- less so for bombing cities than for discouraging any concentration of force, I would think. MacArthur was general then, too. It'd have certainly been expensive in blood and treasure, but it's hard to imagine it's a fight the US actually loses, provided they had the will to see it through. It certainly didn't get any cheaper for the next 40 years.
(The USSR did ultimately dissolve without a fight, but not before doing an enormous amount of damage around the world. Their efforts to undermine Western dominance were quite successful in tying anti-imperialism to socialism in the public imaginations of much of Africa and South America, arguably immiserating those nations to this day and for who knows how long to come. You could fairly blame that on the Western Imperialism too, but the British and French colonial empires would have dissolved regardless; there's no USSR making Britain pay Mauritius a fortune to rent islands Mauritius never actually occupied today.)
If the US could have defeated the USSR by prolonging WW2 for a year or so, I think they would have done so, not for the right of self-determination of anyone but because any fool could see that the USSR would become their rival superpower. But they had just spent a lot of lives and productivity on winning a big war. Telling the Americans "change of plans, you already freed France from the Nazis, now you get to free Poland from the Soviets" would not have been popular, especially if you consider that plenty of intellectuals were leaning communist.
My point is not that it would have been easy or even smart (though I not-very-confidently believe it would have been), just that it casts a lot of doubt as to just how committed the US (or anyone else) really was to the principle.
Presumably after WW2, what preserved the borders in Western Europe (say between Belgium and the Netherlands) was less a deep respect of the IRBO learned overnight and more the fact that everyone knew that if they tried to make war the US would come down on them like a million pound hammer.
What about Eastern Europe? The notion that the US was truly interested in the principle of national self determination is difficult to credit in light of the enormous support they offered the Soviets in conquering half the continent. America was the world's sole nuclear power for years after the war; didn't do Poland or Czechoslovakia or Karelia any good. They said they'd keep the borders right where they were... while simultaneously allowing Stalin to redraw them as he pleased in the areas he controlled. Why would anyone take those commitments seriously?
Sure, I don't disagree with anything here. Or really anything in the OP; just adding my two cents and offering a couple tips for making productive use of LLMs.
For the example of fixing some OS issue, imagine I didn't have really any technical knowledge of how things work (say, I don't really even know what the registry is unless a tech/LLM tells me something about it). Maybe I'd take my computer to a human tech. Could even be a corporate IT guy. Perhaps, knowing that I don't have a clue, I just give it to him. "Here's my problem; please fix it Ralph Rufus."
Who knows what he'll get up to? What stuff he'll mess with along the way. Things he'll try just because, and then maybe leave it in a changed state, even though it didn't progress toward a solution to the actual problem. This cruft can build up. After years of having this corporate IT guy and that corporate IT guy and the other corporate IT guy just doing who knows what, maybe at some point, things get bizarre enough that the next one says, "Dude, stuff is wild here; we probably should just wipe it and clean install."
I think there are two different use cases here it makes sense to distinguish. This is an example of allowing the LLM to act 'directly' (not actually directly, there's a human in the loop, but it's giving you commands to execute, not writing a script) on a complex, persistent system. Which, yeah, that can absolutely build up cruft that's difficult or impossible to clear away without starting fresh. But even the most careless vibe coding has a serious advantage, in that the actual operations are recorded and auditable. If you put in a tiny bit of effort and use version control, you (or someone else, or another LLM) can even audit how the code changed over time. And, better, you can separate out tasks into different, independently tested scripts to be sure there isn't some complicated interdependence issue. It's the difference between manually tinkering with a machine and writing a dockerfile. It's still certainly possible to build up technical debt to the point you're better off starting fresh, but it's a lot harder. At least for small personal projects, which I hope are most of the things people do make this way.
Careless vibe coding carries real risks; I haven't caught a model trying to do anything dangerous (as opposed to dumb), but I believe the people who say they have. I'd be very leery of running code I can't understand at least well enough to tell if it's making web calls or deleting things it shouldn't be. (But I'd say the same for StackOverflow.) I double check the library names. I wouldn't let it touch anything security-critical, or any files I care about and don't have backed up. I haven't pushed any generated code to a public repo, but if I did, I'd be very careful to ensure there aren't any api keys or passwords or other secrets anywhere in history.
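That kind of check doesn't have to be entirely manual. Here's a minimal sketch of a pattern screen you could run over generated code before executing it -- the categories and regexes are my own illustrative picks, not an exhaustive or reliable security tool, just a way to flag lines worth a human look:

```python
import re

# Crude patterns worth a second look in generated code. This is a
# heuristic screen, not a security tool; the categories and regexes
# here are illustrative assumptions, not an exhaustive list.
SUSPICIOUS = {
    "network call": r"\b(requests|urllib|httpx|socket)\b",
    "file deletion": r"\b(os\.remove|os\.unlink|shutil\.rmtree)\b",
    "shell execution": r"\b(subprocess|os\.system)\b",
    "possible secret": r"(api[_-]?key|password|token)\s*=",
}

def flag_risky_lines(source: str) -> list[tuple[int, str]]:
    """Return (line_number, category) pairs for lines matching any pattern."""
    hits = []
    for n, line in enumerate(source.splitlines(), start=1):
        for category, pattern in SUSPICIOUS.items():
            if re.search(pattern, line, flags=re.IGNORECASE):
                hits.append((n, category))
    return hits

generated = "import requests\nAPI_KEY = 'abc123'\nprint('hello')\n"
print(flag_risky_lines(generated))  # → [(1, 'network call'), (2, 'possible secret')]
```

A hit doesn't mean the code is malicious, of course -- it just tells you where to read carefully before running anything.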
It is... concerning that the same tools are available to people less cautious and knowledgeable than me, and I'm certain that will lead to problems. (On the other hand, I'm sure there are people who'd put me into that group.) Enough to make the whole endeavor net-negative? Hard to say, but I'm pretty sure the answer is 'no.' At least, I think someone smart enough to get Antigravity or Claude Code or whatever running ought to be smart enough to understand the big dangers and a few basic principles of good, maintainable code with a short crash course -- which, actually, the LLM is very capable of providing, even if it can't (perfectly) reliably avoid those pitfalls.
I think there are three things going on here, all with the same (somewhat inconvenient) solution:
- LLMs have a tendency to get stuck on certain ideas, even if they acknowledge they're wrong. Once something is in context two or three times, it can be very hard to get it to let it go.
- LLMs advertise huge context limits, and, technically speaking, you can run Gemini Pro 3 on a million tokens... but you definitely shouldn't. Models get way dumber at high context, and noticeably dumber even at relatively modest context (anecdotally, it's significant even at 32k).
- Models tend to get dumber/less obedient the longer a conversation goes, even if the total context is still short*
The answer to all three problems is just to start a new session frequently and copy only the relevant and correct details into the new chat. It can be a pain if you're in the middle of something, but it gives the best results.
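Mechanically, it amounts to something like this -- a minimal sketch of the "start fresh, carry forward only what matters" approach. The role/content dict format mirrors common chat APIs but is an assumption on my part, not any particular vendor's schema:

```python
# Rebuild a short context for a new session from a pinned summary of
# confirmed details plus only the last few turns, instead of resending
# the whole transcript.
def fresh_context(system_prompt, pinned_facts, history, keep_last=2):
    """Build a compact message list for a new session."""
    messages = [{"role": "system", "content": system_prompt}]
    if pinned_facts:
        # Carry forward only the details already confirmed correct.
        summary = "Confirmed so far:\n- " + "\n- ".join(pinned_facts)
        messages.append({"role": "user", "content": summary})
    # Keep just the tail of the old conversation, not all of it.
    messages.extend(history[-keep_last:])
    return messages

history = [
    {"role": "user", "content": "My script crashes on startup."},
    {"role": "assistant", "content": "Wrong idea: reinstall the OS."},
    {"role": "user", "content": "No -- it's a missing dependency."},
    {"role": "assistant", "content": "Then pin the package version."},
]
ctx = fresh_context("You are a debugging assistant.",
                    ["Crash is caused by a missing dependency"],
                    history, keep_last=2)
print(len(ctx))  # 2 framing messages + last 2 turns = 4
```

The key point is that the wrong idea from early in the conversation never makes it into the new context, so the model can't get stuck on it again.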
This is... somewhat redolent of good coding practices, I think; encapsulation and abstraction, at least. If you break a problem into smaller parts and keep the boundaries between those parts strict, it's easier for both humans and LLMs to conceptualize the totality of what they need at any given time. Ideally, structuring a project this way will not just result in better LLM performance but in more maintainable code too.
On the other side: having an LLM write code at all (rather than, say, directly making system calls) is already a big step towards legibility (and thus maintainability). Such a system is obviously insane, but it's perfectly possible for your program to be a particular internal state of an LLM. For that matter, it's perfectly possible (and indeed ubiquitous) for your 'program' to be the internal state of a human mind. By analogy, 'human vibe coding' is telling the human to design a set of legible policies rather than using their own judgment directly, which does actually have the expected advantages of consistency, comprehensibility, and interoperability.
I guess the takeaway is that we should look to normal management strategy for clues on how to manage LLMs, which might be obvious.
* This at least I think is mainly a training issue: most RLHF/DPO is done on single-turn responses.
Coincidentally, you can relate this to how Locke defines a ruler tyrannical: he who rules not by law but uses power "for his own, private, separate advantage" and "makes not the law, but his will, the rule."
I haven't read Locke, so apologies if I'm misunderstanding, but this seems pretty obviously false to me? Or at least a very non-standard definition of tyranny. If the USSR under Stalin were less corrupt and arbitrary, would it have been less tyrannical? It would have been more thorough in its oppression. In 1984, no one actually benefits from the system: the more power you have in the party, the less freedom it permits you. It's pure Molochianism: the party accumulates all the power it can and crushes all opposition not because anyone actually wants that, but because the party that prioritizes winning over all else is the one that wins.
I suppose such a system would be less tyrannical in the sense of having less of a tyrant? Not necessarily: if the law permits absolute rule by an individual, which many systems of law through history actually do, the tyrant need not override the law to exercise power capriciously. And again, in such a case, I'd see an absolute ruler who uses his position to enrich himself as less tyrannical than one that uses it in support of sincere authoritarian ideology. Hitler's corruption must have had a (very) small but real impact on the efficiency of the Nazi state, and a less efficient Nazi state is less able to pursue the Nazis' tyrannical aims.
Huh, why offer lower interchange fees for category 3? After all, the businesses they're charging don't care about how the customer is using their card -- if anything I might expect group 3 to be more free with their money, justifying a higher fee -- and they're not going to offer a discount for using a lower-fee card (or at least I've never seen that). I suppose there might be marginal businesses that'd refuse higher fee cards? But at most I've seen businesses refusing whole brands, not the products within those brands that are meant for group 4 instead of group 3.
You say this like it's shameful. Protecting citizens from banditry is among the most noble duties of a nation. It's not an 'open secret,' or at least it shouldn't be -- it's a far more honorable casus belli than ideology or great power politics or rumored possession of WMDs, to be sure.
Countries that invoke might-makes-right to rob foreigners have no room to complain when those foreigners' nations invoke might-makes-right to seize restitution and inflict punishment. In fact, the world would be far more prosperous if those norms were consistently and strictly enforced. Not that expropriation is ever a good idea -- it's just some combination of stupidity (ideologically motivated or otherwise), short-sightedness, and corruption, as Venezuela's poverty demonstrates: they own their oil now... but they can't refine it, and no one who can is dumb enough to invest in the nation.
No, the whole problem is that the US didn't do enough to punish the theft. Individual criminals tend to be dumb, high time preference, and low executive function, and that seems like an apt analogy to me. Studies show such people respond best to rapid, consistent and highly visible punishment. Letting things drag out for decades and using such indirect methods as funding protestors sends the wrong message and only multiplies the misery of the population.
I also can't imagine stepping in front of a running car
... What do you do at crosswalks?
Walking in front of cars that, while running, are currently stopped isn't actually dangerous in most situations, and most(?) everyone does it all the time. It was a mistake in this instance, obviously, but I hardly think it proves the officer doesn't believe getting run over is dangerous(? Is that really your argument?), just that he didn't think she'd suddenly accelerate.
It's obvious why officers like it as a tactic. Most people are probably not willing to make contact with a person with their vehicle to flee a crime, so it effectively prevents the obvious way someone might escape. If they are wrong about that individual's willingness it lets them escalate to shooting.
I continue to have mixed feelings about it. I don't like it as a means of manufacturing an excuse to use deadly force where you wouldn't normally be able to but it is not clear to me what reform of it as a tactic would look like.
This logic strikes me as dubious. Are cops (or ICE agents) really so dedicated that they're eager to put their lives on the line -- and the danger of standing in front of a car that might abruptly accelerate is very real -- for marginally better clearance rates? Isn't the standard leftist line that cops are so quick to escalate to lethal force because they're cowards unwilling to accept the risks associated with de-escalation? I'm not sure that's true, but it's at least not obviously contrary to their individual interests.
(I could envision a version of this scheme -- leaving out an unloaded gun in easy reach of a suspect, maybe -- where they could try to manufacture an excuse to escalate to lethal force without any substantial personal risk, but this certainly wasn't that. If we're arguing about tire angle, then the officer's life was in the suspect's hands.)
As to why the cop did step in front of the car? I think incompetence is more likely than suicidal malice; the latter exists, but the former is vastly more common.
Actually, modern financial instruments enable you to profit off low volatility too; look into iron condors and calendar spreads. I imagine this is one of the more common insider trading opportunities, in fact: if you know beforehand that this quarter's earnings report looks a lot like last quarter's (or the same quarter from last year, depending on the business), for instance, which it often will.
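To make the low-volatility bet concrete, here's a toy payoff-at-expiry calculation for a short iron condor. The strikes and credit are made-up numbers, just to show that the position profits exactly when the underlying stays inside the short strikes:

```python
# P&L per share at expiry for a short iron condor: long a put at the
# lowest strike, short a put and a call closer to the money, long a
# call at the highest strike, collecting a net credit up front.
def iron_condor_pnl(spot, long_put, short_put, short_call, long_call, credit):
    pnl = credit
    pnl += max(long_put - spot, 0.0)    # long put we own
    pnl -= max(short_put - spot, 0.0)   # short put we sold
    pnl -= max(spot - short_call, 0.0)  # short call we sold
    pnl += max(spot - long_call, 0.0)   # long call we own
    return pnl

# Strikes 90/95/105/110, net credit of 2.00 collected up front.
for spot in (85, 95, 100, 105, 115):
    print(spot, iron_condor_pnl(spot, 90, 95, 105, 110, 2.0))
```

Anywhere between 95 and 105 you keep the full 2.00 credit; outside the wings the loss caps at the spread width minus the credit (3.00 here). If nothing surprising happens, the seller of volatility gets paid.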
What you really profit from is not volatility but surprise. If you know better than the market, you should be able to find a way to monetize that knowledge. Of course, your profit depends on the degree of surprise, so it's hard to make much money when the market is already close to right.
Why do the Jews never look at George Bailey or Harry Potter or Luke Skywalker and say "That's me! Yes, he may not be canonically Jewish, but he's obviously just a stand-in for a Jew in this fictional setting."
They do, however, say that about Superman, who's the furthest thing from those stereotypes.
I don't think the fictional characters you identify with really say that much about you. At most it says how you'd like to be perceived, or how you fear others do perceive you. Wasn't there a thread a week or two ago about how real identity comes from what you do, not how you feel?
Not sure I agree. The mass rapes and executions (mostly) stopped after the war, but the purges and repression only got worse until Stalin's death. And these aren't small numbers; Stalin took millions of people as political prisoners. In fact, it's argued (not uncontroversially) that he was gearing up for his own genocide of the Jews shortly before his death, the fabricated Doctors' Plot being the opening move. (He'd already launched one major pogrom, but this was supposed to be much bigger.) The Soviet Union wasn't stopped, but Stalin personally was, and his successors happened to be more moderate. Who can say if the same wouldn't have happened to the Nazis after Hitler's death?
(Actually, a very similar story played out in China: Mao remained every bit the brutal dictator until his death, orchestrating the Cultural Revolution in his 70s, and it was only after his death that Deng managed to salvage a workable system from his insanity. It's an interesting thought, given the insistence down thread that killing individual leaders never works (vis a vis Iran). Both died peacefully, I suppose, so perhaps not that close an analogy, and the revolutionary government has already survived one transfer of power without moderating.)