4bpp

このMOLOCHだ! ("This is MOLOCH!")

2 followers   follows 2 users   joined 2022 September 05 01:50:31 UTC

<3

User ID: 355


A late tangent, but I was warming my hands next to last week's heated exchange between @DaseindustriesLtd and @gemmaem and one thing that popped out at me was @f3zinker's chart representing women's messaging behaviour towards men in different positions of the attractiveness distribution, depending on their own. I've seen variants of this data - introduced here with the unambiguous line "Women just about exercise dictatorial demand." - on the internet for a long time (since the days of the OkCupid blog), and it always struck me as strange, insofar as it did not seem to mesh at all with the reality I perceive around me. The points of disagreement are numerous:

  • I believe I'm personally around the 60〜70% mark of the male attractiveness distribution, and have always been extremely passive about dating. Nevertheless I've been approached by women in the 50〜90 range of their distribution (as perceived by me), and had those approaches convert into relationships (some of them very long-term) in the 60〜80 band. This would put me smack dab in a pink area in that chart, repeatedly. I do not get the sense that any of those relationships were unequal in terms of effort or resources invested.

  • People around me, including unattractive ones, of either gender match up all the time, and there is no obvious bias in terms of which side initiates. It's not that unattractive and involuntarily celibate men don't exist (especially from the 70th percentile downwards), but the correlation between involuntary celibacy and attractiveness actually seems quite low.

  • My entire academic and academia-adjacent blob has very low attachment to existing social conventions around dating. I know several people who are poly, and the most disapproval they meet is being the butt of the occasional joke. Contrary to the stereotype, the ones I know do not strike me as unusually unattractive. Yet, the most attractive poly guys are not pulling massive harems, and in fact I've observed the most attractive poly girls reject repeated advances from the most attractive poly guys (in favour of less attractive ones).

So what's going on here? After reflecting on it for a bit, it seems to me that there's actually an obvious answer: the very framing of the question being charted ("do you 'like', with the implication of interest in a sexual relationship, this person, based on their picture?") only captures meaningful data when asked of men, because men are the only ones for whom looks are a dominant term in the value function that estimates whether they want a sexual relationship with someone. Rewording this question slightly in a way that I don't think actually changes the meaning to "Given that this person looks like that, would you provisionally agree to having sex with them?", what's actually going on has an alternative explanation that I think rings more true than "women have unrealistic standards": if looks are only a small term in your value function, you don't know enough about the value of the other terms, and the median answer to "would you provisionally agree to having sex" is no, then the looks have to be exceptionally good to shift the answer to "yes".

Importantly, this model does not require the original preference against sex with an unspecified man to be unusually strong: for any given expected utility -epsilon that women assign to having sex with a completely random man, no matter how close to 0, there exists a delta such that if looks are only at most a delta-fraction of women's value function for sex partners, then a random man would have to be top 10% in terms of looks for the expected utility for women of having sex with him to turn positive.
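
To make that concrete, here is a toy formalisation of the epsilon/delta claim (the notation, the uniform-percentile convention, and the assumption that looks are uncorrelated with the other terms are all mine, not anything drawn from the chart):

```latex
% Toy sketch of the claim above; notation is mine, not the chart's.
% U(m): utility of sex with man m; \ell(m): his looks percentile, uniform on [0,1];
% R(m): the sum of all non-looks terms, assumed uncorrelated with \ell;
% w <= \delta: the weight on looks; the baseline expectation over a random man is -\varepsilon < 0.
\begin{align*}
  U(m) &= w\,\ell(m) + R(m), \qquad \mathbb{E}[U(m)] = -\varepsilon,\\
  \mathbb{E}[U(m) \mid \ell(m) = \ell] &= -\varepsilon + w\left(\ell - \tfrac{1}{2}\right) > 0
    \;\Longleftrightarrow\; \ell > \tfrac{1}{2} + \frac{\varepsilon}{w}.
\end{align*}
% Taking \delta = 2.5\varepsilon, any weight w <= \delta pushes the threshold past \ell = 0.9:
% only the top 10% of men by looks make the expected utility positive, no matter how
% small the baseline reluctance \varepsilon is (any smaller \delta only raises the bar further).
```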

As an intuition pump, imagine we created the same chart for men, using some quality that men don't value particularly highly (but perhaps women do), and a base distribution of women that you(r people) are just slightly skeptical of as sex partners (your pick, based on preference: Some ethnicity you don't like? BMI >25? Cat owners? Age >40?). Take a dating app where you can't post your picture, but instead publicise your monthly income, and also all women are at least slightly chubby. Would you be surprised to find a chart like the above, but for men towards women, where men in the top 60% of earners are only willing to "like" the top 10% of women by earnings? Would this reflect men exercising "dictatorial demand"?

Does the idea that disarmament, mutually agreed restraint and maintenance of norms are positive-sum not pop up in those discussions at all? At the very least, it doesn't seem like anyone (in your story, or what I see from Russian telegrammers or otherwise) is trying to seriously expand the game tree one step further and reason about how the balance changes if the other side also starts unabashedly executing POWs or whatever other ways of killing more $enemy are proposed down the line. I thought a standard European history education should have put some emphasis on how the various conventions of warfare emerged from Europe's historical experience in their absence (even if you want to have the edgy 14 year old's cynicism and say that it's just that the elites were spooked that the normlessness may come back to haunt them), but perhaps the connection from "Tired Professional Gentleman-Soldiers in colourful uniforms none of whom really wanted to be there anyway" to "the loathsome enemy right now barbarously rejecting the obvious truth of our narrative" is too much to draw.

Regarding AI alignment -

I'm aware of and share @DaseindustriesLtd's aesthetic objection that the AI safety movement is itself not terribly aligned with my values and that the payoff expectation of letting them perform their "pivotal act" that involves deputy godhood for themselves does not look so attractive from the outside, but the overall Pascal's Mugging performed by Yudkowsky, TheZvi etc. as linked downthread really does seem fairly persuasive as long as you accept the assumptions that they make. With all that being said, the weakest link of their narrative, to me, has always actually been in a different part than either the utility of their proposed eschaton or the probability that an AGI becomes Clippy, and I've seen very little discussion of the part that bothers me, though I may not have looked well enough.

Specifically, it seems to me that everyone in the field accepts as gospel the assumption that AGI takeoff would (1) be very fast (minimal time from (1+ε)× human capability to C× human capability for some C on the order of theoretical upper bounds) and (2) irreversible (P(the most intelligent agent on Earth will be an AGI n units of time in the future | the most intelligent agent on Earth is an AGI now) ~= 1). I've never seen the argument for either of these two made in any other way than repetition and a sort of obnoxious insinuation that if you don't see them as self-evident you must be kind of dull. Yet, I remain far from convinced of either (though, to be clear, it's not like I'm convinced of their negations either).

Regarding (1), the first piece of natural counterevidence to me is the existence of natural human variation in intelligence. I'm sure you don't need me to sketch in detail an explanation of why the superintelligent-relative-to-baseline Ashkenazim, or East Asians, or John von Neumann himself didn't undergo a personal intelligence explosion, but whence the certainty that this explanation won't in part or full also be relevant for superintelligent AGIs we construct? Sure, there is a certain argument that computer programs are easier to reproduce, modify and iterate upon than wetware, but this advantage is surely not infinitely large, and we do not even have the understanding to quantify this advantage in natural units. "Improving a silicon-based AI is easier than humans, therefore assume it will self-improve about instantaneously even though humans didn't" is extremely facile. It took humans like 10k years of urbanised society to get to the point where building something superior to humans at general reasoning seems within grasp. Even if that next thing is much better than us, how do we know if moving another step beyond that will take 5k, 1k, 100, 10 or 1 year, or minutes? The superhuman AIs we build may well come with their own set of architectural constraints that force them into a hard-to-leave local minimum, too. If the Infante Eschaton is actually a transformer talking to itself, how do we know it won't be forever tied down by an unfortunately utterly insurmountable tendency to exhibit tics in response to Tumblr memes in its token stream that we accidentally built into it, or a hidden high-order term in the cost/performance function for the entire transformer architecture and anything like it, for a sweet 100 years where we get AI Jeeves but not much more?

Secondly, I'm actually very partial to the interpretation that we have already built "superhuman AGI", in the shape of corporations. I realise this sounds like a trite anticapitalist trope, but being put on a bingo board is not a refutation. It may seem like an edge case given the queer computational substrate, but at the same time I'm struggling to find a good definition of superhuman AGI that naturally does not cover them. They are markedly non-human, have their own value function that their computational substrate is compelled to optimise for (fiduciary duty), and exhibit capacities in excess of any human (which is what makes them so useful). Put differently, if an AI built by Google on GPUs does ascend to Yudkowskian godhood, in the process rebuilding itself on nanomachines and then on computronium, what's the reason for the alien historian looking upon the simulation from the outside to place the starting point of "the singularity" specifically at the moment that Google launched the GPU version of the AI to further Google's goals, as opposed to when the GPU AI launched the nanomachine AI in furtherance of its own goals, or when humans launched the human-workers version of Google to further their human goals? Of all these points, the last one seems to be the most special one to me, because it marks the beginning of the chain where intelligent agents deliberately construct more intelligent agents in furtherance of their goals. However, if the descent towards the singularity has already started, so far it's been taking its sweet time. Why do we expect a crazy acceleration at the next step, apart from the ancient human tendency to believe ourselves to be living in the most special of times?

Regarding (2), even if $sv_business or $three_letter_agency builds a superhuman AI that is rapidly going critical, what's to say this won't be spotted and quickly corroborated by an assortment of Russian and/or Chinese spies, and those governments don't have some protocol in place that will result in them preemptively unloading their nuclear arsenal on every industrial center in the US? If the nukes land, the reversal criterion will probably be satisfied, and it's likely enough that the AI will be large enough and depend on sufficiently special hardware that it can't just quickly evacuate itself to AWS Antarctica. At that point, the AI may already be significantly smarter than humans, without having the capability to resist. Certainly the Yudkowsky scenario of bribing people into synthesising the appropriate nanomachine peptides can't be executed on 30 minutes' notice, and I doubt even a room full of uber-von Neumanns on amphetamines (especially ones bound to the wheelchair of specialty hardware and reliable electricity supply) could contrive a way to save itself from 50 oncoming nukes in that timespan. Of course this particular class of scenario may have very low probability, but I do not think that that probability is 0; and the more slowness and perhaps also fragility of early superhuman AIs we are willing to concede per point (1), the more opportunities for individually low-probability reversals like this arise.

All in all, I'm left with a far lower subjective belief that the LW-canon AGI apocalypse will happen as described than Yudkowsky's near-certainty that seems to be offset only by black swan events before the silicon AGI comes into being. I'm gravitating towards putting something like a 20% probability on it, without being at all confident in my napkinless mental Bayesianism, which is of course still very high for x-risk but makes the proposed "grow the probability of totalitarian EA machine god" countermeasure look much less attractive. It would be interesting to see if something along the lines of my thoughts above has already been argued against in the community, or if there is some qualitative (because I consider the quantitative aspect to be a bit hopeless) flaw in my lines of reasoning that stands out to the Motte.

Like all appeals to "Why do you even care about this? It's so unimportant". The response is obvious. If it's not important and we care more than you do then let us have our way. If you think it is actually important enough to fight over then drop this shaming act.

There is actually an asymmetry here that invalidates this argument, because the pro-trans contingent and the anti-trans one claim to be defending different terminal values rather than arguing in opposite directions over the same one. The pro-trans camp will say that trans representation in women's sports is important because [grand matters of fairness and justice in our society]; the anti-trans camp, on the other hand, generally says that no trans representation is important because [small subset of women can't win prizes at little league competition anymore]. There's nothing particularly inconsistent about saying that caring a great deal about the former is natural and caring a great deal about the latter is suspect. Now, from our vantage point it is of course clear that the anti-trans camp actually also is in it for grand matters of how our society is structured, rather than a weird dogged obsession with giving cis women a small chance to win that cup; but game theory forces them to dissimulate and assert even when pressed that they are really in it for [giving more nice things to women] (a societally comparatively accepted goal) rather than [giving fewer nice things to mtf trans] (a goal that is easily painted as vindictive or outright Voldemortian).

The easy exercise is to try and solve exam questions from n years ago. Most of the time, in most subjects, people just walk away shocked at how much harder they were.

I've TAed for the same CS courses at a major US university for many years in a row, and could watch the standards being lowered in real time. Yet, in one of my last (COVID) years, we still had a group of students with highly polished progressive vocabulary start a petition about how the difficulty level of our exams is exacerbating a stressful situation and causing particular harm to underprivileged students and we therefore must discontinue our use of plagiarism detection software. (The harvest the software had produced up to that point was bountiful.) Several others messaged us to express their support, but only anonymously and in private. In the end, we survived the semester only by throwing them many bones and basically not giving any grades below an A-.

The quote you produced is disinformation all right as far as the "it's a QAnon reference" framing goes, but referring to people running "drag kids" events as "groomers" does seem like a serious accusation that deserves a bit more justification than the pointing and invoking of disgust reflexes that it currently amounts to. The standard interpretation of "grooming", as I understand it, is gradual manipulation of the underage and otherwise mentally inadequate with the purpose of normalising the idea that they will be sexually abused or exploited by their adult handlers. I doubt that most people running or supporting those events are doing so with the intention of themselves entering sexual relations with the kids that attend them (and if "encouraging the target to enter sexual relations I want to see more of with someone else" is sufficient to meet the definition of grooming, then it seems that a lot of things in our culture since times immemorial would count!), and if their right-wing detractors believe otherwise, the burden of proof surely should be on them. If the detractors do believe that all these progressives are actually in it because they hope to have sex with the ten year olds that they are teaching about drag queens and non-binary gender, protestations to the contrary and the seemingly low rate of such sex actually happening notwithstanding, then yes, they are in fact entertaining a conspiracy theory (as there would need to be a conspiracy to conceal widespread pedophilic tendencies and/or actions).

(edit: Per something I found out downthread, there is in fact a legal definition of grooming in the US, which markedly does not cover "introducing children to icky and widely taken to be age-inappropriate sexual activity" on its own)

It reminds me of when "cultural Marxism" became an "anti-Semitic conspiracy theory"

Seems like a good riddance to me, because the term was a massive footbullet. The term "cultural marxists" will be resolved correctly by (1) people on your side already and (2) actual cultural marxists, who are in the know about the academic definition drift of "Marxism"; to everyone else, and in particular garden-variety classical liberals who really ought to have been enlisted in the anti-woke coalition much earlier, it just looks like holding up a sign like "actually the main issue I have with my outgroup is that they are dirty commies who want to put limitations on megacorps".

"Groomer" implies that the person is doing it for base selfish motivation (of future sexual gratification), when the people you call that believe they are doing it for the sake of the children and society at large. This is bound to be insulting to activists who come from a (sub)culture that denigrates selfishness and have built their internal narrative of purpose around doing what they are doing.

Any thoughts on what stock one ought to buy right now, as someone whose gradual getting spooked by AI advances has finally passed a critical threshold, in order to be in a good position in the specific subspace of possible futures where most humans have become economically worthless but the current system of contracts and titles remains intact?

Specifically, the "the vast majority of the economy is one or a handful of AI conglomerates, plus whatever industry is required to keep them running; whoever has a share may be less screwed" scenario. I can just about think of Google (for DeepMind) and Microsoft (who seem to be OpenAI's closest openly traded partner), and maybe Nvidia if one expects their GPUs to continue being unrivaled as hardware platforms.

Critically however, Jewish lawyers never appear on the anti-civil rights side of a case.

I think this hints at a core intuitive objection I have to the narrative you are seeking to weave here. What do we know about the non-Jewish lawyers on the pro-civil rights side of the "standard civil rights cases" you are talking about here? I would wager that some very clear pattern would emerge, which would correspond to a picture that is more along the lines of there being two broad coalitions fighting (urban vs. rural? Moldbug's Brahmins vs Optimates?), of which the Jews overwhelmingly side with one. That picture, though, no longer provides the categorical support for the "civil rights is a Jewish plan against the Gentiles" picture you are seeking to paint (though of course it is not inconsistent with it; a scheme can of course include dupes and Quislings). I would, for example, guess that to the extent non-white lawyers were involved in civil rights cases, they were also all on the pro-civil rights side; yet, most WNs tend to not ascribe enough agency to them to call civil rights a black/brown/yellow/red plot.

All that being said, though, even if your thesis is true, so what? If the civil rights movement is indeed a destructive plot by triple-parens them, I can't get myself to think this is particularly immoral, given that they have a pretty solid case for retaliation/self-defense in destroying whatever it destroys. I also don't think I can oppose it based on self-interest, because I think so far I've been a net beneficiary even taking into account all of its failings and wrong turns and local negatives.

I think you are missing the linchpin of the worldview, which is an axiomatic assumption that persistent group differences in outcomes can't be just, natural or accidental. The fundamental equality of groups (rather than individuals) is as close to a central dogma of faith as you can get for the dominant secular religion, and everything you observe follows quite easily from trying to square this belief with observed reality. Do you have a better explanation for US statistics that does not violate this belief than that somehow, despite superficial appearances, pro-white bias must have found a way?

(Regarding the bafflement, surely smart and rational people being unwavering in a religious belief should not be surprising, given humanity's track record.)

What exactly do you consider an "HBD aware" set of policies? If you are going to attack a strawman, at least say what that strawman is. As a sort of HBDer (I never particularly liked the ring of the term, but please tell me what I am allowed to call the position that a lot of valued traits including in particular intelligence are heritable and different ethnic groups have different averages in them, without being lumped in with people who want to advocate for spoils for, or collusion on behalf of, their own ethnic group), I don't recall ever arguing or wishing for anything other than colorblind and meritocratic policies, and the posts you regularly make seemingly just to try and remind people to associate the former with the latter are really rather tiresome. I'm struggling to understand why you are doing this - are you trying to troll us anti-racial-spoils hereditarians into surrender or meltdown because you think we're legitimising actual racists? If so, why even bother with the complete political non-force that are card-carrying racists? Is it because you think that they are unfairly associated with your political beliefs?

I think a fundamental question that needs to be addressed is what exactly distinguishes a weakman from the enemy bailey - and, I guess, whether it is good (for the discourse) in principle to contest the enemy bailey at all, or if gentlemen should always go straight for the motte (as the choice of our forum's name may seem to suggest).

So what exactly is the problem here? He says that he is motivated to engage socially by opportunities to get laid (which I suspect is true for many people, even as it is generally considered very antisocial to admit), and argues that a community is strengthened by providing the opportunity (which is hardly a new or outlandish viewpoint; trads say basically as much about trad church-centered communities too). This does not imply that it is the point; both positions are consistent with the point of EA, and even most people in it, being exactly what it is claimed to be.

frothing defendants

What does the "frothing" add here, apart from conveying your own anger (which you could have done in more detached terms) and aggrieving any reader who might imagine themselves as being targeted by the description?

I'm at best neutral towards poly (and manifestly not pro-A, E or otherwise; I find globally-oriented altruism to be a deleterious meme that breeds moral imperialism and personal misery), but seeing them come under this sort of ostentatious attack just makes me want to make a show of donating some money, both in order to defend the ability of these people to live under a different set of norms and to penalize the attempt to shame them for it with vague "can you believe this?" pointing.

To give an overview of what I believe is a reasonable bounded-rationality basis to dismiss this objection:

  • I am not equipped to evaluate the claims in Alexandros's post in detail without significant effort and time investment (despite being a working academic in a quantitative field).

  • I'm not particularly worried about COVID, and the societal excesses of the response seem to have already died down, so I personally don't see much value in learning about a surprising therapy for it. It seems unlikely to me that even if something like the contents of this post became widely accepted as truth, the societal response next time something COVID-shaped happens would be much better.

  • Superficially, it seems there is no particular reason why something like Ivermectin (an antiparasitic that apparently works by disrupting the metabolism of fairly complex multicellular parasites) would work against COVID (a virus). I have a strong prior on most medicines claimed to have a minor beneficial effect on popular therapeutic targets actually being completely ineffectual (as this has been my experience).

  • On the other hand, the "parasite load" story seems superficially plausible.

  • Due to the culture-war dimension of Ivermectin, whose efficacy the red tribe in the US has entangled its social status with (no point in recounting the way this happened here), there is an obvious motivation for members of that tribe to produce compelling-looking arguments for its efficacy. Since Alexandros posts around this community, he seems a priori likely to harbour Red sympathies.

  • Moreover, there is a "contrarian" tribe that is motivated by taking down the rationality-orthogonal "trust the science" wing of the blue tribe, and therefore would also derive utility from successfully Eulering in favour of Ivermectin. Many people seem to talk about the abrasiveness of Alexandros's tone. This increases the probability that he's Red or Contrarian and would therefore have the motive to come to his conclusion.

In short, a situation that seems fairly symmetrical to "read this long and extremely compelling essay by a Harvard academic who is also a Twitter superstar using Science and Logic to prove that Blank Slatism is true". If you had unlimited time and resources or a particularly high stake in finding out whether desirable qualities of humans are genetic, sure, by all means you ought to read it and analyse the argument. For most everyone else, it would be more rational to ignore the essay, leave your prior largely unshifted and spend the time it would take to read on something with higher expected utility, like planning tomorrow's healthy breakfast or getting on top of your todo list.

Things that could convince me to take the essay more seriously:

  • Establish that the author does not stand to benefit from Ivermectin working, e.g. has impeccable blue tribe credentials.

  • Establish that rehabilitating Ivermectin would benefit me personally a great deal.

  • Propose a plausible mechanism by which Ivermectin (specifically!) might work against COVID. Some general handwaving like "it modulates the way the immune system operates" won't work; lots of drugs do that, so I don't see why specifically the one that the Blues are raging against and the Reds are swearing will prove once and for all they should actually be in charge should be the one that happens to modulate it just right.

  • Relatedly, but harder, shift my prior that medicines purporting to do anything more complex than target one particular well-understood metabolic pathway don't actually work.

How big do you figure was the movement at the time? If the "LGBTQI2 movement" went from being 1% of society of whom 50% were pederasts to being 50% of society of whom 1% are pederasts - as one may expect to be the case if the movement could be modelled as providing a home for all that are sufficiently far from the conservative ideal of sexual orientation, with the distance threshold steadily going down - then this simultaneously calls into question both the "wokes are crypto-pederasts waiting for their time to strike" narrative many right-wingers seem to want to get out of this historical observation, and whether we can generalise to assume that the movement will step back from another putative overreach, given that there is now much less room for further growth and hence dilution.
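
To spell out the arithmetic in that hypothetical (the percentages are the illustrative ones from my own example above, not measurements of anything):

```python
# Illustrative numbers only, taken from the hypothetical above: the movement grows by
# relaxing its "distance from the conservative ideal" threshold, so a fixed absolute
# bloc of pederasts becomes a vanishing share of an ever larger membership.
society = 1_000_000

early_share, early_rate = 0.01, 0.50   # 1% of society in the movement, 50% of them pederasts
late_share, late_rate = 0.50, 0.01     # 50% of society in the movement, 1% of them pederasts

early_count = society * early_share * early_rate
late_count = society * late_share * late_rate

print(early_count, late_count)  # 5000.0 5000.0 -- the same absolute bloc, diluted 50-fold
```

On that model the composition changed by dilution rather than by anyone leaving, which is exactly why neither the "crypto-pederasts biding their time" reading nor the "they will simply self-correct again next time" reading follows automatically.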

Have Republicans given them any indication that they would not simply impose the same sort of policy but in favour of their own aesthetic preferences and power structures, the moment they gain any amount of power? At the end of the day, SV people are still culturally much closer to Democrats than to Republicans, and the proposition to solve their issue with their progressive overlords by inviting in the Republicans must seem about as sensible to them as if you proposed to Republicans to solve their issue with their Democratic-party overlords by inviting in the Iranians, Russians or CCP (which, I thought, is a known position among much of the Dissident Right, though one generally considered edgy and stupid).

The one thing that unites everyone I know, ranging from the most hardcore SJWs to alt-right RETVRN types, is that nobody cares about children at all, especially not more than our respective political convictions, and especially not the abstract children of others. If you come from a genuinely more old-fashioned or natalist bubble, you may underestimate just how insignificant children have become in younger elite circles; on a gut-feeling level it is genuinely difficult for me to imagine how someone would pretend to care about children for any reason other than as a mysterious ancestral ritual that may score points against the outgroup.

(It may not be surprising that birthrates in my mid-30s cohort are very low, and the few people who did reproduce have largely dropped out socially - not, as far as I can tell, to socialise with other people, but to be alone.)

I'm in Nybbler's "ordinary" category, and I've just mostly stopped engaging when like clockwork you make yet another one of those "actually people who purport to believe in HBD are Stormfronters" subthreads. It is tiresome to have to repeat the same arguments (against the backdrop of the affect-loading "actually they are eugenicists which is progressive" "Hitler was vegan" etc chorus that tends to come out of the woodwork) when it seemingly generates no correction or even acknowledgment from you.

I think you raise interesting points, and I agree with much of your perception of the "successor ideology" filling a religion-shaped void and your suggested prescription of reining it in under the same mechanisms that our society evolved to deal with plurality of traditional religion, but at the same time I bristle against your offhand conflation of all atheism and secularism with the successor ideology. This strikes me as wrong in both straightforward (I'm an atheist who is not woke) and implicational (there are major secularist countries that have not so far shown a tendency to evolve towards anything obviously shaped like Abrahamic religions at all, such as China, and in much of continental Europe the religion-shaped thing that grew to fill the void absent American influence was environmentalism, not racial wokeness) ways.

Moreover, it resembles another important instance where I've been under the impression that American right-wingers sabotage their persuasiveness by being unable to let go of old grudges, which compels them to argue that the new enemy is actually just a guise over an old ancestral enemy and if you had listened to them back then when they said the enemy must be eradicated then you wouldn't have this problem now - that instance being the insistence on labelling wokism with various versions of "Marxism" or "Leninism". To the yet-unpersuaded reader, that just winds up sounding like a barely-concealed "the actual problem with BLM is that you aren't letting us have child labour uranium mines and totalitarian company towns", and this sounds like "the actual problem with BLM is that you aren't letting us have Jesus Camp where the creepy pastor beats kids with a switch for having impure thoughts".

What's a good way to get better at writing?

I kind of suck at it, and this is troubling me. I'm a working academic, and sitting on a growing cache of results that I can't get out because (outside of some unpredictable periods where the chemical stars in my brain align just right or something) I tend to stare down the same paragraph for two hours and finally squeeze out, word by painful word, something that sounds like the ramblings of a schizophrenic with aphasia, and then feel so drained that I will viscerally fear opening vim again for a week or two. "Professional issues" is an easy sell as far as evidence that something must be done goes, but even outside that, there are so many things - posts, stories, explanations - that I wish I could write but can't. The circumstance that every so often, this problem briefly just goes away and I can in fact vomit out several pages that do in fact hold up even if I look at them again later, just makes my problem all the more frustrating - it feels like it's not like there is something I just lack (and therefore could obtain, making the problem go away), but rather that the necessary circuitry is there but defective.

I don't know, is it inconceivable that UBI+light wireheading through superstimuli could keep the vast majority of people sufficiently placid to prevent widespread upheaval until the problem solves itself through birthrate collapse? This would have the same effect as a genocide of the poor, but not involve a lot of violence or even generally offense to revealed ethical preference.

Capitalism relies on a social contract in which people have the opportunity to better their situation. The end of employment takes away that opportunity.

I'm not so convinced of this, insofar as my impression is that over the past 1000 years, most societies were sufficiently "capitalist" in the sense that private property and ownership stakes were mostly honoured most of the time, but in the majority of them most people did not have a meaningful opportunity to significantly better their situation.

But why would Putin attack the Baltics? The only situation in which I can imagine it making sense for him is if they escalate their own hostility to the point that he has no choice with the alternative being a path that leads to him losing control internally - say, by them engaging in a boots-on-the-ground intervention to aid Ukraine, or a full blockade of Kaliningrad. Such actions would almost certainly be justified by rhetoric like yours, arguing that they must strike the Russians while they are weak because surely Putin will come for them afterwards otherwise, leading to the usual crybully escalation cycle that should be familiar from the CW setting ("They're dangerous! We must punch them! They punched back? See, I told you how dangerous they were! You were an idiot for arguing against punching them! In fact this situation is your fault, because we should have punched harder!").

Medvedev

The man has gone full shitposter in his political afterlife; quotes from him should be treated like those of the "former British intelligence specialists" that Russian channels like to parade around, claiming every week that UA collapse is imminent.

Will Spaniards and French and Italians send enough troops to the meatgrinder to save some faraway countries?

Well, they did that for America's Middle Eastern meatgrinders. Besides, Ukraine has shown how much the effectiveness of any army is magnified when backed by operational depth and modern C&C (satellites, patrol planes, analysis) that for political reasons can't be touched by their adversary. I imagine the effect would be increased manifold if there were no sanitary barrier of the kind that requires manually preprocessing intel that is passed to Ukraine lest the crown jewels of alliance capabilities leak to an adversary. In a battle of Estonia plus NATO minus non-Estonian NATO meat vs. Russia on Estonian territory I would not bet on the Russians, and I don't think the Russians would either.

Where were those people before? I don't recall seeing that position in posts here, though it's of course possible that they were simply rendered invisible by excessive downvotes.

I have littered a nonzero number of times in my life and dodged fares even more (especially when I was a poor student). The chance that anything bad would have happened to me if I had been caught for it was basically zero, and I assume that those for whom it is not the case (i.e. the dumb and impulsive criminals you are talking about) realise this. This breeds resentment (even monkeys, I think, have been found to be sensitive to differential treatment) and presumably reduces buy-in into society from those who are at the short end of this equation.

Now, I know that people on the law-and-order spectrum like seeing criminality (especially non-white-collar criminality, i.e. the type they can't imagine themselves engaging in) in absolutes, where you are either a law-abiding citizen or a criminal who always and at all times is about as bad to the society surrounding them as they can manage to be; therefore there is no point in negotiating or doing anything other than identifying and locking them up ASAP, and in particular they would see "reduced buy-in into society" as a moot concern since they are already being antisocial criminals who don't buy in. However, I don't think this is true; most criminals probably don't engage in antisocial acts nearly as often as they could, and I'd wager they don't commit murder or even smash random windows in all situations they know they could get away with it. In fact they probably subscribe to 90% of the same society-sustaining narrative as the law-and-order crowd, with only some cutouts they have rationalised for themselves to violate it in specific ways in particular contexts. If you make criminals feel that they can't be equal members of society even on their "down-"/law-abiding time, this might just stop being the case, and life for everyone would make a turn for the worse.

(Arguably the US is already halfway there in places with certain minorities being actively fed the narrative that society is not for them, but I assume that the set of criminals that would be caught by "turnstile enforcement anarcho-tyranny" - because this is what it would read to someone whose self-narrative is "productive member of society who sometimes has to stray off the good path for very valid reasons" - is not just a subset of those minorities.)

You don't need the "real" there - it's all auth-on-auth warfare. Any form of actual liberalism can only flourish briefly as the authoritarian supermajority considers it the lesser evil as opposed to having to fight against other types of authoritarians.