magic9mushroom

If you're going to downvote me, and nobody's already voiced your objection, please reply and tell me

2 followers   follows 0 users  
joined 2022 September 10 11:26:14 UTC
Verified Email

User ID: 1103


I did not say the population would drop 80%. I said food production would drop by 80% (though that's a rough estimate). There's give in a few places - the USA exports food, and that would be redirected; grain-fed animals would be replaced by eating the grain directly; and while Westerners do need more food than Third-Worlders to not die (the body stunts from undernutrition, but that's not retroactive), we don't need quite as much food as we get. There's just not 5x worth of give - and 5x is what an 80% drop demands, since the remaining 20% of production would have to stretch five times as far.

I think you also have a different opinion of what constitutes "a going concern" than FCfromSSC.

There is an important distinction between the current USA and the USA of 1860. Namely, one of these has eleven times the population of the other despite being mostly the same size (yeah, yeah, Alaska, but it's not exactly the breadbasket of the USA). The modern developed world has staggeringly-high, unprecedented population densities, and while some of that is from permanent knowledge gained, a lot more of it is from economic sophistication. A farmer of 1860 can make most of the stuff he needs - not all, but most, and his tools are at least pretty durable and repairable. A farmer of 2025 is using agricultural equipment manufactured in cities from mined minerals, fuelled with petroleum products from oil fields, to spread mined/synthesised fertilisers, pesticides, and F1 hybrid seeds whose progeny aren't viable. Most of those things are produced hundreds of kilometres from his farm if not thousands, many of them are well beyond his capacity to even repair, let alone replace - and they are precisely what make him so much more efficient.

Civil strife means things hundreds of kilometres away are not available to you anymore because there are enemies between you and them, and they can't get their inputs either. What we've built is a gleaming metropolis of elaborate, carefully-built crystal towers, not an indestructible pyramid. Guess what happens when your food production drops by 80% and you were only a moderate food exporter in percentage terms before this, and you also have difficulty importing food. Then consider what people will do in their desperation, and the resulting lasting damage to culture and society.

I am actually eliding a fair bit of stuff here because, um, some Mottizens want bad things to happen instead of good things.

(The extent of Australia's food surplus is such that with the standard abandonment of grain-fed livestock (which is super-inefficient in terms of food calories) we'd still clearly pull through if the music stopped. This is a special and highly-unusual privilege. The USA, despite being the biggest food exporter in the world in absolute terms, does not have that absurd cushion of safety.)

"Chopped" is apparently slang for "rough-looking."

I will say, this is a lot better than there being a (new) epidemic of men being chopped up or having their dicks chopped off. I suppose if you wanted to get creative, a particularly disgusting case would be an epidemic of meat intended for eating being discovered to be, well, "chopped man"!

Is their heart going to be more in it when it's their own homeland they're burning and shelling?

Probably, yes. Civil wars do tend to have a lot of massacres because both sides consider the other to be traitors and not a legitimate state actor to whom the laws of war apply. Remember, "The Blue Tribe has performed some kind of very impressive act of alchemy, and transmuted all of its outgroup hatred to the Red Tribe." That's a lot of hatred, although perhaps still less than that which the Red Tribe has for the Blue.

On reflection - and you're right, I was kinda repeating old arguments without sufficient reflection - I was basically assuming "tyrant has first move, has the armed forces in lockstep, and is willing to wage Vernichtungskrieg", which is the worst-case scenario for the militia. I will note that you are, in fact, still talking about a lot more than small arms here; mortars are far, far more effective than small arms, and are not something the Blue Tribe is currently trying to take away from private citizens (I'm... pretty sure there's nowhere in the USA where random people can walk into a store and buy a mortar? Something something, Federal Firearms Licence? So a militia that has them either has illegal stockpiles, has pulled a fast one on the tyrannous government regarding such licences, or has improvised mortars constructed after the start of hostilities, when the term "legal" becomes meaningless). And even then, I don't think that's enough to win the war. The peace, yes - I'll vaguely allude to that being a fairly-likely win (if an extremely-Pyrrhic one). But not the war, not if the armed forces are united against you for reasons.

Riiiight, so they can be more easily doxed and their families threatened.

That's called terrorism and rebellion, and there are other ways of dealing with it. A state that hasn't at least partially failed doesn't need to hide from terrorists.

I will note that since mechanisation, you kinda need militia to have tanks and MANPADs in order to provide a credible deterrent to tyranny. This isn't a reductio ad absurdum; that's colourable. But that's where the goalposts are.

(I am armed up to the extent of the law in Victoria - i.e. I have a compound bow - but this isn't to FIGHT THE POWER. This is as a moderately-unlikely contingency in case of the police failing to control cannibal looter mobs subsequent to nuclear war. Cannibal looter mobs are much easier to fight off than SWAT.)

"Women can do no wrong" is an extremely uncharitable reading of this transcript.

It's a harsh reading, but a fair one of Tonia Antoniazzi's rhetoric.

Originally passed by an all-male Parliament elected by men alone, this Victorian law is increasingly used against vulnerable women and girls.

New clause 1 will only take women out of the criminal justice system because they are vulnerable and they need our help.

As Members will know, much of the work that I do is driven by the plight of highly vulnerable women and by sex-based rights, which is why I tabled new clause 1.

While my hon. Friend and I share an interest in removing women from the criminal law relating to abortion,

The fact is that new clause 1 would take women out of the criminal justice system, and that is what has to happen and has to change now.

However, all that this new clause seeks to do is take women out of the criminal justice system now, and give them the support and help they need.

You can argue about whether her proposed amendment actually reflects this, but her rhetoric absolutely does.

Do you endorse "accompaniment" killings like Sati?

The voluntary form is something I can appreciate, if not endorse. Reactionary on deep love, and all.

The involuntary form can fuck off. Murder is bad, news at 11.

Everything getting greyer is less to do with gay activists and more to do with society, in general, not loving bright colors everywhere. I blame autism increasing,

This isn't the autistic pattern. My understanding is that we mostly tend toward loving highly-saturated, solid colours (the most notorious example being anime).

Yes, this is a clear distinction between the two problems. Kavka's billionaire does care about what you intend to do, not only (or even at all) what you will do.

But I don't think it creates much light to try to talk about "bad faith" when describing the external behavior of a movement without any reference to the conscious experiences of anybody in the movement, whether sincere or otherwise.

From within the movement, it sure doesn't feel like it. I did say that it only counts by the outgroup-definition of bad faith, and called it a "third option".

From without, as someone who wants to know ideal behaviour for dealing with the group, the game-theoretic incentives are identical: "don't make deals with things that aren't going to honour those deals". For the outgroup, the rest is gravy; this question of "will X honour deals" is 99% of what it wants to know, because it determines whether it should make terms (and avoid a needless civil war) or fight (and avoid exploitation). That answer rests solely on the result, not the process. The rest is interesting anthropological information, but they're your outgroup; you don't matter to them as people, and they don't care about all of the same things you do.

To which I say, you aren't offering any evidence that these compromises are offered in bad faith, you're pretending to read the minds of your outgroup and ascribe the worst possible impulses to them.

I feel there's kind of a false dichotomy/definition debate going on here.

Let's talk about Newcomb's Paradox. There is and stubbornly remains some class of people who think the solution to the problem is to intend to one-box, but then to become a two-boxer after Omega has made its prediction. This solution is fatally flawed because, to misquote Minority Report, "Omega doesn't care what you intend to do. Only what you will do". If one will "become" a two-boxer before the decision is made, then one already is a two-boxer, because the definition of a two-boxer is "one who will pick both boxes", not "one who currently thinks he will pick both boxes". If I am programming Omega, and I want to make Omega as reliable as possible, I should count such people as two-boxers because they will two-box; their false consciousness of being a one-boxer, no matter how sincerely believed, is not actually relevant.
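(A minimal toy sketch of that classification rule, in Python - every name here is a hypothetical illustration of mine, not anything from a canonical statement of the problem. It just shows Omega counting the eventual action rather than the stated intention:)

```python
# Toy model: Omega classifies agents by what they WILL do,
# not by what they currently intend to do.

class SincereWaverer:
    """Sincerely intends to one-box now, but will switch later."""
    intention = "one-box"

    def decide(self):
        # Once Omega's prediction is locked in, taking both boxes
        # looks strictly better, so this agent takes both.
        return "two-box"

def predict(agent):
    """Omega's rule: record the eventual choice, ignore the intention."""
    return agent.decide()

agent = SincereWaverer()
print(agent.intention)   # "one-box" - the sincere false consciousness
print(predict(agent))    # "two-box" - what a reliable Omega should count
```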

(I went looking for the exchange I had with one of these people, but I couldn't find it.)

The shape of the excluded third option should now be pretty clear. There exists a class of people who'll sincerely make a compromise, and then change their minds later. When talking about your ingroup, the natural tendency is to count these people as "good faith", because they believe what they say and you sympathise with them. When talking about your outgroup, the natural tendency is to count these people as "bad faith" because the natural context of analysing your outgroup is wanting to know whether deals will be kept or not.

Hence, under their definitions, "deals have not been kept in the past" is evidence of bad faith, because "your outgroup doesn't care what you intend to do. Only what your movement will do". It's not totally-irrefutable evidence - movements change, and not all deals are created equal - but it's relevant. Moreover, I think modelling social justice as unable to keep its bargains is actually fairly justified, because of two reasons:

  1. Social justice is leaderless. Committees are bad at keeping their bargains absent specific effort, because committees tend to include people who wanted to reject the bargain, and turnover might lead to those people gaining control of the committee at some point (and "you should respect a bargain you never agreed to, because others in your movement did over your objection" is a much-tougher sell than "you should respect a bargain you agreed to"*).

  2. Social justice is not very interested in keeping historical norms. "Dead old white men", and so forth. So that tough sell is even tougher.

I get that it's really awkward to respond to the claim "you can't make a believable compromise, because you will change your mind and/or others in your movement will overrule you". I sympathise. Unfortunately, that doesn't always mean it's false.

*I'm reminded of the exchange at the end of the TNG episode "The Pegasus":

PICARD: In the Treaty of Algeron the Federation specifically agreed not to develop cloaking technology.

PRESSMAN: And that treaty is the biggest mistake we ever made! It's kept us from exploiting a vital area of defence.

PICARD: That treaty has kept us in peace for sixty years, and as a Starfleet officer, you're supposed to uphold it.

It's very, very easy to be a Pressman. There are probably still circumstances where I'd be a Pressman, despite having assimilated Ratsphere cautions against it.

Google "Chopped Man Epidemic" for a vantablackpill.

I did, and 100% of the links are videos. I tried watching one of the less-terrible-looking videos, and it was still terrible; it started with a "preview" reel that was clearly just there to inculcate feelings of "WTF is going on" in order to maximise watchtime.

Could you summarise for people who don't feel like dipping their brains in the brain-hacking engagement-optimisation industry?

Show me an angel, so to speak.

Fun fact: when I was a teenager, I wanted to be a priest. It's just, I'd need a religious experience to tell me what to be a priest of, and I haven't had one.

Why do you believe changing the other person's mind is the point of a public argument, as opposed to shaping the audience's opinion?

I will caution that going there tends to legitimise dishonest debating, flaming, and suchlike. It's a mode I saw advocated by social justice warriors a decade ago (admittedly, they mostly then moved on to "why even allow the debate?"), and it's related to why callout culture became a thing.

Disclosure after slop is barely better than none; before should be required if this is to be allowed at all.

There are benefits, but the harm is "now 100% of the time you are second-guessing whether you're reading an LLM". That's the death knell for serious engagement, because there is no point engaging with an LLM. There are plenty of not-the-Motte places to make this point.

My view is opposing AI art is anti-humanist.

I oppose AI art because AI art (usually) gives money to AI companies (who are trying to end the world) and will at some (unknown) point become a memetic hazard to anyone who sees it. I think this is plenty humanist.

I agree with you about the "oh noes the artists" people, though.

Eh, when talking about specifically "autistic nerds" (i.e. like 1% of the population), there are certain caveats on that. Autists typically have retarded* co-ordination, and the top end of the "nerds" (i.e. aspie savants) sometimes get accelerated. A 13-year-old boy with garbage co-ordination against a 14-year-old girl isn't such an uneven match.

*I use this word precisely; adult co-ordination is usually normal, but it takes longer to get there.

When the OpenAI engineers quit the company because it wouldn't slow down for safety, they didn't shoot the remaining employees, instead they created a competitor to sprint faster with the belief that if they reach AGI first, it'll be better aligned for humanity.

To be clear, I'm in favour of co-ordinated meanness on this one - government action. I've exhaustively considered the possibilities of terrorism, and with the exception of a certain harebrained scheme which requires nuclear weapons (and good luck getting those as a terrorist), the maths doesn't work out: there's no single point of failure, raising awareness of the mere idea is unnecessary*, and that leaves you with "terrorism only makes sense if it can be sustained over a period of time", which the Rats can't manage (and especially can't on a global scale).

I was initially using the metaphor of the USA in a race with other countries; by "shoot them" I meant war. Nuclear war if necessary, but as noted I'm optimistic about the possibility of getting the nuclear powers on board.

Anthropic's actions I model as a combination of lower P(Doom), self-overestimation, greater tolerance for Doom (Silicon Valley tends to attract risk-tolerant types), and most importantly "it's really important to be careful what you get good at".

*Take the climate soup-throwers as an example. They'd be of use if nobody'd heard of global warming. But people have heard of global warming, they (including me) just disagree with the soup-throwers' opinion that it's an X-risk requiring major action RTFN, and throwing soup is not going to convince people of that. Likewise, there have been enough "AI rebellion" films that that kind of terrorism is not really useful (and TBH public opinion is already pretty strongly against AI).

You can't write laws good enough to combat this mindset.

I mean, yes and no. The lawfare against Trump and Musk did eventually fail, you know, and mostly because of the USA's protections against that sort of thing - certainly, it wasn't because Biden and Harris decided to call it off.

I agree that there are a vast number of potential attack vectors, but the task's still not an impossible one. Constitutional rights, and literally having fewer laws, are the most obvious general directions for such efforts.

I think where we're disagreeing is that I think of "powers that can be abused" as a natural category, and you're insisting that different sorts of abusable powers, despite being abusable to the same end, can't be treated as a category.

Crime rate back then was much lower, largely because cops harassed no-gooders in the exact way you consider scary and atrocious.

You are putting words in my mouth. What I consider scary and atrocious is the use of such powers to set up a police state.

I said in my original post that it does depend on definitions and that not all definitions are sufficient to allow this exploit.

Exploits like this are involved in a reasonable amount of slides into one-party states. The Le Pen conviction and the retaliation against Elon Musk for buying Twitter are obvious recent examples (though the latter one failed).

I mean, the preferred solution to "the other guys don't take the risks seriously so they won't stop running" is generally "whip out a pistol and shoot them", although the numbers you've given are on the edges of that solution's range of optimality.

I will note that in reality, the CPC appears fairly cognisant of the risks, probably would enforce stricter controls than "Openly Evil AI" and "lol we're Meta" (Google and Anthropic are less clear), and might be amenable to an agreed slowdown (there are other nations that won't be and will need to be knocked over, but it's much easier to invade a UAE or a Cayman Islands than it is the PRC).

Also, my P(Doom|no slowdown) is like 0.95-0.97, although there will likely be a fair number of warning shots first (i.e. the "no slowdown" condition implies ignoring those warning shots). To align a neural net you need to be able to solve "what does this code do when run": you're checking whether a neural net has properties you want in order to procedurally mess with it, rather than explicitly writing it, so to train "doesn't kill me when run" you need to be able to identify "kills me when run" in some way other than "run it and see whether it kills me". And that's the halting problem (proven unsolvable in the general case, and neural nets don't look to me like enough of a special case).
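(To spell out that reduction as a sketch - every name here is a hypothetical stand-in of mine, assumed only for the sake of contradiction:)

```python
# Sketch: a reliable static detector for "kills me when run" would
# solve the halting problem. All names are hypothetical stand-ins.

def is_lethal(program) -> bool:
    """Hypothetical detector for 'kills me when run', assumed to
    decide the question WITHOUT running the program."""
    raise NotImplementedError("cannot exist in the general case")

def lethal_action():
    """Stand-in for whatever counts as 'kills me when run'."""
    pass

def halts(program, x) -> bool:
    """If is_lethal existed, this would decide the halting problem."""
    def wrapper():
        program(x)       # may halt, may loop forever
        lethal_action()  # reached if and only if program(x) halts
    # wrapper is lethal exactly when program(x) halts, so is_lethal
    # would answer an undecidable question - contradiction.
    return is_lethal(wrapper)
```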

Claude 4 and o3 will take action to avoid being shut down. If you leave aside the literally-unknowable "do machines have qualia" point, they sure seem to be best modelled as capable of agency.

People underestimate how extremely difficult "kill all humans" is as a task.

I'm one of the people saying this. Preppers and other forms of resilience nullify a great many X-risks; another Chicxulub would kill most humans but not humanity (not sure about another Siberian Traps). But there is one specific category of X-risks where that kind of resilience is useless, and that's the "non-human enemy wins a war against us" set (the three risks in this category are the three sorts of possible non-human hostiles - "AI", "aliens" and "God"). Bunkers are no help against those, because if they defeat us they aren't ever going away, and can deliberately break open the bunkers; it might take them a few years to mop up all the preppers (though I imagine God would get everybody in the first pass, and aliens plausibly could), but that doesn't save humanity.