magic9mushroom

If you're going to downvote me, and nobody's already voiced your objection, please reply and tell me

1 follower   follows 0 users   joined 2022 September 10 11:26:14 UTC

User ID: 1103

Verified Email
Which of the following do you think should be covered under gun rights? Single-shot normal rifles, shotguns, assault rifles, SMGs, single-shot pistols, anti-materiel rifles, machine guns, technicals/IFVs, MANPADs, recoilless rifles, rocket launchers, tanks, Davy Crocketts?

(This is not mockery. The argument that's literally in the 2A - militia makes you harder to conquer - applies to all of the above except maybe Davy Crocketts.)

Without context on why NexusMods did this, it's not obvious to me whether the motive for dumping it was what you said vs. "transwomen shouldn't sound like women because they're men" - in particular, while the transfeminist lobby is powerful, Rowling has a lot of money and a colourable case against unauthorised mods of her IP.

That's only one of the cases where it's claimed they've interfered. There are others. Hong Kong. Uyghurs. Taiwan. Tiananmen. Tibet.

Granting arguendo that this one is nonsense and/or not a concern (and you might be right), TikTok as controlled by ByteDance still has to go. Don't overfocus on this detail to the point that you miss the big picture; sort-by-controversial is a thing that happens, but it's not correct in how much it prioritises things.

Also, we could be going to war with the PRC in the next couple of years. Do you really think that ByteDance wouldn't go along if the CPC told them to fill their site and app with malware in order to break the Western Internet? Hint: ByteDance is a mainland Chinese company, and in mainland China if you defy the CPC they chop you up for your organs like that scene out of Repo.

Not good. But I'd have to say necessary evil.

Someone whom the speaker believes to be subhuman - in particular in the moral sense of "if you harm or kill this person, it is less bad than normal".

My parenthetical note was intended to make it clear that I'm not trying to do that foot-in-the-door tactic, but simply trying to get someone else's opinion on the matter. But whatever, you answered which is what I wanted. Is your line at vehicles driven by the actual text of 2A ("bear" arms), or by some argument I don't currently comprehend?

As for the question in your retort: not orders, and I split hairs on instructions/advocacy/smears. Specifically:

  • If someone grows cannabis illegally, we can deal with that after the fact; if someone builds a nuke illegally, it's a bit harder to deal with that after the fact, so I'm more concerned about the details of how to build an implosion nuke than I am about the details of how to grow cannabis (or even the details of how to make dynamite off-label, though I'm not about to spread the latter myself).
  • I generally draw the line at indirect/vague vs. direct/specific incitement.
  • Ignorant misinformation shouldn't be punishable, but I've no issue with banning disinformation with the associated mens rea of deliberate deception. Much less of a chilling effect from the latter.

You don't have to be a psychopath to think genocide is cool. Psychopathy is the state of having no empathy for people in your ingroup. Lack of empathy for people in the outgroup is far more common and can't really be considered an abnormality.

I did not ask for a definition of Chevron deference; I didn't have it memorised, but I looked it up myself. I asked for a "furious, anxious progressive think piece" about Chevron deference and the "damage" that could occur if it's "significantly revised", because @CrispyFriedBarnacles said he'd seen "a number" of such pieces.

Which is a form of sleeping on AI; they see it only as a tool, not as a potential adversary in its own right. Like I said, though, a rogue-AI incident would definitely fix that; a lot of my !doom probability routes through "we get a rogue AI that isn't smart enough to kill us all, then these kinds of people force the Narrative into Jihad".

Israel is one of the US' most steadfast allies outside of the Anglosphere and the #1 ally in the Middle East,

It's also the US' most poisonous ally; the alliance with Israel is the whole reason the US has problems in the Near East. Osama bin Laden explicitly cited that alliance as his motive for the Twin Towers. And, y'know, there have been multiple instances of Israel selling US weapons to the PRC.

I've seen a number of really furious, anxious progressive think pieces about the Chevron doctrine in particular recently, and the potential damage the court could do to the entire unelected federal bureaucracy and its power if that doctrine is significantly revised.

Could you link one or two?

Good point.

As someone concerned about juvenile transition, and who also thinks the AoC is too high, I think I can actually use both the modus ponens and modus tollens versions of this.

(More generally, I think the big issue with the AoC debate - or to be more accurate, non-debate - is the one Scott described in the Eighth Meditation: "older person wants to have sex with younger person" is seen as creepy, and for most people advocating for a lower AoC means getting tarred with that brush, which keeps anyone respectable from doing it and prevents the Overton Window equilibrating. In theory, 13-year-olds could advocate for AoC 13 and get away with it, but only for 3 years before it becomes a millstone should they continue; this makes it really hard to assemble a solid bloc, particularly after taking into account the likely reaction if one attempted to recruit teenagers to advocate for lowering the AoC. Still, the Internet probably helps somewhat with this issue.)

You're forgetting about copyright. Without copyright, software would indeed be competed down to nothing, but with it you cannot run an open operation to zero out the price of software (or other information, such as fictional media or journal articles). You could, in theory, start a competing software company, but there are economies of scale and network effects that mostly prevent that (water has some of these too, but AIUI it's fairly tightly regulated, and in practice even when it's not, there's always the implicit threat of "if you play funny buggers with the water supply, much of the population will immediately drop everything to put your head on a pike, and in a universal-franchise democracy they will get it").

If you want the price of software to crash to zero, therefore, making "people realise" that its equilibrium price is zero won't actually do anything. You need to revoke or at least massively roll back copyright law. Note that this will cause supply to drop significantly unless you do something about that.

I'm not a mind-reader, but with a lot of reasonable parameters I'd be in favour of #2, so I see no particular reason to disbelieve other people saying they'd prefer #2.

Framework #1: people dying is bad.

Global Thermonuclear War would kill a lot of people - indeed, with Russia's arsenal against both the West and China, it'd probably be 1-1.5 billion. AI apocalypse would kill all humans, which is roughly 8 billion. If we want to minimise the expected number of humans killed, then a 100% chance of GTW is still worth it if we predict greater than a 12-20% chance of AI apocalypse in the alternative. To quote EY himself: "shut up and multiply".
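
A minimal sketch of the arithmetic behind that 12-20% threshold, assuming the figures above (1-1.5 billion deaths from GTW, 8 billion from AI apocalypse):

$$\mathbb{E}[\text{deaths}\mid\text{GTW}] \approx 1\text{-}1.5\times10^9, \qquad \mathbb{E}[\text{deaths}\mid\text{no GTW, AI risk }p] = 8\times10^9\cdot p$$

$$\text{GTW is the lesser evil when } 8\times10^9\cdot p > 1\text{-}1.5\times10^9 \iff p > 12.5\%\text{-}18.75\%$$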

Framework #2: existential risk and categorical imperative.

This is not necessarily a one-off. It may be repeated. Consider the results of pursuing either policy consistently in a series of similar situations.

  • If we always do #1, then (without some means of solving AI alignment) the final outcome is that humanity gets destroyed by AI.

  • If we always do #2, then we have a GTW every 50-200 years, but we don't get destroyed by AI.

If you think, as I do, that the existence of humanity and the birth of new humans are very good things, then in the long run it is clearly better to always do #2 than to always do #1. Kant's categorical imperative says, among other things, that we should do the things that we would prefer for everyone to do in our place. So we should do #2.

I mean, obviously I'd be in favour of telling Russia "knock it off" on multiple occasions before escalating to war, but if it comes down to "start GTW or take significant X-risk", I'd push the button.

I certainly agree that letting governments continue neural-net research is a bad idea, but ultimately if you pursue unaligned neural nets you're an enemy of mankind and, if you continue such illegally after it's banned, you'll have to be hunted down and stopped. I'm hoping you change your mind before then.

Is the censor some sort of anti-SEO, or...?

Ah, perhaps my above post would come off as a bit strange without context. The last two times I read felipec's crossposts, he made catastrophic mistakes in understanding the topic, yet the tone was rather smug. I said that the next time he wrote a post and linked it here, if he was smugly wrong again I would stop reading his posts.

So I'm acknowledging that he did better this time. His engagement in the thread is also a bit better.

And there is no (peaceful, legal) mechanism to replace that personnel.

There exist legal means to replace a lot of them that do not inherently involve bloodshed. It's just that making such an attempt probably means they revolt.

I don't think that this was a particularly-well-aimed barb; JTarrou is not someone who frequently Body-Snatchers-screams at people claiming them to be racists. I'm not 100% sure whether his comment is mocking "everyone is racist" or at least semi-seriously saying that "everyone is racist and that's fine", but TTBOMK neither of those are positions in common with Ibram Rogers.

Nixon, Reagan, and Trump couldn't, so I don't know why anyone is thinking anything different will really happen in the future other than a continuation of what's happened since the late 1950's.

When I said I think it's most likely, I'm pricing in nuclear war very heavily. Sudden loss of half the city population means country-based conservative parties will have a lot of power and still be on a culture war footing.

child sexual abuse

I'm going to come out and say this: when debating with opponents of Anglospheric AoC, this term currently obscures considerably more than it illuminates.

The reason is that it is defined in two ways:

  1. legally, as sex with children under the age of 16-18 depending on jurisdiction and context
  2. etymologically, as sex with children that is bad ("abuse")

But the people you are arguing with are claiming that these two things do not coincide! We believe that things satisfying #1 do not necessarily satisfy #2. So the use of this term essentially assumes the falsehood of our claims.

Are you of the opinion that the morality of statutory rape is contingent on monetary compensation to the victims?

I can't speak for others, of course, but I'm of the opinion that the morality of statutory rape is for the most part dependent on whether it's consensual - i.e., whether or not it's "real" rape. The monetary compensation is not super-relevant; I would consider it morally wrong, for instance, to rape a screaming/struggling 14-year-old and then pay him/her $10,000,000, and I would consider it NBD to have consensual sex with a 14-year-old without money changing hands. The only relevance here of the $10,000,000 is that people will consent to many more things for $10,000,000 payment than for $0.

I mean, if I ever call someone an NPC in earnest, that is an intended connotation. I don't think I've ever done so, though I have made the "do not give rights to this thing" argument in the case of misaligned AI.

I'll admit that reading the bit after the dash into other people's use is uncharitable, but given the original meaning is literally "a character in a game that's not controlled by a real human" I don't think "subhuman" is at all a reach.

When you say "the real case against it", are you merely noting an argument that exists, or are you making the argument, i.e. saying in your own voice, "banning AI is bad because AI could be good too"?

(In case of the latter: I know that The Precipice at least considers AI a bigger threat than literally everything else put together, at 1/10 AI doom and 1/6 total doom. I categorise things a bit differently than Ord does, but I'm in agreement on that point, and when looking at the three others that I consider plausibly within an OOM of AI (Life 2.0, irrecoverable dystopia, and unknown unknowns) it jumps out at me that I can't definitively state that having obedient superintelligences available would be on-net helpful with any of them. Life 2.0 would be exceptionally difficult to build without a superintelligence and could plausibly be much harder to defeat than to deploy. Most tangible proposals I've seen for irrecoverable dystopia depend on AI-based propaganda or policing. And unknown unknowns are unknowable.)

I don't think this would work. The parents don't know whether the kid will be in the top 20% before having him/her, so it'll have only a very slightly different effect on high-IQ parents vs. low-IQ parents.

I was reading my mum's university textbooks at 3, so I think you can have a fair idea about the higher end quite early.

Hmm.

Okay, call me a cynic, but this was a null-update for me. SJ is pro-student/youth, is exceedingly passionate/remorseless, likes hounding people from their jobs, and claims everything is due to an "ism" - nothing here is surprising to me, hence no "light". It seemed like this was a case of "Can you believe what Those People did this week?", to quote the thing up the top of the thread.

That said, I suppose the fact that something's a null-update for me doesn't mean it's a null-update for others, and I was definitely being kind of mean by raising it. So, eh.

(ETA: I did find the parent statement Walterodim provided to be a nonzero update.)