@official_techsupport

1 follower   follows 2 users   joined 04 Sep 2022

No bio...

User ID: 122

Verified Email

They got their registration with a reliable provider, and as for being null-routed, it was either a rogue employee or, more likely, Null himself fucked up lol. You gotta remember, it's just one guy flying by the seat of his pants. Even this website has more cumulative competence, I suspect (though idk, maybe Null also has a secret cabal of nerds advising him).

I spent a lot of time observing these sorts of groups a while back and in the end they come across as, I guess, sad, since they never end up creating much of anything useful and for many of their members it seems to eventually become a damaging obsession.

So you're saying that you spent copious amounts of time cataloging instances of events you believed made such things or groups look bad. Endlessly.

It's exactly what someone who wants the non fertile eliminated from the future would try to persuade you of. A sane and sensible person will dismiss such words as munitions fired in a 5th generation memetic war of genocide in all honesty.

Exactly. There's a powerful drive to remove leftist genes (yeah, yeah, I'm extremely oversimplifying it) from the gene pool, and that's a good thing that we all should support bipartisanly.

An argument you can make against mail in voting is that voting is a proxy for a civil war without the associated costs and so requiring people to get off their asses and vote in person is good, while letting anyone with a heartbeat vote is actually bad.

This of course must be understood in the context of trans- and cis-democracy.

We currently solve this problem by having the entirety of billionaire charity amount to something like 1/400th of the US budget. At this scale the unaligned entities are basically limited to picking the low-hanging fruit of things they consider good that were neglected by the US government, not going against the US government in any shape or form.

There are some exceptions, like https://www.latimes.com/local/california/la-me-prosecutor-campaign-20180523-story.html, for which light is the best disinfectant.

There are other entities in that space, which are restricted from causing havoc by 500 years of laws pertaining to corporations. This usually works well; Bernie Madoff was an exception, unlike 99% of lawless-except-by-code crypto entities.

I think that his point is not "you aren't allowed to criticize a billionaire unless you are a billionaire" but "you aren't allowed to criticize people swindled by a scammer when there's some billionaires swindled by them unless you are a billionaire".

Please provide justifications for the request that are at least as strong as the impetus of the request itself.

There's also stuff that goes beyond "interesting". For example, Glenmorangie objectively has an airy taste. It's like the best sub-$50 whiskey that you can buy if you don't like being assaulted with the barrel taste, and it also somehow does better than vodka at being smooth. Maker's Mark is also a pretty good sub-$30 bourbon, to my taste--but even that can be somewhat objective: I'm pretty sure that if we gave a random person a dram of Maker's Mark and a dram of Jack Daniel's, they would recognize the more refined taste of the former.

A lens performs a Fourier transform: it maps directions (of the incoming rays) into locations (on the focal plane), summing everything up.

https://physics.stackexchange.com/questions/596774/why-can-a-lens-be-described-by-a-fourier-transform I guess.
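The direction-to-position mapping can be sketched numerically: the focal-plane (far-field) intensity of a slit aperture is the squared magnitude of the aperture's Fourier transform, the familiar sinc-squared diffraction pattern. A toy pure-Python sketch (the slit width and sample count are arbitrary, and a naive O(N²) DFT stands in for a proper FFT):

```python
import cmath

# Toy sketch: a lens maps incoming ray directions to positions on the
# focal plane, which is mathematically a Fourier transform of the field
# at the aperture. Here: a 1D slit aperture and a naive O(N^2) DFT.
N = 256
aperture = [1.0 if abs(i - N // 2) < 25 else 0.0 for i in range(N)]  # open slit

def dft(xs):
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * m / n)
                for m, x in enumerate(xs))
            for k in range(n)]

intensity = [abs(c) ** 2 for c in dft(aperture)]

# Frequency index k corresponds to a direction, i.e. a spot on the focal
# plane; k = 0 is the optical axis, where a plane wave through a slit is
# brightest (the central diffraction maximum).
print(intensity.index(max(intensity)) == 0)  # prints True
```

The side lobes of `intensity` fall off as sinc², which is exactly the single-slit diffraction pattern you'd see at the focal plane.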

(unless it's rdrama and they call you a cute twink instead?)

What's your account there?

That aside, in real life self-described EAs universally seem to advocate for honesty based on the pretty obvious point that the ability of actors to trust one another is key to getting almost anything done ever, and is what stops society from devolving into a hobbesian war of all-against-all.

There's a problem with that: a moral system that requires you to lie about certain object-level issues also requires you to lie about all related meta-, meta-meta- and so on levels. So for example if you're intending to defraud someone for the greater good, not only should you not tell them that, but if they ask "what if you were in fact intending to defraud me, would you tell me?" you should lie, and if they ask "doesn't your moral theory require you to defraud me in this situation?" you should lie, and if they ask "does your moral theory sometimes require lying, and if so, when exactly?" you should lie.

So when you see people espousing a moral theory that seems to pretty straightforwardly say that it's OK to lie if you're reasonably sure you're not getting caught, when questioned happily confirm that yeah, it's edgy like that, but then seem to realize something and walk that back, without providing any actual principled explanation for that, like Caplan claims Singer did, then the obvious and most reasonable explanation is that they are lying on the meta-level now.

And then there's Yudkowsky who actually understood the implications early enough (at least by the point SI rebranded as MIRI and scrubbed most of the stuff about their goal being creating the AI first) but can't help leaking stuff on the meta-meta-level, talking about this bayesian conspiracy that, like, if you understand things properly you must understand not only what's at stake but also that you shouldn't talk about it. See Roko's Basilisk for a particularly clear cut example of this sort of fibbing.

I don't understand this line of argument.

When you don't understand some argument and want help with that, you should try to explain what in particular you don't understand, which parts you're able to follow and where it loses you, paraphrase confusing parts in your own words to see if you understand it correctly and explain why it sounds unconvincing to you. Just stating that you don't understand without any elaboration sounds like bait.

On a related note, here's a funny tweet from a couple of days ago: https://twitter.com/JGreenblattADL/status/1590722899702591489

Pursuit of truth is important, but so is keeping a lid on data which can be misused.

Can I see a cost-benefit analysis on whether it's worth it to keep that particular data secret? Even a very handwavy one?

Of course I can't, and it's because of a rather fundamental reason: having anything like that in public betrays the very truth it was intended to conceal. If you publicly claim that the public can't see data X because it might lead to the harmful belief in the conclusion Y, the public will assume that the conclusion Y is true based on your claim. So you need to equivocate and obfuscate.

Worse, since such decisions are made by nominally democratic institutions they can't be made even in secret, because if someone leaks the meeting notes it would be a huge scandal. So they aren't made rationally at all.

Consider for example the messaging "masks don't work, you should not buy masks so that there's enough left for doctors" from the early Covid days. Oh, if only there had been a behind-closed-doors meeting between various senior WHO and CDC officials where they decided that they must lie to the public to address the mask shortages, that this particular lie was the best they could do, and that it was worth it even taking into account the long-term consequences for trust in institutions.

I conjecture that such a meeting couldn't have happened, because nobody wanted to destroy their career by calling for it and speaking plainly in case it leaked. And the fact that now, when you can think clearly about it for five minutes, it's obvious that the adopted policy was extremely stupid, proves that there was no such meeting: the policy was a result of bureaucrats acting on pure instinct, wink-wink nudge-nudge, no conscious deliberation at all.

So IMO this is the main problem with "keeping a lid" on things: unless you know exactly what you're doing (such as in not publishing nuclear weapon technologies), object-level lies infect all meta-levels, if you lie about the existence of certain data you have to lie about lying about that, and about whether you would lie in such situations, and so on. Which not only produces much more and much more dangerous lies that you'd initially expect, but also prevents you from thinking rationally about whether it's actually worth it.

People (including myself) have been coming up with the idea of reframing freedom of speech as "freedom to listen" for most (but not all!) purposes since forever. It has a bunch of obvious benefits: it's much easier to defend one's right to read Mein Kampf than Hitler's right to have it read (should he even have it?); it's easy to go on the counteroffensive and ask who exactly, and on what grounds, reserves the right to read something for themselves while deciding that I don't have that right; and it consequently forces into the open the usually unspoken but implied idea that some people are too stupid to be allowed to read dangerous things. I don't actually disagree, but who is deciding, and how do I qualify for unrestricted access?

And freedom to listen flat out contradicts the naive interpretation of freedom of speech as the freedom to call people nwords on the internet. Because obviously freedom to listen is the freedom to choose what to listen to, and someone interfering with it by screaming the nword violates it. When you think about how to implement it technically, you naturally get the idea of moderation as a service that readers subscribe to based on their individual preferences, rather than something that must be applied to writers in a one-size-fits-all fashion.
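A minimal sketch of what reader-side "moderation as a service" could look like (every name and filter here is hypothetical, purely to illustrate the architecture): each reader subscribes to whichever filter services they want, and the same post set renders differently per reader, instead of one writer-side policy for everyone.

```python
# Hypothetical sketch: moderation as per-reader subscriptions.
# A "filter service" is just a predicate; True means "hide this post".

def slur_filter(post: str) -> bool:
    # stand-in for a third-party slur-blocking service
    return "nword" in post.lower()

def spam_filter(post: str) -> bool:
    # stand-in for a third-party spam-blocking service
    return "!!!" in post

class Reader:
    def __init__(self, subscriptions):
        self.subscriptions = subscriptions  # filters this reader opted into

    def visible(self, posts):
        # a post is shown unless some subscribed filter flags it
        return [p for p in posts if not any(f(p) for f in self.subscriptions)]

posts = ["interesting take", "BUY NOW!!!", "nword nword"]
thick_skinned = Reader([spam_filter])
sensitive = Reader([spam_filter, slur_filter])

print(thick_skinned.visible(posts))  # ['interesting take', 'nword nword']
print(sensitive.visible(posts))      # ['interesting take']
```

The point of the design is that writers are never deleted globally; each reader's filter stack decides what that reader sees.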

Not quite what you're asking about, but Stephen King's N. allows for an extremely fitting alternative explanation where the protective circles are actually summoning circles. Imagine that you're Cthulhu and you can send some mortals nightmare visions trying to get them to summon you: obviously you should convince them that what you want them to do prevents you from being summoned.

because torture to extract confessions works so well that even when you think you are trying to extract actionable intelligence the person you are torturing is actually thinking "what does he want me to confess to?"

Yeah, but he knows that if he confesses to the wrong thing, he will be tortured more. So there is a failure mode where he really doesn't know the information that you're interested in and so makes something up, but if you're aware of this failure mode and the subject does in fact have the information you're interested in, you probably can extract it reliably.

Consider for example https://en.wikipedia.org/wiki/Assassination_of_Reinhard_Heydrich#Investigation_and_manhunt. When the Nazis did it, it worked.

The most famous are the Spanish Inquisition and the Soviet GPU/NKVD/KGB. In all these cases, the aim was to extract confessions.

Are you saying that an office dedicated to extracting intelligence tends to transform into one that extracts confessions? I'm not following; what is this supposed to be evidence for?

The nearest thing to a corps of professional torturers focussed on intelligence gathering was French military intelligence during the Algerian war of independence. The torturers destroyed their records so we don't know how well it worked, but we do know that the French lost the war.

As far as I understand from reading Wikipedia, the French military won the war against the Algerians decisively, then lost the war against the French journalists, in a very similar fashion to how the US military utterly destroyed the Viet Cong (https://en.wikipedia.org/wiki/Tet_Offensive), then lost the Vietnam war to the US journalists.

Why do you think that ad hominem is necessarily a fallacy?

Suppose you meet a guy at a party who explains in detail why modern plumbing sucks and how to improve on it vastly. You're intrigued and ask how his elegant mutations and cunning annihilations worked out in practice--only to discover that not only has he never tried them, he has never done any plumbing at all, modern or otherwise. Is it wrong to disregard his special plan for your toilet with extreme prejudice? I don't think so: there are vastly more completely deranged plans than actually good ones, one can't end up with a good one without trying them in the real world, a lot, and it's not worth your time to debunk a theory that was never put into practice.

Similarly, OK, we can accept it when someone says "it sucks" about a situation they are not themselves necessarily in, or in but having never experienced something different. Marx complaining about labor conditions, a single mother complaining about single motherhood, yeah sure. But when they start proposing their fixes that they have no experience living with whatsoever, then it's entirely valid to ad hominem them.

You say this as if it were an argument in support of, you know, those ideological preferences.

Telling someone in a position to inflict pain on you truth they don't want to hear is a bad idea (just like speaking truth to power in any other context), and we all know this viscerally. The only way to make the torture stop is to work out what the torturer wants to hear, and tell them that. So the only truth you can extract under torture is the truth you already know.

You're confusing torture used to extract a confession with torture used to extract military intelligence. It is possible to have those things entangled in reality, like, the tortured person lies about the location of the bomb because he doesn't know the real location and wants the torture to stop. But if you just want the data and don't have preferences regarding its content other than you get it, and you have a relatively short feedback loop, I don't see any reason for why it won't work.

Torturing someone with an aim to learn that Saddam Hussein gave them money is pointless. Torturing someone to betray their contacts or sabotage targets or whatever useful non-loaded intelligence can work.

That article is fascinating. The "Baa Baa White Sheep" section is a several-page-long explanation, with quotes and citations, of how the whole thing was fabricated: how it was some private initiative, how the council supposedly said they supported it when actually they said it was none of their business, how some reporter couldn't find any worker who confirmed the ban, and so on and so forth.

And then it ends with a single sentence: "In 2000, the BBC reported the withdrawal of guidance to nursery schools by Birmingham City Council that "Baa, Baa, Black Sheep" should not be taught."

The way Wikipedia manages to lie its head off while still sticking to reputable sources is fascinating.

Come back to our BotC club lol! We've been experimenting with the rules until we mostly got rid of the annoyances (for example, a limit of 3 whispers per person per day forces most of the interesting information and discussion into the public square, 24h days feel much faster and to the point, we can fast-forward the last day or two when there's nothing much left to discuss, etc), and I guess we also got somewhat better at the game so there are more fun plays.

We are also doing some realtime voice games on weekends now. And there's an offtopic channel where people keep discussing random things. And some people ended up on the rdrama.net devcord, because of course, most notably hbtz, who among other things wrote a blackjack bot that hit the 2^31 dramacoin limit.

So I don't know about making close male friends for life, but there are certainly worse ways to find some online buddies to do fun stuff with.

I think that what you're looking at is https://slatestarcodex.com/2013/03/04/a-thrivesurvive-theory-of-the-political-spectrum/ but the "thrive" side are not chill hippies, but the people who compete with their fellow man rather than with sabertooth tigers etc. So "survivalists" like small rigid hierarchies because they are good for surviving a zombie apocalypse, while "thrivists" like huge social hierarchies where they can backstab their way to the top with utter disregard for external reality.

From that point of view "First it's Protestantism (Conformist) vs Catholicism (Conscientious)" gets it exactly backwards.

Anybody smart enough to build bleeding-edge AI systems is smart enough to understand why, if you try to predict the likelihood of a criminal reoffending, the model will always say that black people are more likely to reoffend (it's because black people are more likely to reoffend).
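The statistical point can be shown with a toy simulation (all numbers here are made up, and the groups are abstract): if two groups have different true base rates, any predictor that tracks the data at all will report higher average risk for the higher-base-rate group. That's a property of the data, not a bug in the model.

```python
import random

random.seed(0)

# Hypothetical base rates for two abstract groups (numbers invented
# purely for illustration).
def simulate_outcomes(base_rate, n=10_000):
    """1 = reoffended, 0 = didn't; drawn i.i.d. at the group's base rate."""
    return [1 if random.random() < base_rate else 0 for _ in range(n)]

group_a = simulate_outcomes(0.30)
group_b = simulate_outcomes(0.50)

# The simplest data-driven "predictor" (no features beyond group
# membership) just reports the observed rate, i.e. it is calibrated:
risk_a = sum(group_a) / len(group_a)
risk_b = sum(group_b) / len(group_b)

print(risk_b > risk_a)  # the model inevitably rates group B as riskier
```

Any real model with more features lands in the same place: as long as the underlying rates differ, a calibrated predictor's group averages must differ too.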

An alternative explanation is that the doublethink required to simultaneously believe in the party line and in the reality needed to do your job doesn't actually work very well, and tends to devolve into believing in the party line only. Imagine that you're a bright young guy working on a Google image classifier. To generate the thought that the classifier might confuse black people for apes, so you must specifically check that it doesn't, you must believe that black people tend to have certain ape-like facial features. That's a very dangerous thing to believe: your woke peers would be very unamused if you just blurted it out, or inexpertly wink-wink nudge-nudged your way to suggesting that you need to check for that. If you have a lot of wrongfact beliefs, you have to watch your every word to avoid committing social suicide. Accidentally releasing a classifier that does in fact mistake black people for apes, on the other hand, is relatively safe: it's not your personal fault, and who could have thought, and it's probably bias in the training data anyway. So in a highly ideologized environment people just naturally fail at their jobs instead of trying to maintain a bag of forbidden beliefs.

Eh, Null is doomposting as usual.

The most disturbing thing about all this to me is how easy it is to prop up a blatant falsehood via "citogenesis". Multiple reputable sources have claimed that KF drove three people to suicide, therefore it's on Wikipedia as established truth. As far as I know, this is false.

As far as I know, the only way to prove that it's false to someone is to ask them who those people were, at which point they find themselves in a very sus tangle of people repeating rumors they heard from multiple people but with no actual sources, and either get enlightened or appeal to the authority of a National Security Analyst for NBC and former Assistant Director of the FBI: twitter.com/FrankFigliuzzi1/status/1566438538765279232 and, yeah, the response can only be that things are really that bad, sorry.