
official_techsupport

who/whom

2 followers   follows 2 users  
joined 2022 September 04 19:44:20 UTC
Verified Email

User ID: 122

No bio...

I don't understand this line of argument.

When you don't understand some argument and want help with that, you should try to explain what in particular you don't understand: which parts you're able to follow and where it loses you. Paraphrase the confusing parts in your own words to check whether you've understood them correctly, and explain why they sound unconvincing to you. Just stating that you don't understand, without any elaboration, sounds like bait.


On a related note, here's a funny tweet from a couple of days ago: https://twitter.com/JGreenblattADL/status/1590722899702591489

I stumbled upon this post https://www.lesswrong.com/posts/cgqh99SHsCv3jJYDS/we-found-an-neuron-in-gpt-2 where the authors explain that they found a particular "neuron" whose activations are highly correlated with the network outputting the article "an" versus "a" (they also found a bunch of other interesting neurons). This got me thinking: people often say that LLMs generate text sequentially, one word at a time, but is that actually true?

I mean, in the literal sense it's definitely true, at each step a GPT looks at the preceding text (up to a certain distance) and produces the next token (a word or a part of a word). But there's a lot of interesting stuff happening in between, and as the "an" issue suggests this literal interpretation might be obscuring something very important.

Suppose I ask a GPT to solve a logical puzzle with three possible answers, "apple", "banana", "cucumber". It seems more or less obvious that by the time the GPT outputs "The answer is an ", it already knows what the answer actually is. It doesn't choose between "a" and "an" randomly and then fit the next word to match the article; it chooses the next word somewhere in its bowels, then outputs the article.

I'm not sure how to make this argument more formal (and to extract more insight from it than "it autocompletes one word at a time" provides). Maybe it could be dressed up in statistics: suppose we actually ask the GPT to choose one of those three plants at random; if we then see that it outputs "a" 2/3rds of the time, that tells us something.
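To make the statistical version of the argument concrete, here's a toy enumeration (not a real LLM, just the two hypotheses spelled out): an "answer-first" model commits to a word and then emits the matching article, while an "article-first" model flips a coin between "a" and "an" and then fits a word. The two hypotheses predict different article frequencies.

```python
from fractions import Fraction

# Toy model of the two hypotheses, not a real LLM. The three possible
# answers from the puzzle; "apple" takes "an", the other two take "a".
ANSWERS = ["apple", "banana", "cucumber"]

def article_for(word):
    """Return the article that matches `word`."""
    return "an" if word[0] in "aeiou" else "a"

def p_a_answer_first():
    """Answer-first hypothesis: pick a word uniformly at random, then emit
    the matching article. Returns P(article == "a")."""
    hits = sum(1 for w in ANSWERS if article_for(w) == "a")
    return Fraction(hits, len(ANSWERS))

def p_a_article_first():
    """Article-first hypothesis: flip a fair coin between "a" and "an",
    then fit a compatible word. Returns P(article == "a")."""
    return Fraction(1, 2)

print(p_a_answer_first())   # 2/3 -- the signature of lookahead
print(p_a_article_first())  # 1/2 -- no lookahead
```

So observing "a" about 2/3rds of the time in actual samples would favor the answer-first (lookahead) hypothesis over the article-first one.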

Or maybe there could be a way to capture a partial state somehow. Like, when we feed the GPT this: "Which of an apple, a banana, and a cucumber is not long?" it already knows the answer somewhere in its bowels, so when we append "Answer without using an article:" or "Answer in Esperanto:" only a subset of the neurons should change activation values. Or maybe it's even possible to discover a set of neurons that activate in a particular pattern when the GPT might want to output "apple" at some point in the future.

Anyway, I hope I've justified my thesis that "it generates text one word at a time" oversimplifies the situation to the point where it might produce wrong intuitions, such as that when a GPT chooses between "a" and "an" it doesn't yet know which word will follow. While it does output words one at a time, it must have a significant lookahead state internally (which, by the way, it regenerates every time it needs to output a single word).

There should be a word for the kind of situation where people who profess their love for intellectual diversity in practice prove incapable of perceiving any viewpoints outside of a narrow range as legitimate.

From what I know of the Count, including private communication, he was pretty much sincere here, at least in the "I contain multitudes" sense. If that triggered someone, that's entirely on them; and especially given the ever-present concerns about our intellectual diversity, the administration of this forum probably shouldn't strive to protect the feelings of the white supremacist-adjacent users in particular.

I totally disagree with the conclusion. First of all, we are literally living at a time when one man's vision is about to revolutionize space travel with a rocket that can lift 100 tons of payload to LEO. Yeah, it's interplanetary for now, but why not interstellar next, maybe from the next man with an itch for it?

And second, why do you need to persuade the whole society to migrate? Most of the old world people didn't migrate to America and it was their loss. The few people who did migrate multiplied and prospered. "Indirect evidence of extrasolar planets will never be enough" -- for whom? So we will have bootlicking statists like the author waiting for the government to give them credible evidence and orders to go, while adventurous types will be populating the galaxy.

Marxbro was a troll by the way. At one point we had a discussion about the Labor Theory of Value, I tried my best to steer it away from theorizing and keep to a concrete example of some guys on an island exchanging fishes for pots etc, and eventually he had enough and basically said that no, he didn't want to explain this or that, he was doing it to get a rise out of people like me. Or at least that's how I remember it, it was, what, five years ago? But yeah, my impression was that he let the mask slip.

Of course, in the words of a Chinese poet: if you pretend to be insane, tear your clothes, and run into the garden, are you actually pretending? Which also applies to single-mindedly "trolling" an internet forum for years.

That aside, in real life self-described EAs universally seem to advocate for honesty based on the pretty obvious point that the ability of actors to trust one another is key to getting almost anything done ever, and is what stops society from devolving into a hobbesian war of all-against-all.

There's a problem with that: a moral system that requires you to lie about certain object-level issues also requires you to lie about all related meta-, meta-meta- and so on levels. So for example, if you're intending to defraud someone for the greater good, not only should you not tell them that, but if they ask "what if you were in fact intending to defraud me, would you tell me?" you should lie, and if they ask "doesn't your moral theory require you to defraud me in this situation?" you should lie, and if they ask "does your moral theory sometimes require lying, and if so, when exactly?" you should lie.

So when you see people espousing a moral theory that seems to pretty straightforwardly say it's OK to lie if you're reasonably sure you won't get caught, who when questioned happily confirm that yeah, it's edgy like that, but then seem to realize something and walk it back without providing any principled explanation (as Caplan claims Singer did), then the obvious and most reasonable explanation is that they are now lying on the meta-level.

And then there's Yudkowsky who actually understood the implications early enough (at least by the point SI rebranded as MIRI and scrubbed most of the stuff about their goal being creating the AI first) but can't help leaking stuff on the meta-meta-level, talking about this bayesian conspiracy that, like, if you understand things properly you must understand not only what's at stake but also that you shouldn't talk about it. See Roko's Basilisk for a particularly clear cut example of this sort of fibbing.

I think that what you're looking at is https://slatestarcodex.com/2013/03/04/a-thrivesurvive-theory-of-the-political-spectrum/ but the "thrive" side are not chill hippies, but the people who compete with their fellow man rather than with sabertooth tigers etc. So "survivalists" like small rigid hierarchies because they are good for surviving a zombie apocalypse, while "thrivists" like huge social hierarchies where they can backstab their way to the top with utter disregard for external reality.

From that point of view "First it's Protestantism (Conformist) vs Catholicism (Conscientious)" gets it exactly backwards.

One thing that the Ukraine war has demonstrated is that Russian Bots are a paper tiger, probably.

But I think there are lessons for the “anti-woke” too. That is, relative age effects are a proof-of-concept for significant arbitrary privilege being a real thing. A fair amount of anti-woke arguments claim that gender and racial disparities may disappear entirely when controlling for confounding variables (e.g. the gender wage gap or the racial policing gap).

My objections have always been on the meta-level: I don't doubt that there are some structural isms, but can we have an honest discussion about how much of the inequality of outcomes is due to them and how best to address that? We can't, and that's bad, because I'm pretty sure that in several important aspects the pendulum swung too far a long time ago, and this hurts the supposed beneficiaries of anti-ism discriminatory policies as well, in unexpected ways even. For example, https://en.wikipedia.org/wiki/Fannie_Mae#1990s left a lot of black and poor people homeless and with a destroyed credit rating, https://en.wikipedia.org/wiki/Student_loans_in_the_United_States#Race_and_gender put the majority of educated blacks in the US into a sort of indefinite indentured servitude, and having a 60:40 college+ educated female to male ratio makes dating not very fun for those women. This is what you get if you shut down open discussion because you think that the only problem is evil ciswhitemales and nothing could possibly go wrong if you shut them off and follow the road paved with good intentions.

We have a Blood on the Clocktower themotte/rdrama Discord group. It's a social deduction game similar to https://en.wikipedia.org/wiki/Mafia_(party_game), but with a much bigger emphasis on mechanics and logical deduction: there are no "simple peacefuls", everyone has some interesting ability, which also makes it more suitable for online play since you have a lot to discuss right from the start.

Though if you saw previous announcements there's a change: after 9 asynchronous text-based games we decided to try playing it live over voice chat and got seriously hooked on this format, so that's what we are doing now, and in particular are intending to do tomorrow, Saturday 10th September, at 19:00 UTC (http://time.unitarium.com/utc/1900), expecting to play 2 games lasting for about an hour and a half total.

https://rebrand.ly/StorytellerIntro - one page rules explanation.

http://bit.ly/TroubleBrewingScript - one page character reference for the Trouble Brewing script (the game has different sets of possible characters called scripts).

http://bit.ly/TBalmanac - detailed list of Trouble Brewing character abilities with corner cases and interactions, not necessary to read but helps to understand how it all works.

PM me for a discord invite and with any questions you have.

Scrapes and cuts, especially scabs, itch too.

It's not comparable at all. A cut doesn't begin to itch until several days later, and it doesn't itch with anywhere near an intensity proportional to the affected area. Needle pricks don't itch at all, and they are tens or hundreds of times larger by area than mosquito bites. So no, it's evidently a reaction to the anesthetic stuff they inject.

Why do mosquito bites itch?

Is it entirely accidental, as in, evolution only cared about whatever stuff mosquitoes inject acting as an effective anesthetic for the duration of the bite, not about what happens next? Or maybe it's beneficial for humans (makes us much more alert and aggressive towards further mosquitoes), or maybe even for individual mosquitoes, due to intra-species competition?

I'm probably a lot more willing to entertain HBD or even JQ stuff simply because asking a good faith question about either topic (and others like them) gets you shouted down, ostracized, blacklisted etc.

It's not even some psychological bias, it's a legitimate heuristic. A position can be defended with facts/logic/reason or with appeals to authority, social pressure, and threats. A position that is true can be defended with both; a position that is false is much more easily defended with the latter. If some position is defended pretty much exclusively with the latter, that's good evidence that it is false.
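The heuristic above can be dressed up as a Bayes-rule update. All the numbers below are illustrative assumptions, not measurements: the only structural claim is that "defended exclusively by authority/pressure" is rarer for true positions than for false ones, so observing it lowers the posterior.

```python
from fractions import Fraction

# Bayes-rule sketch of the heuristic. All numbers are illustrative
# assumptions, not measurements.
prior_true = Fraction(1, 2)  # assumed prior that the position is true

# Assumed likelihoods of observing "defended ONLY by authority/pressure":
# rare for true positions (they can also marshal facts), common for false.
p_obs_given_true = Fraction(1, 10)
p_obs_given_false = Fraction(6, 10)

def posterior_true(prior, p_obs_t, p_obs_f):
    """P(position is true | only authority-based defenses observed),
    by Bayes' rule."""
    num = p_obs_t * prior
    return num / (num + p_obs_f * (1 - prior))

post = posterior_true(prior_true, p_obs_given_true, p_obs_given_false)
print(post)  # 1/7 -- the observation substantially lowers credence
```

The exact posterior depends entirely on the assumed likelihoods; the direction of the update is what the heuristic asserts.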

This is what a high-trust society feels like.

The most interesting case I personally experienced was when I booked a small hotel 1 km from the center of Tallinn. I was arriving after midnight, so I asked them if that was OK, and they said they would leave the front door unlocked and my key on the reception desk. Which they did. And, like, there was at least a computer at the reception and who knows what else to steal, but apparently it was a good neighborhood. Needless to say, there were no checks whatsoever regarding the breakfast.

That's definitely a position.

There's a problem with basically lying about how "a rising tide lifts all boats" instead of admitting that you hold this position and, at the least, honestly telling the people who are getting fucked that they are getting fucked, not to mention actual redistributive efforts in their favor.

There was a post of Scott's that I was never able to find again, maybe one of the Links posts, where he was genuinely surprised that the majority of economists in some poll admitted that removing import tariffs hurts local workers. Because when you don't ask them directly, they are very good at making it seem that the fact that their models only look at GDP and such is OK because everything else is unimportant.

because torture to extract confessions works so well that even when you think you are trying to extract actionable intelligence the person you are torturing is actually thinking "what does he want me to confess to?"

Yeah, but he knows that if he confesses to the wrong thing, he will be tortured more. So there is a failure mode where he really doesn't know the information that you're interested in and so makes something up, but if you're aware of this failure mode and the subject does in fact have the information you're interested in, you probably can extract it reliably.

Consider for example https://en.wikipedia.org/wiki/Assassination_of_Reinhard_Heydrich#Investigation_and_manhunt. When the Nazis did it, it worked.

The most famous are the Spanish Inquisition and the Soviet GPU/NKVD/KGB. In all these cases, the aim was to extract confessions.

Are you saying that an office dedicated to extracting intelligence tends to drift toward extracting confessions? I'm not following; what is this supposed to be evidence for?

The nearest thing to a corps of professional torturers focussed on intelligence gathering was French military intelligence during the Algerian war of independence. The torturers destroyed their records so we don't know how well it worked, but we do know that the French lost the war.

As far as I understand from reading Wikipedia, the French military won the war against the Algerians decisively, then lost the war against the French journalists, in a very similar fashion to how the US military utterly destroyed the Viet Cong (https://en.wikipedia.org/wiki/Tet_Offensive), then lost the Vietnam war to the US journalists.

Yeah, it's like: for a day or two your new account seems to work (besides half of the subreddits flat out shadowbanning posts from new accounts with low karma, making it a bit of a Catch-22), then they shadowban it for real, then you get logged out and can't log in because of a wrong password.

And because rich people are there it is clean and safe.

I doubt it. You don't have to be rich to Uber/Bolt everywhere. In fact by the time you can actively shape what the public is allowed/encouraged to vote for you can have a private driver.

To be honest, I don't know why American cities appear so dysfunctional while other places do just fine, when I don't see how the decision-making is remotely democratic. Or maybe it is democratic, but ordinary urban Americans are way more brainwashed somehow. I don't know. I do know that where I live we have very nice and cheap public transportation that is used exclusively by people who can't afford cars, and it's nice because it has these social ads playing, saying that if there's some smelly hobo (the ad literally shows green noxious fumes!) you should immediately call the police and they will remove them. Which they do, and if any politician tried to run on a platform of not infringing hobo rights, they would be laughed at by everyone.

The real question we are interested in is: "we can have an intervention that would make this black man a productive member of society, which you don't even have to pay for, or you can pay $30k/year for decades until he grows too old to do crime".

You gotta admit though, it's a fun contrast between how you diagnosed someone who doesn't pre-match their socks with depression, autism, and a laundry list of other possible disorders, but then admit that you don't understand how someone can have all their underwear in the same color instead of matching it with their visible clothing.

Discord unleashed GPT3 (probably) as a bot on its users. We have been taunting it in our comfy Blood on the Clocktower server. The funniest thing we discovered (credit goes to @Snakes) is that it refuses to give any advice on producing paperclips.

I asked Bing AI to help me make a Blood on the Clocktower character, here's the result: https://i.imgur.com/ZXqkSAP.png

It's an actually interesting character; I discussed it with my pals and they thought it was, if anything, quite overpowered.

Also, it was a flash in the pan: it took me a while to convince the AI to help me (it kept insisting that it was not a game designer, for some reason), then I got this, then I got about a dozen nonsense/boring suggestions.

On a related note, come play with us in our Blood on the Clocktower discord! https://discord.gg/wJR87pjK

It's a variation on Mafia/Werewolf but with several important distinctions that make it superior, and especially superior for internet games, and even more superior for text games with 24h/game day (but we also play voice games sometimes btw!).

First of all, everyone gets a character with an ability. Abilities are designed to be interesting and include stuff like "if you die in the night, choose a player, you learn their character". Second, dead players' characters are not announced, they can still talk with the living, and retain one last ghost vote, so if you get killed you're still fully in the game and maybe even more trusted. So you get games where everyone is engaged from the very start--because you want to privately claim your character, maybe as one of three possibilities, to some people--to the very end when you cast your ghost vote for who you think is the demon.

Lately we had some rdrama people join (including Carp himself!) so it would be nice to balance their deviousness and social reads with having more themotte folks. We were historically very balanced: https://i.imgur.com/gcotalV.png

My favorite voice game (not our group, but we have had similar shit going down): https://youtube.com/watch?v=r9BNc-nDxww?list=FLRMq6rziC28by3Xtvl8VcEg&t=246

The linked essay makes a convincing argument for what it is.

People (including myself) have been coming up with the idea of reframing freedom of speech as "freedom to listen" for most (but not all!) purposes since forever. It has a bunch of obvious benefits: it's much easier to defend one's right to read Mein Kampf than Hitler's right to have it read (should he even have it?); it's easy to go on the counteroffensive and ask who exactly, and on what grounds, reserves the right to read something for themselves while deciding that I don't have that right; and consequently it forces into the open the usually unspoken but implied idea that some people are too stupid to be allowed to read dangerous things. I don't actually disagree, but who is deciding, and how do I qualify for unrestricted access?

And freedom to listen flat out contradicts the naive interpretation of freedom of speech as the freedom to call people nwords on the internet. Because obviously freedom to listen is the freedom to choose what to listen to, and someone interfering with that by screaming the nword violates it. When you think about how to implement it technically, you naturally arrive at the idea of moderation as a service that readers subscribe to based on their individual preferences, rather than something applied to writers in a one-size-fits-all fashion.
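To make the "moderation as a service" idea concrete, here's a minimal sketch under stated assumptions: a moderation service is just a predicate over posts, each reader subscribes to whichever services they like, and filtering happens entirely on the reader's side. All names and policies here are hypothetical illustrations, not any real platform's API.

```python
# Minimal sketch of reader-side "moderation as a service". All names and
# policies are hypothetical illustrations, not a real platform's API.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str

# A moderation service is just a predicate: True means "hide this post".
ModerationService = Callable[[Post], bool]

def word_filter(*words: str) -> ModerationService:
    """Example service: hide posts containing words the reader opted out of."""
    return lambda post: any(w in post.text.lower() for w in words)

def blocklist(*authors: str) -> ModerationService:
    """Example service: hide posts from specific authors."""
    return lambda post: post.author in authors

def visible_feed(posts: List[Post], subs: List[ModerationService]) -> List[Post]:
    """Each reader sees the feed filtered only by the services THEY
    subscribe to; nothing is removed for anyone else."""
    return [p for p in posts if not any(svc(p) for svc in subs)]

posts = [Post("alice", "an interesting argument"),
         Post("bob", "screaming the nword"),
         Post("carol", "another take")]

# Reader 1 subscribes to two services; Reader 2 subscribes to none.
reader1 = visible_feed(posts, [word_filter("nword"), blocklist("carol")])
reader2 = visible_feed(posts, [])
print([p.author for p in reader1])  # ['alice']
print([p.author for p in reader2])  # ['alice', 'bob', 'carol']
```

The design point is that the filtering predicates compose per reader, so one person's maximal filtering never shrinks what anyone else can choose to listen to.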