FeepingCreature

0 followers   follows 0 users   joined 2022 September 05 00:42:25 UTC

User ID: 311

Verified Email

Compromise: Move MLK day to October and put the election on it. I'm sure the Reverend would be fine with it. Republicans are happy because it doesn't create a new holiday and also it reduces the stature given to a black guy, Democrats are happy because black people and minorities get time off to vote and also it ties MLK even more tightly into the civic mythos, plus they can put pictures of him up in the voting room.

Sure. But every time an exploit comes out that chains together like seven distinct vulnerabilities, people ask "how was this possible? They seem to pull out a new security hole at every single layer of security." And the answer is normalization of deviance, i.e. "that's bad, but we still have more layers of defense".

Eh, I'm not sure that's it. I think it's less that we refuse to contradict them, and more that they've properly come to be in charge, so we don't attempt to hinder them. You may gripe about your orders, but you still carry them out.

Same here as the other commenter: Ronald Reagan, Robert Fico, Roosevelt, Gerald Ford, the Pope, Bob Marley, Truman, Seward, Reagan, President Reagan.

Do you still get Trump if you try it now?

Huh. I also cannot get any Google autocomplete for "trump shot", "trump assassina...", "trump secret s...", "trump inju..."... Google clearly knows of these topics, but they somehow haven't made their way into their search history model.

This is at least very fishy.

I think netstack literally means "not being the actual person Donald Trump".

I disagree with this - the entire reason LessWrong got as big as it got was that Eliezer very much "brought the fire" in the name of advocating for his vision of correct thought. I don't think you can read, say, the Zombies sequence and argue it's cold and passionless.

"What does the god-damned collapse postulate have to do for physicists to reject it? Kill a god-damned puppy?"

I upvoted the parent because I think it's entirely in keeping with that rhetorical lineage.

I mean, if genes/IQ is real, it's probably small but compounding: a factor on a thousand stacked decisions, like a random walk biased upward or downward, second- or even third-order. In that case, most causes of bad things happening in someone's life would seem to be largely unrelated to IQ, since every step has a better causative explanation than IQ, but IQ would still be the determinant of where the chain ended up. (Admittedly, that's very hard to falsify.)
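Not part of the original comment, but the "small but compounding" picture can be sketched as a toy simulation. All the numbers here (bias, step count, noise scale) are illustrative assumptions, not claims about actual effect sizes:

```python
import random

def life_outcome(bias, steps=1000, noise=1.0, seed=None):
    """Sum of many stacked decisions: each step is dominated by
    noise, with a tiny persistent bias added on top."""
    rng = random.Random(seed)
    position = 0.0
    for _ in range(steps):
        # At every single step, the noise term dwarfs the bias,
        # so each individual outcome has a "better" local explanation...
        position += bias + rng.gauss(0.0, noise)
    return position

# ...but over 1000 steps the drift is bias * steps, while the noise
# only accumulates on the order of sqrt(steps), so the bias still
# determines where the chain ends up.
low  = life_outcome(bias=-0.05, steps=1000, seed=1)
high = life_outcome(bias=+0.05, steps=1000, seed=1)
print(high - low)  # ~100.0: drift difference of 0.1 per step over 1000 steps
```

With a shared seed the noise draws cancel, so the gap between the two walks is purely the accumulated bias, even though no single step looks like it was "caused" by it.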

I mean, I believe in moral intuition and I suspect in this case most people would have a strong moral impulse to do just this, even though they'd discard it as impractical. I think it's hard to retreat to moralistic intuition and five minutes later say "but this moral impulse you must squash."

Where it gets complicated for me is, do you have an obligation to save a bee that gets stuck in a spiderweb? There's no reason to assume the bee is more worthy of survival than the spider. But here my moral opinions strongly strike out in favor of "kill the spider, save the bee". But in that case I know that other people have the opposite response.

I read it as more like, sophistry may be employed against inconsequential or subjective matters like religion freely, as there's no harm to it; but if you try to argue with reality, reality is gonna win.

The question to me hinges on this: did the people who say that AGI seems fundamentally impossible consider the sub-AGI systems we possess today to be possible? Right now I can go on Twitter and pick up a two-page detailed instruction booklet written in plain English that, if I feed it into a commercially available chatbot, will empower that chatbot to form a remarkably accurate answer as to where a photo was taken, through a deductive process that at least reads surprisingly similar to human research, even though the originators of the chatbot never considered this possibility and did not build it for this purpose. In the course of doing so, the chatbot will autonomously search the internet, weigh evidence, and perform optical comparisons of photos evincing high-level understanding of visual features. Would anybody who currently says that AGI is sci-fi have admitted this technology could exist? Or would they have said it was, as it were, "at least 100 years off"?

Sure, we don't understand how the models do it so it's easy to say "I thought we didn't have a research path to that skill, and actually we still don't." But empirically, it seems to me that enough skills have been "flaking off general intelligence" - turned out to not be "general intelligence" bound after all - that to me the whole concept of general intelligence is now in doubt, and it seems at least plausible that more and more "AGI-complete skills" will continue to flake off and become practically solved until there's nothing left to the concept. Certainly at least the confident claim that this won't happen is looking very shaky on its feet right now.

Out of interest, do you think that a mars base is sci-fi? It's been discussed in science fiction for a long time.

I think any predictions about the future that assume new technology are "science fiction" pretty much by definition of the genre, and will resemble it for the same reason: it's the same occupation. Sci-fi that isn't just space opera, i.e. "fantasy in space", is inherently just prognostication with plot. Note stuff like Star Trek predicting mobile phones, or Snow Crash predicting Google Earth: "if you could do it, you would; we just can't yet."

"At this rate of growth, the entire lake will be this algae in a few days". "Ludicrous silliness!"

The point is we don't have a clue where the sigmoid will level off, and there doesn't seem to be a strong reason to think it'll level off at the human norm, considering how different AI as a technology is from brains. To be clear, I can see reasons why it might level off below the human norm: lithography is a very different technology from brains, and it sure does look like the easily Moore-reachable performance for a desktop or even datacenter deployment will sigmoid out well below human brain scale. But note how that explanation has nothing to do with human brains as a reference point. And if things go a bit differently - if Moore keeps grinding for a few more turns, or we find some way to sidestep the limits of lithography, like a much cheaper fabrication process leading to very different kinds of deployment, or OpenAI goes all in on a dedicated megatraining run with a new continuous-learning approach that happens to work on the first, second, or third try (their deployed capacity is already around a human brain) - then there's nothing stopping it from capping out well above human level.

There was never a need for a "casus belli"; Trump can just do things. He can just ignore the Supreme Court. What's supposed to keep the country working is not "presidents don't assume dictatorial powers" but "the other organs stop him". This will simply be an opportunity to discover whether that works.

That's a good metric!

I think the real answer here is "do not under any circumstances allow a party to win all three organs of government."

Rationalist here. Timeless decision theory was never explicitly designed for humans to use; it was always about "if we want to have AIs work properly, we'll need to somehow make them understand how to make decisions - which means we need to understand what's the mathematically correct way to make decisions. Hm, all the existing theories have rather glaring flaws and counterexamples that nobody seems to talk about."

That's why all the associated research stuff is about things like tiling, where AIs create successor AIs.

Of course, nowadays we teach AIs how to make decisions by plain reinforcement learning and prosaic reasoning, so this has all become rather pointless.

As I understand it, there may be genuine self-induced brain damage in play, so probably no.

Yeah, I gave him a lot of credit, but the evidence that he got thoroughly brainfried online is rapidly mounting.

Personally speaking I don't think about it because I believe AI kills us first.

It's not a tell. For instance, it's so common in, say, German, that even German Wikipedia does it. "The Ukraine is a state..."

Personally, I mostly read forum quests and fanfic on my phone. I don't think there's anything special about books in particular.

Was it Scott Alexander who back in the day wrote an essay about how liberal values are optimized for times of peace and abundance and conservative values are optimized for a zombie apocalypse scenario?

Yes.

I just had this comment in the "The Motte needs your help" report queue. Obviously it's in the wrong spot, but also I can't flag it as "this needs a moderator to move it maybe" because the report queue doesn't show context, and on its own this is a perfectly normal comment. Bit of a weakness, idk.

I mean, that kind of sounds like you're saying it's provably not a 1:1 simulation of a human brain.

What you're describing is measurable evidence of new physics. Every physicist in the world would want to buy you a beer.