magic9mushroom

If you're going to downvote me, and nobody's already voiced your objection, please reply and tell me

1 follower   follows 0 users  
joined 2022 September 10 11:26:14 UTC
Verified Email

User ID: 1103

The main risk I was thinking about (besides "someone more reckless develops ASI first") was the collapse of current civilization reducing humanity's population and industrial/technological capabilities to the point that it is more vulnerable to additional shocks. Those additional shocks - whether coming in quick succession after the original disaster, or over a long period against a population that has failed to regain current capabilities (perhaps because we have already used the low-hanging fruit of resources like fossil fuels) - could then reduce humanity to the point that it is vulnerable to extinction.

There's one way I could maybe see us having problems recreating some facet of modern tech. That is, indeed, a nuclear war, with the resulting radiation causing the most advanced computers to crash often (since modern RAM and registers hold such tiny charges that a single decay can flip a bit). Even then, though, there are ways and means of getting around that; they're just expensive.
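(For concreteness, here's a minimal sketch of one such workaround - triple modular redundancy, the classic rad-hardening trick of storing three copies of each value and majority-voting on read. Illustrative Python, not any particular system's implementation.)

```python
def tmr_write(value):
    """Store three independent copies of the value."""
    return [value, value, value]

def tmr_read(copies):
    """Bitwise majority vote: a bit is set iff at least two copies agree."""
    a, b, c = copies
    return (a & b) | (a & c) | (b & c)

# Example: a single decay flips one bit in one copy...
stored = tmr_write(0b1011)
stored[1] ^= 0b0100
# ...and the majority vote still recovers the original value.
assert tmr_read(stored) == 0b1011
```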

Ord indeed takes an axe to the general version of this argument. Main points:

  1. In many cases, resources are actually more accessible after a collapse (e.g. open-cut mines, which will still be there even if you ignore them for 50 years, or a ruined city made substantially out of metal, which is a much easier source of metal than mankind has had since native copper was exhausted back in the Stone Age).

  2. Redeveloping technology is much easier than developing it for the first time, since you don't need the 1.0, least-efficient version of the tech to be useful (e.g. the Newcomen atmospheric engine is hilariously inferior to what we could make with even similar-precision equipment).

There are a whole pile of doomsday preppers who keep this sort of information in hardcopy in bunkers; we're not going to lose it. And, well, 1700s humanity (knocking us back further than that even temporarily would be extremely hard, because pre-industrial equipment is buildable by artisans) is still near-immune to natural X-risks. I'm less convinced that 1700s humanity would survive another Chicxulub than I am of modern humanity doing so, but that is the sort of thing it would take, and shocks that large are nailed down with low uncertainty at a rate of about one per 100,000,000 years.

If you really want to create a scenario where being knocked back a bit is a problem, I think the most plausible is something along the lines of "we release some horrible X-risk thing, then we go Mad Max, and that stops us from counteracting the X-risk thing". Global warming is not going to do that - sea levels will keep rising, of course, and the areas in which crops can be grown will change a little bit more, but none of that is too fast for civilisations to survive. (It's not like you're talking about 1692 Port Royal sinking into the ocean in a few minutes; you're talking about decades.) Most of the anthropogenic risks are pretty fast, so they're ruled out; we're dead or we're not. Life 2.0 is about the only one where I'd say "yeah, that's plausible"; that can have a long lead time.

Humanity itself isn't stable: it is currently slowly losing intelligence and health, both to outright dysgenic selection from our current society and to lower infant mortality reducing purifying selection, so the humans confronting future threats may well be less capable than we are.

Dysgenics is real but not very fast, and it's only plausibly been operating for what, a century, and in only about half the world? This isn't going to be the end of the world. Flynn effect would be wiped out in apocalypse scenarios, of course, but we haven't eroded the baseline that much.

And to zoom out and talk about X-risk in fully-general terms, I'll say this: there are ways to mitigate it that don't involve opening the Pandora's Box of neural-net AGI. Off-world colonies don't need AI, and self-sustaining ones take an absolute sledgehammer to every X-risk except AI and dystopia (and aliens and God, but they're hardly immediate concerns). Dumb incentives for bio research can be fixed (and physics research, if and when we get to that). Dysgenics yields to PGT-P and sperm donors (although eugenics has some issues of its own). Hell, even GOFAI research or uploads aren't likely to take much over a century, and would be a hell of a lot safer than playing with neural nets (safer is not the same thing as safe, but fine, I agree, keeping AI suppressed on extremely-long timescales has issues). "We must do something" does not imply "we must do this".

Likewise I don't see what makes uploads inherently safe but doesn't hold for NNs.

No, really, what do you have against neural networks?

The view I'm coming at this from is: humans have a moral skeleton, innate hardwiring that allows us to learn morality and believe it (as opposed to mimic it). This is highly instrumentally non-convergent and probably needs to be coded into an AI directly; gradient descent on output will only produce lying psychopaths mimicking morality.

GOFAI has some hope because we could code morality directly. Uploads have some hope because you're uploading the hardwiring (whether or not you understand it). As I said, this does not equal safe, in either case; as you say, GOFAI has a lot of potential pitfalls, and uploaded people would be so far out of evolutionary environment that their remaining sane is far from assured.

But I'm not seeing any hope of success on non-uploads without the ability to look inside the box. This is because "is moral" and "is pretending to be moral successfully" have identical output except in situations where dropping the pretence is worth it - i.e. situations where there's a high chance of you losing control upon betrayal. Interpretability might pull a rabbit out of the hat (I put it at about 3%, which is better odds than Yudkowsky gives), but I'm not very confident; to me, P ?= NP notwithstanding, it seems like the difficulty of determining whether spaghetti code does X is generally at least as high as the difficulty of writing code that does X, which implies that making safe NNs is at least as hard as writing GOFAI.
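(A toy illustration of that identical-output problem - the code and the "overseer_can_intervene" flag are invented for the example, not drawn from any real alignment benchmark.)

```python
def honest_policy(situation):
    return "cooperate"

def deceptive_policy(situation):
    # Drops the pretence only when betrayal won't cost it control.
    if situation.get("overseer_can_intervene", True):
        return "cooperate"
    return "defect"

# Every test we can safely run has an overseer able to intervene, so the
# two policies produce identical transcripts across the whole test suite.
tests = [{"overseer_can_intervene": True, "task": t} for t in range(1000)]
assert all(honest_policy(s) == deceptive_policy(s) for s in tests)
# Telling them apart requires looking inside the box, not at the outputs.
```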

Not objecting to your original modpost, but I think you're incorrectly excluding the "is clueless" category here. This strikes me as a thing where lots of people are clueless, and more than that as a thing where there's a fixed supply of clues.

Where have you encountered it outside of Ms. Harrington's work?

I don't know what "māyā" means, but there's that infamous exchange from The Voyage of the Dawn Treader:

“But that would be putting the clock back,” gasped the Governor. “Have you no idea of progress, of development?”

“I have seen them both in an egg,” said Caspian. “We call it Going Bad in Narnia.”

It is difficult to see how these are not moral improvements. Indeed, even the more modern rights revolutions fighting various quarter-, eighth- and sixteenth-slaveries have been mostly on target.

If you cannot understand the moral calculus of your forebears, it's a sin of pride to pronounce that calculus wrong. To say that your forebears are wrong and have that be more than a farce, you need to understand why they thought what they thought and be able to point to a mistake (of fact or of reasoning). Else, you have no way of really knowing whether you're simply a fool who denies the existence of that which is beyond his ken. Mere replacement in the public consciousness is no substitute; that proves memetic fitness, not correctness.

I'm dubious, for instance, that you actually understand the moral questions posed by slavery. Can you name the two developments which most changed the moral calculus of forced labour between 1400 and the present day?

I tried to phrase the question in such a way as to imply "changed circumstances" rather than "changed understanding". It seems I failed, so oops on that.

"X is meant to appease tree gods, but tree gods aren't real, so X isn't valid" is good enough for the purposes of this point. Yes, there are the "oh, but what if there's some benefit that the HGs don't know about" issues, but the HGs are clearly wrong there regardless of whether you're right, and noting that they're wrong isn't just being pop-culture-Dunning-Krugered.

I got this in the volunteer-mod queue, and rated it "Neutral" because I think you're posting in good faith.

I do not, however, think that this is especially likely; the Five Eyes are certainly extremely powerful, but I don't think they actually want an Anglosphere that's tearing itself apart. They live here, and if the Anglosphere falls their power goes away.

I can believe in them getting mindkilled by the culture war like anyone else and sticking their hands where they don't belong. I can't believe they're responsible for the whole mess.

It's also such a low requirement that it shouldn't really ever come up unless the first point was horribly messed up.

Personal anecdote: I was in Australia's Chemistry Olympiad program twice. The setup of the program was that they put out an exam to anyone interested, best 21 people in the country went to a "scholar school" which was three weeks of extremely intensive university-level learning (we were all high school students), then based off a set of exams there and afterward they'd pick a team of four to represent Australia.

Now, I didn't get picked for the team either time (this was 07 and 08). But take a wild guess at the sex ratio, despite the total lack of any discrimination on the part of the program - it was simply "who had the highest marks on the exam".

Answer: I think there might have been one girl out of 21 one time? I know at least one time it was literally all boys. This wasn't unusual. Physics was similar; biology was typically something like 17:4 favouring girls (I was one of the boys in that one in 05 and 06).

The assumption that if you give everyone equal opportunity, the number of men and women both able and willing to do X will be the same? It's not actually true. Usually it's not that dramatic, but under extreme selection (IIRC it was about 32,000 kids a year doing that exam? And that's, of course, just the ones who were interested and whose parents/teachers/etc. thought they had a chance) little tips to the balance become nearly pass/fail. And AIUI Harvard has roughly the same degree of selection as was going on there.
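(To illustrate the tail effect with made-up numbers - the distributions below are hypothetical, not real psychometric data.)

```python
from statistics import NormalDist

# Hypothetical groups: a 0.1 SD mean shift plus 10% extra spread for group A.
group_a = NormalDist(mu=0.1, sigma=1.1)
group_b = NormalDist(mu=0.0, sigma=1.0)

for cutoff in (1, 2, 3, 4):
    tail_a = 1 - group_a.cdf(cutoff)
    tail_b = 1 - group_b.cdf(cutoff)
    print(f"cutoff {cutoff} SD: A:B ratio above cutoff = {tail_a / tail_b:.1f}")

# Prints ratios of roughly 1.3, 1.9, 3.1 and 6.2: near the middle the groups
# look similar, but the further out the selection cutoff sits, the more
# lopsided the selected group becomes.
```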

Forgot to publicly state this.

I meant:

  1. Food surplus (gradual improvement over the years - I picked 1400 as about the time Europe started to break out of Malthusian conditions - then a sudden spike in the 20th century as births ceased to keep pace with food production) making imprisonment without forced labour something that doesn't necessarily result in innocent deaths. When you look at the other plausible punishments for serious crimes (i.e. maiming/exile-beyond-the-frontier/execution), in most cases the criminals would rather be enslaved.

  2. The change in the military value of conscription (up drastically with firearms, then down drastically with mechanisation); during the period when conscription was extremely valuable, countries that didn't adopt it tended to be quickly conquered by countries that did.

Lesser examples include indentured servitude becoming far less of a win-win with the closure of the frontier.

My point here is - slavery and slavery-like things went away, to at least a large extent, because of technological progress rather than moral progress. Our ancestors had a harder problem to solve than we do, and declaring ourselves morally superior because they didn't take an option that didn't exist is, well, overweening pride.

I mean, yeah, usually it boils down to either "hysterical strength", "ignores pain", or both, and in both cases it's a mental phenomenon that can be achieved without drugs (indeed, the former is literally named after hysteria often producing it).

Not sure whether it's in play here, but practice being in pain (e.g. self-harm, possibly also long-term injection drug use) also increases pain tolerance to unusual levels (permanently?).

Am I taking a relativist stance? Maybe, depending on how you define "relativist". There is certainly a degree to which I'm nervous about mistaking what amount to ideological fashions for deep and lasting discoveries*. But in the case of slavery and slavery-like things, while I was indeed implying it was more OK for them than for us, I wasn't invoking relativism, merely changed technological circumstances affecting tradeoffs.

The first big one I was pointing at was punishment for serious crimes. Fines and humiliating punishments (the pillory, for instance) were already around in 1400, but for serious crimes (robbery, rape, murder...) that's not going to stop someone re-offending or provide a big enough deterrent. And, critically, the modern option of "stick them in a box and feed them" does not work in 1400; there is no food surplus, and innocents will die (perhaps not of starvation per se, but of disease due to undernutrition) if you have a significant number of useless eaters. The remaining options all suck; you can maim them, you can enslave them (either privately or in a working prison), you can exile them (to potentially re-offend somewhere else, or quite likely die), or you can execute them. It is highly non-obvious that slavery isn't the best and most humane option there; certainly, most murderers would rather be enslaved than die!

The second big one was conscription. Conscription was not actually a humongous deal in 1400 AIUI, but between then and now it went drastically up and then drastically back down. This had nothing to do with morals and everything to do with military reality: guns made mass untrained armies really good, and then mechanisation plus nuclear weapons made them less useful again. In the meantime, there wasn't really an option of "don't do conscription"; you'd just get conquered by someone who did.

You can go into tradeoffs in a lot of these cases. Indentured servitude was invented as a solution to the Parfit's hitchhiker problem when colonising new land (a would-be colonist who can't pay for his passage up front, and can't credibly commit to paying it off after he's safely across, doesn't get taken at all); getting rid of it was the right thing to do, but mostly because we stopped colonising new land. And we do still have it, in a highly-regulated form - if you join the army, you are required to follow lawful orders to the point of death, because the Parfit's hitchhiker problem is still a big deal in that profession.

There are limits to this; I can't make an argument in favour of hereditary slavery or slave raids that I'd truly accept, regardless of time period. But in a lot of instances, this was clearly technological progress and not moral progress; it wasn't that people in the past were ignorant and evil, they just had a different problem to solve than we do now, and that meant different policy choices.

And this is why I pointed to the whole Chesterton's Fence idea. If you understand why something was done and can point to some mistake, or some reason it no longer applies, then sure, tear it down. But if you can't do that, then you haven't ruled out the scenario of "I'm an idiot with delusions of wisdom", and that demands caution and humility.

*I'd suggest reading the Cornerstone Speech to see a particularly-extreme historical example - Ctrl-F "tedious" to skip to the relevant bit.

The moral basis for protecting life is surely more related to the ... continuance of that life and the person's experiences and actions, than the person's lack of consent to dying?

Preference utilitarianism and to some extent liberalism disagree with this.

I think you're making two separate arguments here and not distinguishing enough between them.

  1. You think that letting governments build high-powered AI while shutting down others' access to it is a bad idea.

  2. You don't like Eliezer Yudkowsky and those who follow him.

The thing is, these are entirely-separate points. Eliezer Yudkowsky does not want to let governments build high-powered AI. Indeed, his proposal of threatening and if necessary declaring war against governments that try is the direct and inevitable result of taking that idea extremely seriously - if you really want to stop a country from doing something it really wants to do, that usually means war. And he's fine with interpretability and alignment research - he doesn't think it'll work on neural nets, but in and of itself it's not going to destroy the world so if people want to do it (on small, safe models), more power to them.

So it's kind of weird that you set up Yudkowsky as your bugbear, but then mostly argue against something completely different from the "Yuddist" position.

As an aside, I think you're wrong to say that pursuing Yudkowskian Jihad will necessarily end in WWIII with China. The PRC has actually started requiring interpretability as a precondition of large AI deployment, which is a real safeguard against the machine-rebellion problem. For all that the CPC is tyrannical, they still don't actually want to kill all humans; they cannot rule humanity if they, and humanity, are dead. I won't pretend; there will probably be some countries that will attempt to cheat any international agreement not to pursue NNAGI, who will have to be knocked over. But I think there's a decent chance of achieving Jihad without a nuclear war being required.

Have Chinese/Russian inspectors looking at GPU production and use, as well as at the feedstocks for such, to guard against clandestine factories.

(The reverse would also be used.)

[This is quick, a partial response, I'll have to read your comment more carefully to give it a full and fair thought. Thanks!]

I'll wait for the full response before replying; I don't want to go off half-cocked, and it'd also be annoying to have two parallel conversations.

And I see the LessWrongers cheering and declaring victory at each new headline.

His acolytes seem to think "well the worst of both worlds at least gets us part of the world we want, so let's go for it".

I think this is more that a lot of the LWers had (incorrect) priors that the world would never listen until it was too late, so even insufficient levels of public and government reaction are a positive update (i.e. the amount of public reaction they are seeing, while not currently enough to Win, is more than they were expecting, so their estimated likelihood of it going up enough to Win has been raised). I'm open to being disproved, there, if you have a bunch of examples of LWers specifically thinking that national governments racing for NNAGI could possibly end well.

But anyway, even if you believe the people who brought us the Wuhan Institute of Virology have got it all covered, then you still have to worry about all the other countries in the world.

Sure! Like I said, I think that instituting a full global ban on large NNAI will probably require war at some point. But most countries do not have large nuclear stockpiles, so this doesn't necessarily mean a nuclear WWIII with X * 10^8 dead. I think the latter would probably still be worth it, but as a factual matter I think it's fairly likely that the PRC specifically would fall in line - while in theory a party machine can be made of AI, the current CPC consists solely of humans who do not want to kill all humans.

So...

Poot is Trump. Alice is the people going into histrionics over Trump. VET is an amalgamation of the COVID vaccination and the crackdown following Jan 6. The babysitter is Trump being jailed. The sheep are the Proles. I think the bit about switching to farming cotton at the end refers to some supposed scheme to kill all the proles and replace them with AI? Not sure how Hillary Clinton fits into that.

You're forgetting about copyright. Without copyright, software would indeed be competed down to nothing, but with it you cannot run an open operation to zero out the price of software (or of other information, such as fictional media or journal articles). You could, in theory, start a competing software company, but there are economies of scale and network effects that mostly prevent that. (Water has some of these too, but AIUI it's fairly tightly regulated, and in practice even when it's not, there's always the implicit threat of "if you play funny buggers with the water supply, much of the population will immediately drop everything to put your head on a pike - and in a universal-franchise democracy, they will get it".)

If you want the price of software to crash to zero, therefore, making "people realise" that its equilibrium price is zero won't actually do anything. You need to revoke, or at least massively roll back, copyright law. Note that this will cause supply to drop significantly unless you do something about that.

If person A is arguing thing B I disagree with because A believes B, I may want to write a thoughtful reply convincing A of !B.

If person A is arguing thing B I disagree with for some other reason, I definitely don't want to write such a reply, because A already believes !B and doesn't need convincing.

If P(person A believes B|person A argues B without noting devil's advocacy) drops significantly below 1, it therefore becomes a much-less-good deal for me to make those kinds of replies - which are at the core of the kind of discourse theMotte attempts to nurture. I risk wasting my time and feeling stupid.
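(A toy expected-value version of that argument, with all the numbers invented for illustration.)

```python
def reply_value(p_believes, benefit_if_sincere=1.0, cost_of_reply=0.4):
    """Expected value of writing a persuasive reply (invented units)."""
    return p_believes * benefit_if_sincere - cost_of_reply

print(round(reply_value(0.95), 2))  # 0.55: worth writing when sincerity is the norm
print(round(reply_value(0.30), 2))  # -0.1: not worth it once trolling is common
```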

To troll is to drag a baited hook through the water, punishing those who bite down. If you want to encourage people to bite down on food, therefore, it's best to forbid baited hooks. Signposted devil's advocacy is fine, though, because the hook's not baited.

Don't have Discord, so you're getting text here.

13: it depends what "diversity" is code for; I can think of at least three different meanings: "diversity of opinions", "racial diversity per se", and "skimming off the top of ROW's IQ pool". The first is mostly a strength under capitalism because it maximises efficiency at generating alternatives; the second is a weakness because racial animus lowers societal trust; the third is a strength, at least selfishly, for obvious reasons.

28: The adversarial justice system requires that criminal defence lawyers exist. So if this statement is true, you kind of have to accept one of the following propositions:

  1. "This system is fine, but it can't work without bad people" (this raises issues of "if society requires these bad people's badness in order to do good, are they really bad people?")

  2. "The adversarial justice system is bad" (this isn't clearly false - there are benefits and drawbacks - but it's such a big proposition that it really kind of subsumes your original point).

  3. "Criminal defence lawyers should exist but should all suck at their jobs" (a trial that always ends in conviction seems dominated by skipping the trial and proceeding straight to imposing sentence).

29: You can't stably privatise the police force and army; if you try, soon there will be a coup d'état, after which the police force and army will again be connected to government. A government with no monopoly on force is not much of a government.

There are also things like market failures and externalities that are most fairly handled coercively (the fire brigade has a classic free-rider problem: if I refuse to pay for the fire brigade, it will still usually prevent fires from reaching my house, because of all the people nearby who have paid for it). There are technically ways a labyrinthine system of free contracts can in practice implement this coercion (in this case, the homeowners of an area all sign a contract that they will pay for the fire brigade and won't sell their homes to anyone who doesn't enter into the same contract), but in many cases it's less paperwork to have one such contract - the social contract - and run it through government.
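(A toy payoff calculation for the fire-brigade case - all numbers invented for illustration.)

```python
fee = 10                   # each household's share of the brigade's cost
fire_loss = 1000           # damage if a fire reaches your house
p_fire_no_brigade = 0.05   # yearly fire risk with no brigade at all
p_fire_brigade = 0.01      # risk while a brigade exists: funded by the
                           # neighbours, it helps you whether or not you paid

cost_if_you_pay = fee + p_fire_brigade * fire_loss   # 20.0
cost_if_you_free_ride = p_fire_brigade * fire_loss   # 10.0
cost_if_nobody_pays = p_fire_no_brigade * fire_loss  # 50.0

# Free-riding is individually cheaper (10 < 20), but if everyone reasons that
# way the brigade disappears and everyone expects to lose 50 - hence coercion
# (taxes, or the contract labyrinth above) to hold the good equilibrium.
print(cost_if_you_pay, cost_if_you_free_ride, cost_if_nobody_pays)
```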

42: I agree with proposition #16, and that's basically where the "AI is dangerous" thought comes from. If IQ is power, then something with IQ 10,000 (whatever that means) is powerful indeed - and if that something thinks Earth would be a better place without humans (much as, say, humans think Earth would be better off without malarial mosquitoes), the default outcome is that we go the way of the mammoth and the sabre-toothed tiger when men showed up. This is really the core point; most of the argumentation in practice centres on a bunch of... well, the nasty word would be "cope", that attempts to carve out some sort of reason this general argument shouldn't apply.

45: I live in Australia. Australia hasn't polarised nearly as badly as the USA has; it's commonly conjectured that this is because we have compulsory voting and IRV, which forces our two largest parties toward each other via the Median Voter Theorem (you can only win in the centre, because extremists on your side are already forced to vote for you) and thus doesn't leave a lot to get polarised about. I think you're probably right about the marginal effect of slightly increased vs. decreased turnout from what the USA currently has, but this local dependence reverses when you get very far from that.

52: With the obvious exception of "doing MMA to people without their consent or some genuine cause", sure.

61: On the margin you certainly have a case, but the optimal amount of such regulation is importantly nonzero. With zero, you get bosses imposing hazards but not telling workers/customers about them, which generally means you don't get to have nice things.

With voluntary plurality voting, policies that appeal to the base may increase turnout or increase the percentage that vote for you instead of wasting it on a third party; this breaks the MVT.

IRV + compulsory negates that; those far from the centre are forced to preference you as long as you're one micron better than the other guy.
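(A toy simulation of that dynamic - the setup is invented for illustration, with voters as points on a 0-100 ideological line and every ballot flowing to the nearer of the two major parties, as compulsory voting plus full IRV preferences guarantees.)

```python
def two_party_count(voters, pos_a, pos_b):
    """With compulsory voting and full preferences, every ballot ultimately
    flows to whichever of the two major parties sits closer to the voter."""
    a = sum(1 for v in voters if abs(v - pos_a) < abs(v - pos_b))
    return a, len(voters) - a

voters = range(101)                     # one voter per point on a 0-100 line
print(two_party_count(voters, 30, 70))  # (50, 51): near-even split
print(two_party_count(voters, 45, 70))  # (58, 43): A gains by moving centreward

# A's outer flank still prefers it to B (45 is one micron better than 70 from
# the far left), so moving toward the median costs nothing and wins the centre.
```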

Calling them criminal defence lawyers is accurate; they practise criminal law (the law that deals with crimes) as defence attorneys.

I am also aware of why they exist; like I said, there are benefits to the adversarial system. With that said, inquisitorial systems don't always result in a police state.

Neurotypical was invented as the inverse of "autistic". Some autistics are, to use the old word, "morons", but over half aren't (I'm autistic and have IQ 130). It's one thing to use slurs, but is it so much to ask that you use accurate ones?

There are two purposes of ads:

  1. Create common knowledge that deals are available

  2. Hoodwink gullible people into taking bad deals.

#1 is strongly positive-sum because it reduces deadweight loss. #2 is strongly negative-sum; the equilibrium is that everyone gets taught about how not to fall for ads, and also that businesses spend large resources on marketing, both of which are losses to society. Back in the 1930s when marketing psychology and communications technology were far less advanced, #1 was the bigger effect. Nowadays it is fairly obvious that #2 is the bigger effect.
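(Toy arithmetic for purpose #1, with invented numbers, to show where the deadweight-loss reduction comes from.)

```python
buyers = 100              # people who each value the product at $15
price = 10                # the advertised deal
aware_without_ads = 0.2   # fraction who would find the deal anyway
aware_with_ads = 0.9

surplus_per_trade = 15 - price             # $5 of value created per trade
extra_trades = (aware_with_ads - aware_without_ads) * buyers
print(round(extra_trades * surplus_per_trade))  # 350: deadweight loss recovered
# purely by spreading information, with nobody hoodwinked into anything
```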

You can't uninvent modern marketing techniques, and you can't ban ads without extreme collateral damage, but reducing their effectiveness is almost certainly a net win. Ads masquerading as non-ads are more effective and therefore bad. Ads that are better targeted to people's psychological weaknesses are more effective and therefore bad.