
faul_sname

Fuck around once, find out once. Do it again, now it's science.

1 follower   follows 1 user   joined 2022 September 06 20:44:12 UTC

No bio...

User ID: 884

So literally some takes from 5 years ago on a different account, which, if I'm correct about which name you're implying guesswho used to post as, are more saying "in practice sexual assault accusations aren't being used in every political fight, so let's maybe hold off on trying drastic solutions to that problem until it's demonstrated that your proposed cure isn't worse than the disease".

Let he who has never posted a take that some people find objectionable cast the first stone.

The patients in question are minors, respectfully, they don't know what the hell they want.

And then when they turn 18 they become legal adults, famous for making good decisions that align with their long-term interests.

Some guardians approve it, but many have their arm twisted into it by dishonest statistics about risk of suicide. Doctors also mostly wash their hands of the responsibility...

Yeah this is pretty terrible, and the "the statistics on how things actually tend to go in practice are shit to begin with and then further obscured by biased parties on all sides" bit means that it's very hard to make a well-informed decision here. Such is life in an environment of imperfect and sometimes hostile information, but it still sucks.

Why is it beyond the pale to regulate an industry that functions this way?

I don't think it's beyond the pale; I just expect that the costs of regulation here, as it is likely to be implemented in practice, exceed the benefits. I don't actually think it's a good thing that a bunch of teenagers feel like they're trapped in the wrong body and that their best shot at happiness is major medical interventions; I just expect that any attempts by our current regulatory apparatus to curb the problem will cause horrible "unanticipated" problems.

If you have some statistics that show that, actually, regulation here is likely to prevent X0,000 unnecessary surgeries per year, which in turn will prevent Y,000 specific negative aftereffects, I might change my mind on that. But my impression as of now is that this is a small enough problem, and regulation a large and inexact enough hammer, that it's not worth it.

If @do_something had looked at their posting history they would easily have seen that, and the lengths to which @SecureSignals goes to follow the rules of the forum and to engage in constructive discourse.

"Goes to great lengths to engage in constructive discourse" is definitely not the pattern I have experienced when interacting with SS (nor, for that matter, has "follow the rules of the forum", though on that count I'm not sure he's actually worse than the median strongly-opinionated-poster here).

Example of the non-constructive discourse pattern of "throw out a bunch of claims, then when those claims are refuted don't acknowledge that and instead throw out a bunch more expensive-to-refute claims" here.

You are not the main problem here, no. Although I don't know who you're referring to as someone who both substantively agrees with you and also engages with difficult questions (rather than e.g. changing or dropping the topic when challenged and then coming back with the same points a week or two later).

Edit: or at least I don't consider you to be the main problem. I don't speak for everyone.

You were off by a year.

I challenge the premise "somewhat optimized"; we are currently living in a dysgenic age.

The optimization happened in the ancestral environment, not over the last couple hundred years. The current environment is probably mildly dysgenic, but the effect is going to be tiny because that environment just hasn't been around for very long.

Alternatively, we could just skip detecting which alleles lower IQ and instead eliminate very rare alleles, which are much more likely to be deleterious (e.g. replace any allele with frequency below a given threshold with its most similar allele with frequency above the threshold), without studying IQ at all.

I expect this would help a bit, just would be surprised if the effect size was actually anywhere near +1SD.
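For concreteness, here's a minimal sketch of the thresholding idea from the quote above -- the allele-frequency table, the 1% cutoff, and the similarity function are all invented for illustration:

```python
# Illustrative sketch only: the data, the 1% threshold, and the similarity
# function are made up; a real version would work on actual genotype data.
from typing import Dict

# Hypothetical per-site allele frequencies: site -> {allele: population frequency}
ALLELE_FREQS: Dict[str, Dict[str, float]] = {
    "site_001": {"A": 0.620, "G": 0.379, "T": 0.001},
    "site_002": {"C": 0.970, "T": 0.030},
}

def similarity(a: str, b: str) -> float:
    """Placeholder similarity measure; a real one would look at sequence context."""
    return 1.0 if a == b else 0.0

def replace_rare_alleles(genotype: Dict[str, str], threshold: float = 0.01) -> Dict[str, str]:
    """Replace any allele rarer than `threshold` with the most similar common allele."""
    edited = {}
    for site, allele in genotype.items():
        freqs = ALLELE_FREQS[site]
        if freqs.get(allele, 0.0) < threshold:
            common = {a: f for a, f in freqs.items() if f >= threshold}
            # Most similar common allele, ties broken by higher frequency.
            allele = max(common, key=lambda a: (similarity(a, allele), common[a]))
        edited[site] = allele
    return edited

print(replace_rare_alleles({"site_001": "T", "site_002": "C"}))
# {'site_001': 'A', 'site_002': 'C'} -- the rare T at site_001 gets swapped out
```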

In your hypothetical bet, how would the result "IQ as intended, but baby's brain too large for the baby to be delivered naturally" count?

If the baby is healthy otherwise, that counts just fine.

Morality has nothing to do with game theory

I disagree pretty strongly with that -- I think that "Bob is a moral person" and "people who are affected by Bob's actions generally would have been worse off if Bob's actions didn't affect them" are, if not quite synonymous, at least rhyming. The golden rule works pretty alright in simple cases without resorting to game theory, but I think game theory can definitely help in terms of setting up incentives such that people are not punished for doing the moral thing / incentivized to do the immoral thing, and that properly setting up such incentives is itself a moral good.

I mean it's more that it's quite obvious that "kys" is bad advice for you, so maybe you should examine the reasons why it's bad advice for you and see whether they're also true of a random farmer's kid in Mali.

In practice I expect not, if they start trying to turn that military power on groups of their own people.

Or have translations made for every language, etc.

Or build tools to allow everyone to translate anything into their native language. Technological solutions to social problems are great!

The argument is that despite some of the questionable things EA has been caught up in lately, they've saved 200 thousand lives! But did they save good lives? What have they really saved? More mouths to feed?

Yep. Some of those "mouths to feed" might end up becoming doctors and lawyers, but that's not why we saved them, and they would still be worth saving even if they all ended up living ordinary lives as farmers and fishermen and similar.

If you don't think that the lives of ordinary people are worth anything, that needless suffering and death are fine as long as they don't affect you and yours, and that you would not expect any help if the positions were flipped since they would have no moral obligation to help you... well, that's your prerogative. You can have your local community with close internal ties, and that's fine.

More cynically, I think this sort of caring is just a way to whitewash your past wrongs. It's PR maximizing: spend x dollars and get the biggest number you can put next to your shady Bay Area tech movement, which is increasingly under society's microscope given the immense power things like social networks and AI give your group.

I don't think effective altruism is particularly effective PR. Effective PR techniques are pretty well known, and they don't particularly look like "spend your PR budget on a few particular cause areas that aren't even agreed upon to be important and don't substantially help anyone with power or influence".

The funny thing is that PR maximizing would probably make effective altruism more effective than it currently is, but people in the EA community (myself included) are put off by things that look like advertising and don't actually do it.

Possible. My guess would be that if you took each user's comments over the past year, you would see minimal change in the decouplishness of that user's comments over the year, but if you looked at comment volume by decouplishness the fraction of comments by low-decouplers has increased substantially over that same year. Though I have not actually run such an analysis -- if anyone does, I'd be super interested in the results.
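The shape of the analysis I have in mind would be something like the sketch below -- load_comments and decoupling_score are hypothetical placeholders (scoring "decouplishness" is of course the hard part):

```python
# Sketch of the analysis described above. `load_comments` and `decoupling_score`
# are hypothetical; the point is just the shape of the two comparisons.
from collections import defaultdict
from statistics import mean

def load_comments():
    """Hypothetical loader: returns (user, days_ago, text) tuples for the past year."""
    raise NotImplementedError

def decoupling_score(text: str) -> float:
    """Hypothetical classifier: 0 = very low-decoupling, 1 = very high-decoupling."""
    raise NotImplementedError

def analyze(comments):
    # Split the year into an older half and a newer half.
    older = [c for c in comments if c[1] > 182]
    newer = [c for c in comments if c[1] <= 182]

    # Prediction 1: per-user drift is roughly zero (individual users barely change).
    halves_by_user = defaultdict(lambda: ([], []))
    for user, days_ago, text in comments:
        halves_by_user[user][0 if days_ago > 182 else 1].append(decoupling_score(text))
    per_user_drift = mean(
        mean(new_scores) - mean(old_scores)
        for old_scores, new_scores in halves_by_user.values()
        if old_scores and new_scores
    )

    # Prediction 2: low-decouplers' share of total comment volume grows.
    def low_decoupler_share(chunk):
        scores = [decoupling_score(text) for _, _, text in chunk]
        return sum(s < 0.5 for s in scores) / len(scores)
    composition_shift = low_decoupler_share(newer) - low_decoupler_share(older)

    return per_user_drift, composition_shift

# per_user_drift, composition_shift = analyze(list(load_comments()))
```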

It was what I expected, based on recent gh activity. I briefly thought it wasn't, based on the title of the new thing, and then I looked at the location.

If the thing being tested is the thing I think it is, I think it's pretty exciting.

Because horse-trading is necessary to achieve anything in politics no matter how strongly you feel that your political opponents should just give you what you want with no concessions on your end?

What happens if there's a crisis and the bulk of the population is economic migrants?

Empirically national solidarity seems to increase when there's a crisis. Unless the crisis is economic, I suppose - if lots of people moved to your country because of the promise of prosperity, and then your country started doing worse economically, those people might go seek their fortune elsewhere.

But yeah, losing the possibility of national solidarity based on centuries of common ancestry is a cost, at least for places where that was ever on the table. I expect the benefits are generally worth that cost, especially in a context where you can only control immigration and not emigration, but it is a cost.

It is only incoherent to claim that a zombie doesn't have any quale of its own, that it's not like anything to be a zombie for a zombie. We know that physics exist [citation needed], we know that "physicalist quale" exist, we know they are necessarily included in the zombie-definition as an apparently conscious, genuine human physical clone. So long as words are used meaningfully, it is not coherent for something to exist but also not exist.

Why would this be incoherent to claim? It might be wrong, but I think it's meaningful enough to be coherent. Consider an LLM that has been trained on human output.

For humans, the causal chain is "human experiences quale, human does action downstream of experiencing quale e.g. writes about said quale". For an LLM, the causal chain is "a bunch of humans experience qualia and write about their qualia, an LLM is trained on token sequences that were caused by qualia, LLM creates output consistent with having qualia". In this case, the LLM could perfectly well be a P-zombie, in the sense of something that can coherently behave as if it experienced qualia while not necessarily itself actually experiencing those qualia. There are qualia causally upstream of the LLM writing about qualia, but the flow of causality is not the same as it is in the case of a human writing about their own qualia, and so there's no particular reason we expect there to be qualia between steps A and A' of the causal chain.

Surround it with backticks (`).

If you do it will say new.

Edit: lol never mind, that's probably a bug.

Even triple backticks to make a code block don't work.


new

It's unspeakable.

It is possible for someone from SA to seek asylum in the US if they are prosecuted for homosexuality in SA.

Though if someone comes from SA seeking asylum, is told they won't get it, and then marries a US citizen in the hopes of dodging extradition, I would at the very least question their judgment.

I think you generally make a good point here.

Whether or not animals have qualia has no effect whatsoever on the causal progression of the universe.

This, though, I think is just factually wrong. The only reason "do animals have qualia" is a question we care about is that humans have qualia, and talk about those qualia, and thus the qualia of humans have an effect on the causal progression of the universe. If animals have qualia, it is as a part of their evolved responses to their environment, and it was only selected for to the extent that it causes their behavior to be better suited to their environment.

I could try to draw finer distinctions between the situations of post-WW2 USA and a hypothetical superintelligent AI, but really the more important point is that the people making the decisions regarding the nukes were human, and humans trip over the "some element in its utility function bars the action" and "self-interested" segments of that text. (And, under most conceptions, the 'rational agent' part, though you could rescue that with certain views of how to model a human's utility function.)

My point was more that humans have achieved an outcome better than the one that naive game theory says is the best outcome possible. If you observe a situation, come up with some math to model the situation, use that math to determine the provably optimal strategy, and then look at what actually happened and see that the actors obtained an outcome better than the one your model says is optimal, you should conclude that either the actors got very lucky or your mathematical model does not properly model this situation.
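As a toy illustration of that kind of mismatch (not a model of any real geopolitical situation): in a one-shot prisoner's dilemma the provably optimal move is to defect, yet two agents playing a reciprocal strategy over repeated rounds end up strictly better off than the mutual defection the one-shot model calls optimal:

```python
# Toy illustration: the one-shot "optimal" strategy (always defect) vs. a
# reciprocal strategy (tit-for-tat) in a repeated prisoner's dilemma.
PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

always_defect = lambda opponent_history: "D"  # the one-shot Nash equilibrium
tit_for_tat = lambda opponent_history: opponent_history[-1] if opponent_history else "C"

print(play(always_defect, always_defect))  # (100, 100): the "provably optimal" outcome
print(play(tit_for_tat, tit_for_tat))      # (300, 300): both do better than the model's optimum
```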

And that's not even getting into how "having a fundamental psychology shaped by natural selection in an environment where not having any other humans around ever meant probable death and certain inability to reproduce their genes" changes your utility function in a way that alters what the game-theoretic optimal actions are.

I think you're correct that the "it would be bad if all other actors like me were dead" instinct is one of the central instincts which makes humans less inclined to use murder as a means to achieve their goals. I think another central instinct is "those who betray people who help them make bad allies, so I should certainly not pursue strategies that look like betrayal". But I don't think those instincts come from peculiarities of evolution as applied to savannah-dwelling apes. I think they are the result of evolution selecting for strategies that are generally effective in contexts where an actor has goals which can be better achieved with the help of other actors than by acting alone with no help.

And I think this captures the heart of my disagreement with Eliezer and friends -- they expect that the first AI to cross a certain threshold of intelligence will rapidly bootstrap itself to godlike intelligence without needing any external help to do so, and then with its godlike intelligence can avoid dealing with the supply chain problem that human civilization is built to solve. Since it can do that, it would have no reason to keep humans alive, and in fact keeping humans alive would represent a risk to it. As such, as soon as it established an ability to do stuff in the physical world, it would use that ability to kill any other actor that is capable of harming it (note that this is the parallel to von Neumann's "a nuclear power must prevent any other nuclear powers from arising, no matter the cost" take I referenced earlier).

And if the world does in fact look like one where the vast majority of the effort humanity puts into maintaining its supply chains is unnecessary, and actually a smart enough agent can just directly go from rocks to computer chips with self-replicating nanotech, and ALSO the world looks like one where there is some simple discoverable insight or set of insights which allows for training an AI with 3 or more orders of magnitude less compute, I think that threat model makes sense. But "self-replicating useful nanotech is easy" and "there is a massive algorithmic overhang and the curves are shaped such that the first agent to pass some of the overhang will pass all of it" are load bearing assumptions in that threat model. If either of them does not hold, we do not end up in a world where a single entity can unilaterally seize control of the future while maintaining the ability to do all the things it wants to.

TL;DR version: I observe that "attempt to unilaterally seize control of the world" has not been a winning strategy in the past, despite there being a point in the past when very smart people said it was the only possible winning path. I think that, despite the very smart people who are now asserting that it's the only possible winning path, it is still not the only possible winning path. There are worlds where it is a winning path because all paths are winning paths for that entity -- for example, worlds where a single entity is capable enough that there are no benefits for it of cooperating with others. I don't think we live in one of those worlds. In worlds where there isn't a single entity that overpowers everyone else, the game theory arguments still make sense, but also empirically doing the "not game-theoretically optimal" thing has given humanity better outcomes than doing the "game-theoretically optimal" thing, and I expect that a superintelligence would be able to do something that gave it outcomes that were at least that good.

BTW this comes down to the age-old FOOM debate. Millions of words have been written on this topic already (note that every word in that phrase is a different link to thousands-to-millions of words of debate on the topic). People who go into reading those agreeing with Yudkowsky tend to read those and think that Yudkowsky is obviously correct and his interlocutors are missing the point. People who go into reading those disagreeing with Yudkowsky tend to read them and think that Yudkowsky is asserting that an unfalsifiable theory is true, and evading any questions that involve making concrete observations about what one would actually expect to see in the world. I expect that pattern would probably repeat here, so it's pretty unlikely that we'll come to a resolution that satisfies both of us. Though I'm game to keep going for as long as you want to.

So I have two points of confusion here. The first point of confusion is that if I take game theory seriously, I conclude that we should have seen a one-sided nuclear war in the early 1950s that resulted in a monopolar world, or, failing that, a massive nuclear exchange later that left either 1 or 0 nuclear-capable sides at the end. The second point of confusion is that it looks to me like it should be pretty easy to perform enormously damaging actions with minimal effort, particularly through the use of biological weapons. These two points of confusion map pretty closely to the doomer talking points of instrumental convergence and the vulnerable world hypothesis.

For instrumental convergence, I will shamelessly steal a paragraph from Wikipedia:

Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function; therefore a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources), or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with a lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely.

This sounds reasonable, right? Well, except now we apply it to nuclear weapons, and conclude that whichever nation first obtained nuclear weapons, if it wanted to obtain the best possible outcomes for itself and its people, would have to use its nuclear capabilities in order to establish and maintain dominance, and prevent anyone else from gaining nuclear capabilities. This is not a new take. John von Neumann was famously an advocate of a "preventive war" in which the US would launch a massive preemptive strike against Russia in order to establish permanent control of the world and prevent a world which contained multiple nuclear powers. To quote:

With the Russians it is not a question of whether but of when. If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o'clock, I say why not one o'clock?

And yet, 70 years later, there has been no preemptive nuclear strike. The world contains at least 9 countries that have built nuclear weapons, and a handful more that either have them or could have them in short order. And I think that this world, with its collection of not-particularly-aligned-with-each-other nuclear powers, is freer, more prosperous, and even more peaceful than the one that von Neumann envisioned.

In terms of the vulnerable world hypothesis, my point of confusion is that biological weapons actually look pretty easy to make without having to do anything fancy, as far as I can tell. And in fact there was a whole thing back in 2014 with some researchers passaging a particularly deadly strain of bird flu through ferrets. The world heard about this not because there was a tribunal about bioweapon development, but because the scientists published a paper describing their methodology in great detail.

The consensus I've seen on LW and the EA forum is that an AI that is not perfectly aligned will inevitably kill us in order to prevent us from disrupting its plans, and that even if that's not the case we will kill ourselves in short order if we don't build an aligned god which will take enough control to prevent that. The arguments for both propositions do seem to me to be sound -- if I go through each point of the argument, they all seem broadly correct. And yet. I observe that, by that set of arguments, we should already be dead several times over in nuclear and biological catastrophes, and I observe that I am in fact here.

Which leads me to conclude that either we are astonishingly lucky in a way that cannot be accounted for by the anthropic principle (see my other comment), or that the LW doomer worldview has some hole in it that I have so far failed to identify.

It's not a very satisfying anti-doom argument. But it is one that I haven't seen a good rebuttal to.

Yes, because the baseline for "randomly guessing" is 1/5612 ("twitter user @fluffyporcupine matches this specific one of the 5612 facebook users"), not 1/2 ("twitter user @fluffyporcupine is/is not the same user as facebook user Nancy Prickles").
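For concreteness, here's the arithmetic -- the 10% accuracy figure is made up purely to show how the choice of baseline changes the interpretation:

```python
# The two candidate baselines for "randomly guessing" in the matching task.
num_candidates = 5612

per_pair_baseline = 1 / 2               # "same user or not?" framed as a coin flip
matching_baseline = 1 / num_candidates  # "which of the 5612 users is it?"

print(f"coin-flip baseline: {per_pair_baseline:.4%}")  # 50.0000%
print(f"matching baseline:  {matching_baseline:.4%}")  # 0.0178%

# A hypothetical matcher that picks the right account 10% of the time sounds weak
# against a 50% baseline but is ~560x better than chance against the real one.
hypothetical_accuracy = 0.10  # made-up number for illustration only
print(f"lift over chance:   {hypothetical_accuracy / matching_baseline:.0f}x")  # ~561x
```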

Doesn't scare me for personal reasons -- I'm trivially identifiable, you don't need to resort to fancy ML techniques. But if you're actually trying to remain anonymous, and post under both your real name and a pseudonym, then perhaps it's worth paying attention to (e.g. spinning up separate throwaway accounts for anything you want to say that is likely to actually lead to significant damage to your real-world identity and doing the "translate to and from a foreign language to get changed phrasing without changed meaning" thing).

I think the guideline should be "the topic keeps coming up over and over again in the threads for separate weeks, and the conversation in the new week tends to reference the conversation in older weeks". Covid, when it was a thing, absolutely qualified as that. Russia/Ukraine and Israel/Palestine were somewhat less of this, since each week's thread tended to be about current events more than about continuing to hash out an ongoing disagreement. Trans stuff, I think, qualifies for this, as it does seem to be the same people having the same discussion over and over. Can't think of too many other examples.