@faul_sname's banner

faul_sname

Fuck around once, find out once. Do it again, now it's science.

1 follower   follows 1 user   joined 2022 September 06 20:44:12 UTC

No bio...

User ID: 884

Because horse-trading is necessary to achieve anything in politics no matter how strongly you feel that your political opponents should just give you what you want with no concessions on your end?

What happens if there's a crisis and the bulk of the population is economic migrants?

Empirically national solidarity seems to increase when there's a crisis. Unless the crisis is economic, I suppose - if lots of people moved to your country because of the promise of prosperity, and then your country started doing worse economically, those people might go seek their fortune elsewhere.

But yeah, losing the possibility of national solidarity based on centuries of common ancestry is a cost, at least for places where that was ever on the table. I expect the benefits are generally worth that cost, especially in a context where you can only control immigration and not emigration, but it is a cost.

It is only incoherent to claim that a zombie doesn't have any quale of its own, that it's not like anything to be a zombie for a zombie. We know that physics exist [citation needed], we know that "physicalist quale" exist, we know they are necessarily included in the zombie-definition as an apparently conscious, genuine human physical clone. So long as words are used meaningfully, it is not coherent for something to exist but also not exist.

Why would this be incoherent to claim? It might be wrong, but I think it's meaningful enough to be coherent. Consider an LLM that has been trained on human output.

For humans, the causal chain is "human experiences quale, human does action downstream of experiencing quale e.g. writes about said quale". For an LLM, the causal chain is "a bunch of humans experience qualia and write about their qualia, an LLM is trained on token sequences that were caused by qualia, LLM creates output consistent with having qualia". In this case, the LLM could perfectly well be a P-zombie, in the sense of something that can coherently behave as if it experienced qualia while not necessarily itself actually experiencing those qualia. There are qualia causally upstream of the LLM writing about qualia, but the flow of causality is not the same as it is in the case of a human writing about their own qualia, and so there's no particular reason to expect there to be qualia between steps A and A' of the causal chain.

Surround it with backticks (`).

If you do, it will say new.

Edit: lol never mind, that's probably a bug.

Even triple backticks to make a code block don't work.


new

It's unspeakable.

It is possible to seek asylum in the US if you are being prosecuted for homosexuality in SA.

Though if someone comes from SA seeking asylum, is told they won't get it, and then marries a US citizen in the hopes of dodging deportation, I would at the very least question their judgment.

I think you generally make a good point here.

Whether or not animals have qualia has no effect whatsoever on the causal progression of the universe.

This, though, I think is just factually wrong. The only reason "do animals have qualia" is a question we care about is because humans have qualia, and talk about those qualia, and thus the qualia of humans have an effect on the causal progression of the universe. If animals have qualia, it is as a part of their evolved responses to their environment, and it was only selected for to the extent that it caused their behavior to be better suited to their environment.

I could try to draw finer distinctions between the situations of post-WW2 USA and a hypothetical superintelligent AI, but really the more important point is that the people making the decisions regarding the nukes were human, and humans trip over the "some element in its utility function bars the action" and "self-interested" segments of that text. (And, under most conceptions, the 'rational agent' part, though you could rescue that with certain views of how to model a human's utility function.)

My point was more that humans have achieved an outcome better than the one that naive game theory says is the best outcome possible. If you observe a situation, come up with some math to model it, use that math to determine the provably optimal strategy, and then look at the actual outcomes and see that the actors obtained an outcome better than the one your model says is optimal, you should conclude that either the actors got very lucky or that your mathematical model does not properly model this situation.
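(A toy illustration of that pattern, with payoff numbers I made up: in a one-shot prisoner's dilemma the "provably optimal" play for each actor is to defect, yet the cooperate/cooperate outcome that people in practice often reach is strictly better for both.)

```python
# One-shot prisoner's dilemma with made-up payoffs, purely illustrative.
# (row_action, col_action) -> (row_payoff, col_payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Defecting is the dominant strategy for the row player no matter what the other does...
for other in ("cooperate", "defect"):
    assert payoffs[("defect", other)][0] > payoffs[("cooperate", other)][0]

# ...so the model's "optimal" outcome is defect/defect at (1, 1), which is worse
# for both players than the non-equilibrium cooperate/cooperate outcome at (3, 3).
assert payoffs[("cooperate", "cooperate")][0] > payoffs[("defect", "defect")][0]
```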

And that's not even getting into how "having a fundamental psychology shaped by natural selection in an environment where not having any other humans around ever meant probable death and certain inability to reproduce their genes" changes your utility function in a way that alters what the game-theoretic optimal actions are.

I think you're correct that the "it would be bad if all other actors like me were dead" instinct is one of the central instincts which makes humans less inclined to use murder as a means to achieve their goals. I think another central instinct is "those who betray people who help them make bad allies, so I should certainly not pursue strategies that look like betrayal". But I don't think those instincts come from peculiarities of evolution as applied to savannah-dwelling apes. I think they are the result of evolution selecting for strategies that are generally effective in contexts where an actor has goals which can be better achieved with the help of other actors than by acting alone with no help.

And I think this captures the heart of my disagreement with Eliezer and friends -- they expect that the first AI to cross a certain threshold of intelligence will rapidly bootstrap itself to godlike intelligence without needing any external help to do so, and then with its godlike intelligence can avoid dealing with the supply chain problem that human civilization is built to solve. Since it can do that, it would have no reason to keep humans alive, and in fact keeping humans alive would represent a risk to it. As such, as soon as it established an ability to do stuff in the physical world, it would use that ability to kill any other actor that is capable of harming it (note that this is the parallel to von Neumann's "a nuclear power must prevent any other nuclear powers from arising, no matter the cost" take I referenced earlier).

And if the world does in fact look like one where the vast majority of the effort humanity puts into maintaining its supply chains is unnecessary, and actually a smart enough agent can just directly go from rocks to computer chips with self-replicating nanotech, and ALSO the world looks like one where there is some simple discoverable insight or set of insights which allows for training an AI with 3 or more orders of magnitude less compute, I think that threat model makes sense. But "self-replicating useful nanotech is easy" and "there is a massive algorithmic overhang and the curves are shaped such that the first agent to pass some of the overhang will pass all of it" are load bearing assumptions in that threat model. If either of them does not hold, we do not end up in a world where a single entity can unilaterally seize control of the future while maintaining the ability to do all the things it wants to.

TL;DR version: I observe that "attempt to unilaterally seize control of the world" has not been a winning strategy in the past, despite there being a point in the past when very smart people said it was the only possible winning path. I think that, despite the very smart people who are now asserting that it's the only possible winning path, it is still not the only possible winning path. There are worlds where it is a winning path because all paths are winning paths for that entity -- for example, worlds where a single entity is capable enough that there are no benefits for it of cooperating with others. I don't think we live in one of those worlds. In worlds where there isn't a single entity that overpowers everyone else, the game theory arguments still make sense, but also empirically doing the "not game-theoretically optimal" thing has given humanity better outcomes than doing the "game-theoretically optimal" thing, and I expect that a superintelligence would be able to do something that gave it outcomes that were at least that good.

BTW this comes down to the age-old FOOM debate. Millions of words have been written on this topic already (note that every word in that phrase is a different link to thousands-to-millions of words of debate on the topic). People who go into reading those agreeing with Yudkowsky tend to read those and think that Yudkowsky is obviously correct and his interlocutors are missing the point. People who go into reading those disagreeing with Yudkowsky tend to read them and think that Yudkowsky is asserting that an unfalsifiable theory is true, and evading any questions that involve making concrete observations about what one would actually expect to see in the world. I expect that pattern would probably repeat here, so it's pretty unlikely that we'll come to a resolution that satisfies both of us. Though I'm game to keep going for as long as you want to.

So I have two points of confusion here. The first point of confusion is that if I take game theory seriously, I conclude that we should have seen a one-sided nuclear war in the early 1950s that resulted in a monopolar world, or, failing that, a massive nuclear exchange later that left either 1 or 0 nuclear-capable sides at the end. The second point of confusion is that it looks to me like it should be pretty easy to perform enormously damaging actions with minimal effort, particularly through the use of biological weapons. These two points of confusion map pretty closely to the doomer talking points of instrumental convergence and the vulnerable world hypothesis.

For instrumental convergence, I will shamelessly steal a paragraph from wikipedia:

Agents can acquire resources by trade or by conquest. A rational agent will, by definition, choose whatever option will maximize its implicit utility function; therefore a rational agent will trade for a subset of another agent's resources only if outright seizing the resources is too risky or costly (compared with the gains from taking all the resources), or if some other element in its utility function bars it from the seizure. In the case of a powerful, self-interested, rational superintelligence interacting with a lesser intelligence, peaceful trade (rather than unilateral seizure) seems unnecessary and suboptimal, and therefore unlikely.

This sounds reasonable, right? Well, except now we apply it to nuclear weapons, and conclude that whichever nation first obtained nuclear weapons, if it wanted to obtain the best possible outcomes for itself and its people, would have to use its nuclear capabilities to establish and maintain dominance, and prevent anyone else from gaining nuclear capabilities. This is not a new take. John von Neumann was famously an advocate of a "preventive war" in which the US launched a massive preemptive strike against Russia in order to establish permanent control of the world and prevent a world which contained multiple nuclear powers. To quote:

With the Russians it is not a question of whether but of when. If you say why not bomb them tomorrow, I say why not today? If you say today at 5 o'clock, I say why not one o'clock?

And yet, 70 years later, there has been no preemptive nuclear strike. The world contains at least 9 countries that have built nuclear weapons, and a handful more that either have them or could have them in short order. And I think that this world, with its collection of not-particularly-aligned-with-each-other nuclear powers, is freer, more prosperous, and even more peaceful than the one that von Neumann envisioned.

In terms of the vulnerable world hypothesis, my point of confusion is that biological weapons actually look pretty easy to make without having to do anything fancy, as far as I can tell. And in fact there was a whole thing back in 2014 with some researchers passaging a particularly deadly strain of bird flu through ferrets. The world heard about this not because there was a tribunal about bioweapon development, but because the scientists published a paper describing their methodology in great detail.

The consensus I've seen on LW and the EA forum is that an AI that is not perfectly aligned will inevitably kill us in order to prevent us from disrupting its plans, and that even if that's not the case we will kill ourselves in short order if we don't build an aligned god which will take enough control to prevent that. The arguments for both propositions do seem to me to be sound -- if I go through each point of the argument, they all seem broadly correct. And yet. I observe that, by that set of arguments, we should already be dead several times over in nuclear and biological catastrophes, and I observe that I am in fact here.

Which leads me to conclude that either we are astonishingly lucky in a way that cannot be accounted for by the anthropic principle (see my other comment), or that the LW doomer worldview has some hole in it that I have so far failed to identify.

It's not a very satisfying anti-doom argument. But it is one that I haven't seen a good rebuttal to.

Avoiding politics is pretty based.

So if I'm understanding correctly, your claim is that, for most things that evolutionarily psychology predicts, most people would make the same predictions?

If so, I think I buy that for a lot of things (e.g. "people intuitively value their immediate family more than their distant family, and people who look like them more than people who don't") though definitely not all of them (e.g. I expect evolutionary psychologists to have very different views on infanticide than the general population).

I imagine there's probably one particular claim that evo psych makes that you're thinking of here, but I'm actually not sure which one. Evo psych makes kind of a lot of claims and many of them are outside the Overton window.

Because "women are capable of lying about rape and domestic violence" is not actually a claim evo psych makes (except in the very general sense of "strategies that involve deception are adaptive sometimes"). Most people won't agree to that in an internet argument because they expect that "are capable of" will be treated as "mostly do" or some other similar "gotcha". But that's not a matter of not admitting it to themselves, it's a matter of not admitting it to a hostile internet rando.

which implies that you consider her highly competent on the basis of her deep association with a highly regarded group.

Ah, I see how what I wrote looks like that. I was gesturing more towards "bay-area rationalists are a very unusual culture with nonstandard beliefs, and she has bought deeply into those beliefs, and she occupies a fairly prominent position within that culture".

A central tenet of those beliefs is something like "fuck the natural state of things, fuck stodgy traditionalists, fuck the people who sneer on anything which seems weird to them. We can do it better because we are very smart and we are willing to do weird icky things if a cost-benefit analysis says it's worthwhile". I would not describe this strategy as "highly competent" so much as "high-variance" -- when it works it works great (see the number of bitcoin multimillionaires, calling out COVID as impactful very early) but when it fails it fails spectacularly (see SBF among quite a large number of other less prominent things).

I expect someone who buys into those beliefs to be far more willing than typical to accept something "weird and icky" like surrogacy if it gets her what she wants. And also I think it just may be a lot less universal than you think for women to crave the miracle of being pregnant and giving birth specifically, rather than craving the miracle of having her own offspring. (For reference, even outside of the rat community I've heard the topic of maybe using IVF come up a handful of times, and I think surrogacy came up in every one of those conversations).

My workplace (eng team is in the 5-20 people range) has someone with the title of architect - most of his job is just normal feature work (we don't have any software developers whose job isn't primarily writing / testing / deploying software), but he's also the one in charge of having and sharing a coherent vision of what abstractions our system is built out of. So for example, "we're doing domain-driven design, here are some examples of changelists that were good, here's the standard folder structure, here is the auto-formatter config so we can never think about code formatting again" style of stuff.

I don't know what the story is at larger companies though, and I've heard a lot of people speak of architects as if they don't do much, so the situation at my workplace may be atypical.

I will note that it is an important part of my world model that people with chronic pain, or with gender dysphoria, are in fact experiencing sensations which they interpret as aversive. And, while there exist humans who can execute the mental motion of "recontextualize your experiences such that the pain is not suffering", I don't think telling people to do that directly is likely to be a winning strategy.

"There is no such thing as an unmediated experience" is a true fact about the world (one that people in our particular corner of the internet are particularly bad at acknowledging - see all of the "I didn't fall for that optical illusion" types). In isolation, is is not usually a helpful fact about the world. However, rephrasing it as "here are some different lenses you can view your experiences through, keep trying out different lenses until you find one you like" is an approach that I expect will work more often.

Ah. That'd do it. Thanks.

"In the real world, with significant effect sizes" were important qualifiers there - so if it replicates but doorways make you 3% more likely to forget (instead of like 30% as in the study), that wouldn't count, and I'm not even sure what to think about video game doorways having similar effects to real world doorways.

How would you feel about the following test?

  1. Have some variety of demanding, finicky task you want done on mturk. Probably something like "read a dull passage of marketing copy that is long enough to require scrolling, answer a couple questions about it".

  2. Set the task up such that it is structured in batches of 10.

  3. During the 8th task of 10, once they've scrolled a bit down, have a janky popup that blocks the screen, says "Feeling tired? {Grabbing a drink of water|Taking a moment to stretch} can help you retain focus.", and shows a 60 second countdown.

  4. When they get back, have a "bug" where they can't scroll back up.

  5. Measure performance on each task in the batch.

A positive result would look like "performance on task 8 was better in the stretch group than in the water group, p < 0.02", combined with not finding performance differences on tasks 1 to 7 at p < 0.02 using the same methodology, and would also only be considered positive if the effect size were substantial (1.2x as many mistakes in the water group as in the stretching group, say).
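For concreteness, here's a minimal sketch of how I'd imagine scoring the task-8 comparison. The mistake counts are invented placeholders and the choice of a Mann-Whitney test is my own assumption, not something we've agreed on:

```python
# A minimal sketch of the task-8 comparison. The mistake counts below are
# invented placeholders, not real data.
from scipy.stats import mannwhitneyu

# Mistakes on task 8, one entry per worker.
water_group   = [3, 2, 4, 1, 3, 2, 5, 2, 3, 4]   # placeholder data
stretch_group = [2, 1, 3, 1, 2, 2, 3, 1, 2, 2]   # placeholder data

# One-sided test: did the water group make more mistakes than the stretch group?
stat, p = mannwhitneyu(water_group, stretch_group, alternative="greater")

# Effect size as the ratio of mean mistakes (water / stretch).
effect = (sum(water_group) / len(water_group)) / (sum(stretch_group) / len(stretch_group))

# "Positive result" per the criteria above: p < 0.02 on task 8, no p < 0.02
# difference on tasks 1-7 by the same test, and a mistake ratio of at least ~1.2.
print(f"p = {p:.3f}, water/stretch mistake ratio = {effect:.2f}")
```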

BTW, for your reference I'd estimate probably 2-5% that the above would get a positive result. The doorway research totally smells like the type of research that'll fail to replicate. It just doesn't smell like the type of thing where, if I said "that smells like it won't replicate" 100 times, I would only expect to be wrong one of those times.

(also yeah, after writing that all out it sounds like a lot of work to test, so unless we're talking about a quite large 99:1 bet I actually don't have the attention span for it).

If you're actually willing to bet a substantial amount at 99:1 I'll happily take the flip side of that bet, conditional on us being able to work out an experimental procedure we both agree on (but I'd expect that we could in fact come up with such a procedure).

I probably wouldn't take you up on that at 4:1 though. 99:1 is just a really extreme odds ratio.
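To spell out the arithmetic behind that: at my own 2-5% estimate of a positive result (call it 3.5%), taking the "it replicates" side is positive expected value at 99:1 but clearly negative at 4:1. A toy calculation:

```python
# Expected value per $1 staked on "the experiment comes up positive",
# at my rough 3.5% estimate of that outcome (midpoint of the 2-5% range above).
p_positive = 0.035

for payout in (99, 4):   # dollars won per dollar staked if the result is positive
    ev = p_positive * payout - (1 - p_positive)
    print(f"{payout}:1 odds -> EV {ev:+.2f} per $1 staked")
# 99:1 -> roughly +2.5 per dollar; 4:1 -> roughly -0.8 per dollar.
```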

Ultimately, the credibility of that particular piece of testimony does hinge on the question of whether it is possible for a meat-powered fire to generate enough heat to self-sustain once it gets started.

But let's actually do the math ourselves, instead of just parroting the arguments of ChatGPT, a language model which infamously has trouble telling you which of two numbers is larger unless you tell it to work through the problem step by step.

Enter the bomb calorimeter. It is a reasonably accurate way of measuring the energy content of various substances. Measurements using bomb calorimeters suggest that fat contains 38 - 39 kJ / g, proteins 22 - 25 kJ / g, and carbohydrates 15 - 18 kJ / g.

Humans are composed of approximately 62% water, 16% protein, 16% fat, 1% carbohydrates, and 6% other stuff (mostly minerals). For the cremation story to be plausible, let's say that the water would need to be raised to 100ºC (4.2 J / g / ºC) and then boiled (2260 J / g), and the inorganic compounds (call their specific heat also 4 J / g / ºC -- it's probably closer to 1, which is the specific heat of calcium carbonate, but as we'll see this doesn't really make much difference) raised to (let's say) 500ºC.

So for a 50 kg human, that's

  • 31 kg water - 12 MJ to raise to 100ºC, 70 MJ to actually boil

  • 8 kg protein - 188 MJ released from burning under ideal conditions

  • 8 kg fat - 308 MJ released from burning under ideal conditions

  • 500 g carbohydrates - 8 MJ released from burning under ideal conditions

  • 3 kg other - 6 MJ to raise to 500ºC.

So that's about 500 MJ released by burning, of which about 90 MJ goes towards heating stuff and boiling water. That sure looks energy positive to me.
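If anyone wants to check or tweak the numbers, here's the same arithmetic as a minimal Python sketch. The composition fractions and heats of combustion are the rounded figures from above; the starting temperatures are my own rough assumptions and barely matter.

```python
# Rough energy balance for burning a 50 kg body, using the rounded figures above.
# All energies are in joules.

body_mass_g = 50_000

# Approximate composition (fraction of body mass)
water_g   = 0.62 * body_mass_g   # 31 kg
protein_g = 0.16 * body_mass_g   #  8 kg
fat_g     = 0.16 * body_mass_g   #  8 kg
carbs_g   = 0.01 * body_mass_g   #  0.5 kg
other_g   = 0.06 * body_mass_g   #  3 kg, mostly minerals

# Energy released by combustion (midpoints of the bomb-calorimeter ranges, in J/g)
released = fat_g * 38_500 + protein_g * 23_500 + carbs_g * 16_500

# Energy consumed: warm the water to 100 C, boil it off, and heat the mineral
# fraction to ~500 C (specific heat generously taken as 4 J/g/C)
consumed = (water_g * 4.2 * 80      # warm water from ~20 C to 100 C
            + water_g * 2260        # vaporize the water
            + other_g * 4.0 * 480)  # heat the minerals to ~500 C

print(f"released: {released / 1e6:.0f} MJ")               # ~500 MJ
print(f"consumed: {consumed / 1e6:.0f} MJ")               # ~90 MJ
print(f"surplus:  {(released - consumed) / 1e6:.0f} MJ")
```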

Sanity check -- a tea light is a 10g blob of paraffin wax, which has a very similar energy density to fat. So a tea light should release about 400 kJ of energy when burned, which means that a tea light should contain enough energy to boil off about 150 mL of water, or to raise a bit over a liter of water from room temperature to boiling, if all of the energy is absorbed in the water.

And, in fact, it is possible to boil water using tea lights. A tea light takes about 4 hours to burn fully. That video shows 17 tea lights burning for 8.5 minutes, which should release about 60% as much energy as is contained in a single tea light. It looks like that brought about 400ml of water to a boil, so the sanity check does in fact check out.
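Same sanity check in numbers (the 10 g of wax, the ~4 hour burn time, and the 17 candles for 8.5 minutes are the figures above; ~42 kJ/g is a typical heat of combustion for paraffin):

```python
# Tea light sanity check, using the figures above.
candle_wax_g      = 10       # wax in one tea light
paraffin_kj_per_g = 42       # typical heat of combustion of paraffin

energy_per_candle_kj = candle_wax_g * paraffin_kj_per_g       # ~420 kJ

# Boiling off 150 mL of room-temperature water (warm to 100 C, then vaporize)
boil_off_150ml_kj = (150 * 4.2 * 80 + 150 * 2260) / 1000      # ~390 kJ

# 17 candles each burning 8.5 minutes of a ~240 minute total burn time
video_energy_kj = 17 * (8.5 / 240) * energy_per_candle_kj     # ~250 kJ, ~60% of one candle

# Bringing 400 mL of water from ~20 C to 100 C
heat_400ml_kj = 400 * 4.2 * 80 / 1000                         # ~134 kJ

print(energy_per_candle_kj, boil_off_150ml_kj, video_energy_kj, heat_400ml_kj)
```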

I really don't think that random british dude who is showing you how to use candles to boil water during a power failure is in on a global conspiracy to cover up a lack of genocide, but, just in case you think he is, this is an experiment you can try at home with your own materials.

Edit: clarity

I'm probably using a non-standard definition of normies. I'm not sure there is a standard definition of "normie".

The context was "to a normie, banks are just these things that are part of the environment", which was in the broader context of "to someone inclined to be woke, banks are uncool", so I was figuring we were operating under a very broad definition of "normie" that included "woke" people.

I think television news broadcasts are uncool in the same way banks are, though not in the way that trees and grass are.

My impression is that normies, and particularly the type of normies that exhibit "woke" sentiments, are usually pretty anti-establishment, despite the establishment trying to pander to them. So my judgement about banks is mostly driven by the idea that banks are about as "establishment" as you can get, and when the establishment starts supporting your ideas, you need new, better ideas that the establishment is not willing to support to prove that you are not one of them.

Clinical or no? I've heard not-terrible things about Watson LIMS (Thermo) in a clinical setting.

Here's some shared foundational rationality. No matter what a man is or does he cannot - in the logic of transexuality too - be or become a transman. Only a man can become a transwoman. Therefore transwomen aren't women, transmen aren't men, and this accords with the logic of transexuality.

You're playing word games, by saying "man" and "trans-man" when you are referring to the concepts "cis-man" and "trans-man". When using that language, your post becomes

Here's some shared foundational rationality. No matter what a cisman is or does he cannot - in the logic of transexuality too - be or become a transman. Only a cisman can become a transwoman. Therefore transwomen aren't ciswomen, transmen aren't cismen, and this accords with the logic of transexuality.

If this is not sufficiently illuminating, let's try with the following substitutions:

• cis -> native-born

• trans -> foreign-born

• man -> American

• woman -> Mexican

Here's some shared foundational rationality. No matter what an American is or does he cannot - in the logic of transnationality too - be or become a foreign-born American. Only an American can become a foreign-born Mexican. Therefore foreign-born Mexicans aren't Mexicans, foreign-born Americans aren't Americans, and this accords with the logic of transnationality.

Like you can find people who believe that but it's pretty clearly an argument over where the boundary should be drawn, and saying that it's "shared foundational rationality" is an attempt to consensus-build.

That sounds to me like "infighting on the far left", since I don't think it's the center-left doing the attacking, and "actual communist" is further left than it's possible to go without someone saying that Actually They Are Not True Leftists, We Are True Leftists And They Are Heathens.

Because it doesn't have to be 'widespread' to have a significant effect on outcomes. Even accounting for how ambiguous that term is. If 50,000 fraudulent votes are cast in one precinct, that might not count since it wasn't taking place elsewhere?

I am not aware of anyone pointing out 50 fraudulent votes within a single district, let alone 50,000. If something like 50,000 fraudulent votes in a single district had actually been shown to have happened, that argument would be a lot more relevant. Particularly if those 50,000 fraudulent votes came from individual people who should not have been allowed to vote individually deciding to vote.

Basically my issue with this is that the type of fraudulent vote they're going after here isn't the type of fraud that I would expect to swing elections.

Of course, the Dems spent years alleging Russian 'interference' with the 2016 election despite no direct evidence, so I also don't think they've demonstrated good faith on the issue anyway.

Agreed.

Honestly I feel like all the talk of fraud is a distraction from things that are legal but have significant effects on voter turnout (e.g. polling place locations, canvassing, changing laws around mail-in ballots, etc).

The reality is: vegetables suck, you just have to eat them

I recognize the higher-level point you're making, and I think it's a valid point, but on the object level I think you might need a steamer or an air fryer. If your experience is that vegetables suck, you may get a lot of mileage out of figuring out ways to cook them that you actually like.

If I have the choice between a bag of Doritos and a bowl of lightly steamed broccoli with lemon, pepper, and a sprinkle of MSG, I'll generally take the broccoli (assuming both are already prepared). As snacks go, chips are cheaper, and much more convenient, and much easier to mindlessly eat with one hand while doing something else, but I don't think I actually experience more enjoyment while eating chips than I do while eating vegetables that I cooked according to my own preferences.

Yes, because the baseline for "randomly guessing" is 1/5612 ("twitter user @fluffyporcupine matches this specific one of the 5612 facebook users"), not 1/2 ("twitter user @fluffyporcupine is/is not the same user as facebook user Nancy Prickles").
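Concretely (the matcher accuracy below is a hypothetical number, just to show why the choice of baseline matters):

```python
# Which of the 5612 facebook accounts is twitter user @fluffyporcupine?
n_candidates = 5612
chance_baseline = 1 / n_candidates    # ~0.00018: pick a candidate uniformly at random

matcher_top1_accuracy = 0.30          # hypothetical de-anonymizer accuracy
print(matcher_top1_accuracy / chance_baseline)   # ~1700x better than chance
print(matcher_top1_accuracy / 0.5)               # against the wrong 50/50 baseline it looks "worse than chance"
```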

Doesn't scare me for personal reasons -- I'm trivially identifiable, you don't need to resort to fancy ML techniques. But if you're actually trying to remain anonymous, and post under both your real name and a pseudonym, then perhaps it's worth paying attention to (e.g. spinning up separate throwaway accounts for anything you want to say that is likely to actually lead to significant damage to your real-world identity and doing the "translate to and from a foreign language to get changed phrasing without changed meaning" thing).