
faul_sname

Fuck around once, find out once. Do it again, now it's science.

1 follower   follows 1 user  
joined 2022 September 06 20:44:12 UTC

User ID: 884

Intelligence isn't caused by a few genes, but by thousands of genes that individually have a minuscule contribution but, when added up, cause >50% of existing variation in intelligence

I would bet good money that taking a genome, and then editing it until it had the high-intelligence variant of every gene correlated with higher intelligence, would not get you a baby that was even a single standard deviation above what you would naively predict based on the parents.

Consider a simple toy model, where

  1. Human intelligence is modulated by the production of a magical protein Inteliquotin (IQN), which causes brains to wrinkle.
  2. Human intelligence is a direct but nonlinear function of IQN concentration -- if IQN concentration is too low, it results in smooth brains (and thus lower intelligence), while if the concentration is too high, it interferes with metabolic processes in the neurons (and thus also results in lower intelligence). Let's say the average concentration is 1.0µg/mL.
  3. The optimal IQN concentration for inclusive fitness in the ancestral environment, which is also the average among the human population, is 1.0µg/mL. However, the optimal concentration for intelligence specifically is 10% higher, at 1.1µg/mL (between those concentrations, improved fitness due to increased intelligence is more than offset by decreased fitness due to, let's say, "increased propensity for Oculoumbilical Tendency leading to elevated predation rates")
  4. The production of IQN is modulated by 1000 different genes IQN000 - IQN999, with the high-IQN variant of each gene occurring in 10% of the population, and with each high-IQN variant independently increasing IQN concentration by 0.01µg/mL.

If you have this scenario, each gene IQN000...IQN999 will explain about 0.1% of the variance in IQ, and yet using CRISPR to force just 5% more of the IQN genes to the "good" variant will lead to poorer outcomes than just leaving the system alone: those extra 50 high-IQN variants add 0.5µg/mL, blowing far past the 1.1µg/mL optimum.
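If you want to check that arithmetic, here's a minimal simulation of the toy model (the `intelligence` function is just an arbitrary peaked stand-in, not anything principled):

```python
import random

N_GENES = 1000   # IQN000 .. IQN999
FREQ = 0.10      # population frequency of each high-IQN variant
EFFECT = 0.01    # µg/mL of IQN added per high-IQN variant
OPTIMUM = 1.1    # IQN concentration that maximizes intelligence

def iqn(genome):
    """Total IQN concentration for a genome (list of 0/1 variant indicators)."""
    return sum(genome) * EFFECT

def intelligence(concentration):
    """Arbitrary peaked function: highest at OPTIMUM, falling off on both sides."""
    return -(concentration - OPTIMUM) ** 2

def mean(xs):
    return sum(xs) / len(xs)

random.seed(0)
pop = [[1 if random.random() < FREQ else 0 for _ in range(N_GENES)]
       for _ in range(10_000)]

print(f"mean IQN, unedited: {mean([iqn(g) for g in pop]):.3f} µg/mL")  # ~1.00
print(f"mean intelligence, unedited: {mean([intelligence(iqn(g)) for g in pop]):.4f}")

# "Naive CRISPR": force the first 5% of genes (50 of them) to the high-IQN
# variant, adding ~45 extra high variants per genome on average.
edited = [[1] * 50 + g[50:] for g in pop]
print(f"mean IQN, edited: {mean([iqn(g) for g in edited]):.3f} µg/mL")  # ~1.45
print(f"mean intelligence, edited: {mean([intelligence(iqn(g)) for g in edited]):.4f}")
```

The edited genomes end up at ~1.45µg/mL, far past the optimum, and mean intelligence drops accordingly, even though every single edit was to a variant positively correlated with intelligence in the unedited population.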

All that being said, you should be able to squeeze some results out of that technique. Just not multiple SD of improvement, at least not by doing the naive linear extrapolation thing.

Admins on the motte can see who up/downvoted a thing.

That's a nice legal theory you have there.

Let's say you're an engineer at one such company, and you want to expose a UART serial interface to allow the device you're selling to be debuggable and modifiable for the subset of end-users who know what they're doing. You say "this is part of the consumer-facing functionality". The regulator comes back and says "ok, where's the documentation for that consumer-facing functionality" and you say "we're not allowed to share that due to NDAs, but honest, this completely undocumented interface is part of the intended consumer-facing functionality".

How do you expect that to go over with the regulator? Before that, how do you expect the conversation with the legal department at your company to go when you tell them that's your plan for what to tell the regulator if they ask?

So literally some takes from 5 years ago on a different account, which, if I'm correct about which name you're implying guesswho used to post as, are more saying "in practice sexual assault accusations aren't being used in every political fight, so let's maybe hold off on trying drastic solutions to that problem until it's demonstrated that your proposed cure isn't worse than the disease".

Let he who has never posted a take that some people find objectionable cast the first stone.

I assume you have some reason you think it matters that we can't use mathematics to come up with a specific objective prior probability that each model is accurate?

Edit: also, I note that I am doing a lot of internal translation of stuff like "the theory is true" into "the model makes accurate predictions of future observations" to fit into my ontology. Is this a valid translation, or is there some situation where someone might believe a true theory that would nevertheless lead them to make less accurate predictions about their future observations?

Other threats have come (nuclear weapons) and we've always come through them

I would actually really like to see a rebuttal of this one, because the doomer logic (which looks correct to me) implies that we should all have died decades ago in nuclear fire. Or, failing that, that we should all be dead of an engineered plague.

And yet here we are.

The constant debates between the Napoleon deniers and their opponents are sucking all the air out of the room. What do you do?

Containment thread? It worked pretty well for covid, when covid stuff was sucking all the air out of the room.

Morality has nothing to do with game theory

I disagree pretty strongly with that -- I think that "Bob is a moral person" and "people who are affected by Bob's actions generally would have been worse off if Bob's actions didn't affect them" are, if not quite synonymous, at least rhyming. The golden rule works pretty alright in simple cases without resorting to game theory, but I think game theory can definitely help in terms of setting up incentives such that people are not punished for doing the moral thing / incentivized to do the immoral thing, and that properly setting up such incentives is itself a moral good.

I mean in this case given the relative military strength I think it's more like the horse and the weasel voting on what's for dinner. I think the horse will be just fine.

There's probably even a few people doing that! But it's not the bulk of what we're seeing.

What you're seeing is driven largely by what is most outrageous to see, and thus most likely to be shared and appear on your feeds and in the news. The people saying "damn this sucks, I don't even know what a good solution looks like but murdering innocent civilians in their homes for offenses committed by their countrymen doesn't seem like a good solution" are not having their opinions amplified to the whole world.

Maybe I just have an unusually levelheaded community, but most of the takes I've heard from people I actually know in real life look more like "damn this sucks, I hope it doesn't get too much worse" than like cheering for the deaths of Israeli or Palestinian civilians.

Can this changeless, fully actual thing have downstream causal effects? If not, I don't think "we call this changeless, fully actual thing God, and God is the reason the universe exists" works as an argument for the existence of God-as-the-thing-that-caused-the-universe-to-exist.

If the changeless thing is allowed to causally affect things, the question becomes "but where did the system that contains the changeless thing and also the changeable universe come from, since the changing universe can't be a part of the changeless thing".

Interesting - the error when you look at the votes here isn't just a 403?

Again, that interpretation is nice if correct. Can you point to anything in the document which supports the interpretation that saying "We have assessed that leaving this debug interface provides user benefit because the debug interface allows the user to debug" would actually be sufficient justification?

My mental model is "it's probably fine if you know people at the regulatory agency, and probably fine if you don't attract any regulatory scrutiny, and likely not to be fine if the regulator hates your guts and wants to make an example of you, or if the regulator's golf buddy is an executive at your competitor". If your legal team approves it, I expect it to be on the basis of "the regulator has not historically gone after anyone who put anything even vaguely plausible down in one of these, so just put down something vaguely plausible and we'll be fine unless the regulator has it out for us specifically". But if anything goes as long as it's a vaguely plausible answer to something resembling the question on the form, and as long as it's not a blatant lie about your product where you provably know that you're lying, I don't expect that to help very much with IoT security.

And yes, I get that "the regulator won't look at you unless something goes wrong, and if something does go wrong they'll look through your practices until they find something they don't like" is how most things work. But I think that's a bad thing, and the relative rarity of that sort of thing in tech is why tech is one of the few remaining productive and relatively-pleasant-to-work-in industries. You obviously do sometimes need regulation. But in a lot of cases, probably including this one, the rules already on the books would be sufficient if they were consistently enforced. In fact they are rarely enforced, and the conclusion people come to is "the current regulations aren't working and so we need to add more regulations" rather than "we should try being more consistent at sticking to the rules that are already on the books". So you end up with even more vague regulations that most companies make token attempts to cover their asses on but otherwise ignore, and you end up in a state where full compliance with the rules is impractical but also generally not expected, until someone pisses off a regulator, at which point their behavior becomes retroactively unacceptable.

Edit: As a concrete example of the broader thing I'm pointing at, HIPAA is an extremely strict standard, and yet in practice hospital systems are often laughably insecure. Adding even more requirements on top of HIPAA would not help.

Why would you assume "aliens" not "previous Earth civilization" in that case?

Assuming that other consciousnesses exist does not produce better advance predictions of experiences

Sure it does! I talk about consciousness, and what I say about it is caused by how I myself experience consciousness. If consciousness exists in others, I expect them to describe experiences of consciousness similar to the ones I have, and if it doesn't exist in others, well then it's pretty weird that they'd talk about having conscious experiences that sound really similar to my conscious experiences for some reason that is not "they are experiencing the same thing I am". If others were p-zombies, then sure, all of their prior utterances may have sounded like they were generated by them being conscious, but absent a deeper understanding of how exactly their p-zombification worked, I could not use that to generate useful predictions of what their future utterances about consciousness would be (because, as we've established, the p-zombies are not just reporting on their internal state, but instead doing something else which is not that).

Modeling others as experiencing the same consciousness as I do does in fact lead to better advance predictions of my observations. It doesn't do so in a very philosophically satisfying way if you want to talk about axioms and proofs, but pragmatically speaking "other people are also conscious like me" sure does seem like a useful mental model for generating predictions.

I don't think reasoned beliefs are forced by evidence; I think they're chosen. He's arguing that specific beliefs aren't a choice, any more than believing 1+1 = 2 is a choice.

The choice of term "reasoned belief" instead of simply "belief" sounds like you mean something specific and important by that term. I'm not aware of that term having any particular meaning in any philosophical tradition I know about, but I also don't know much about philosophy.

He's arguing that specific beliefs aren't a choice, any more than believing 1+1 = 2 is a choice.

That sounds like the "anticipated experiences" meaning of "belief". I also cannot change those by sheer force of will. Can you? Is this another one of those less-than-universal human experiences similar to how some people just don't have mental imagery?

The larger point I'm hoping to get back to is that the deterministic model of reason that seems to be generally assumed is a fiction

I don't think I would classify probabilistic approaches like that as "deterministic models of reason".

But yeah I'm starting to lean towards "there's literally some bit of mental machinery for intentionally believing something that some people have".

I suspect that "belief", rather than "choice", is the word that you two are using differently. You can't choose your "beliefs(1)" in the sense of "what you anticipate what your future experiences will be contingent on taking some specific course of action", but you can choose your "beliefs(2)" in the sense of "which operating hypothesis you use to determine your actions".

I might be wrong though. It is conceivable to me that some people can change their beliefs(1) by sheer force of will.

If you do figure it out, I expect at least a LW post or two about it 🙏

If I do, I will definitely make an LW post or two about it. May or may not happen, I have quite a lot going on in the next two months (and then more going on after that, because a lot of the stuff going on is "baby in 2 months").

I agree that this is how it'll likely work out (and it does in smart humans), but isn't that tantamount to enforcing internal consistency, just under adversarial stimulus?

I think the disagreement is more about how often the adversarial stimulus comes about. I expect that in most cases, it's not worth it to generate such an adversarial stimulus (i.e. it costs more than 0.01 A for an adversary to find that trade cycle, so if they can only expect to run the cycle once it's not worth it). So such an agent would trend towards an internally consistent equilibrium, given a bunch of stimuli like that, but probably not very quickly and the returns on becoming more coherent likely diminish very steeply (because the cost of incoherence decreases as the magnitude decreases, and also the frequency of exploitation should decrease as the payoff for exploitation decreases, so the rate of convergence should slow down more than linearly over time).
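To make that concrete with entirely invented numbers: the adversary only bothers hunting for a cycle while the expected haul beats the search cost, so the pressure to become more coherent dies off once the residual incoherence is small:

```python
# Toy model of "returns to coherence diminish" (all numbers invented).
search_cost = 0.01   # what the adversary pays to find an exploitable trade cycle
incoherence = 1.0    # how much one pass through the cycle extracts from the agent
shrink = 0.5         # the agent patches half of the gap each time it gets burned

step = 0
while incoherence > search_cost:   # adversary only exploits while it's profitable
    step += 1
    print(f"exploit #{step}: adversary nets {incoherence - search_cost:.4f}")
    incoherence *= shrink          # agent becomes more coherent in response

print(f"residual incoherence {incoherence:.4f} is below the search cost;")
print("exploitation stops, and the agent stays permanently, mildly incoherent.")
```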

Ah, would that I had enough money to throw at a housefly and hope to stun it, but at least you're putting yours to noble ends haha.

That'll change with the officially becoming a doctor thing, I expect. And also becoming a doctor helps rather more directly with the whole pandemic preparedness thing.

I’ll be honest I have come down on the Toner being correct and Altman deserved to be fired side of the coin.

I think if the board had just led with that a lot of people would have agreed. "Leader tries to dismantle the structures that hold him accountable" is a problem that people know very well, and "get rid of leader" is not a controversial solution to that problem.

But in fact the board accused Altman of being a lying liar and then refused to stand behind that accusation, even to the subsequent CEOs.

There's gotta be something else going on.

I claim the only effective way to do that in a way that avoids exploitation is very intelligent consequentialism.

I claim that doesn't work either, if your environment is adversarial, because the difference between your model of the expected consequences of your actions and the actual realized consequences of your actions can be exploited. This doesn't even require an adversary that is generally more intelligent than you, just an adversary that notes a specific blind spot you have (see how humans can beat the wildly superhuman Go engine KataGo by exploiting a very specific blind spot it has in its world model).

The majority of both plants' products work. If you have applications that need only a few working chips of a certain type, or individual working chips, you can use chips from either company without much bother.

But say you need 10 working chips of the same type, from the same factory. A single failure means the product is worthless. 0.97^10 ≈ 0.74, while 0.94^10 ≈ 0.54.

On the flip side, if you have a test for chip quality which can diagnose bad chips with sensitivity and specificity of 75%, you can use that test to get from a 6% bad chip rate to a 2% bad chip rate if you're willing to throw away a little over a quarter of your chips. (Math: out of 1000 chips, there will be 705 good chips the test says are good, 235 good chips the test says are bad, 15 bad chips the test says are good, and 45 bad chips the test says are bad).

Even pretty crappy tests (0.75 is a terrible number for both sensitivity and specificity) can get you massive advantages over just relying on base rates.
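Spelling out the arithmetic (nothing assumed beyond the rates above):

```python
# The screening arithmetic from the paragraph above, spelled out.
n = 1000
bad_rate = 0.06
sensitivity = 0.75   # P(test flags chip | chip is bad)
specificity = 0.75   # P(test passes chip | chip is good)

good, bad = n * (1 - bad_rate), n * bad_rate            # 940 good, 60 bad
good_pass = good * specificity                          # 705 good chips pass
good_fail = good * (1 - specificity)                    # 235 good chips discarded
bad_pass = bad * (1 - sensitivity)                      # 15 bad chips slip through
bad_fail = bad * sensitivity                            # 45 bad chips caught

kept = good_pass + bad_pass                             # 720 chips kept
print(f"discarded: {n - kept:.0f}/{n}")                 # 280, a bit over a quarter
print(f"bad rate among kept chips: {bad_pass / kept:.1%}")               # ~2.1%
print(f"10-chip product yield, unscreened: {(1 - bad_rate) ** 10:.2f}")  # 0.54
print(f"10-chip product yield, screened: {(1 - bad_pass / kept) ** 10:.2f}")  # ~0.81
```

Note the 10-chip yield jumps from 0.54 to about 0.81, which is the whole game for the hypothetical second factory.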

So, by that thought process

Do you want a huge population of Afghans, Ethiopians and Sub-Saharan Africans coming to your country?

If we can set up a process capable of predicting, not necessarily very well, just a bit better than chance, which particular applicants have an elevated risk of being a problem, and we're willing to use that process even if it unfairly rejects a significant fraction of applicants, then yes, I do want a huge (selected) population of immigrants from those countries coming here.

That said, I live in a country where the vast vast majority of residents have immigrant ancestors within the last 10 generations, and a solid quarter of them within the past generation (i.e. they are either themselves immigrants or their parents were). The "the country has a strong sense of solidarity because everyone belongs to the ethnic group that's always lived here" ship has not sailed, because that ship never arrived in the first place.

Why can't it be real? The Haber-Bosch process is at least as impactful of an "air + energy + water -> bulk useful material" process, and it's real and cost-effective.

Anyone who comes up with some process that

  1. Has low infrastructure costs
  2. Produces some industrially valuable product
  3. Spins up and down quickly, and tolerates long idle periods (i.e. starts producing the product as soon as you feed it power, stops when you stop feeding it power, and doesn't have issues if it doesn't start again for a long time)

has a license to print money when power costs dip to zero or below. Which they already do from time to time, and if solar power continues to be deployed more and more, that situation will happen more often.

Terraform's "power -> methane" thing certainly isn't efficient, compared to other forms of grid energy storage, but what it is is scalable. Basically it seems to be a bet on "power prices will be zero / negative some fraction of the time in some locations", which seems likely to happen if solar keeps being deployed at the current rate, or if any country anywhere in the world gets serious about fission power.
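To see why criterion 1 above is load-bearing, here's a break-even sketch with entirely invented numbers (I don't know Terraform's actual figures):

```python
# Break-even sketch for "print money when power is free".
# Every number here is invented for illustration.
capex_per_kw = 100     # $ of plant per kW of input power (assumed, aggressively cheap)
free_fraction = 0.15   # fraction of hours with zero/negative power prices (assumed)
kwh_per_m3 = 20        # electricity per cubic meter of methane (assumed, ~50% efficient)
price_per_m3 = 0.40    # $ wholesale methane price (assumed)

hours_per_year = 8760 * free_fraction              # productive hours per year
m3_per_kw_year = hours_per_year / kwh_per_m3       # methane output per kW of plant
revenue_per_kw_year = m3_per_kw_year * price_per_m3
print(f"{hours_per_year:.0f} free-power hours/year")
print(f"revenue: ${revenue_per_kw_year:.0f} per kW of plant per year")
print(f"simple payback: {capex_per_kw / revenue_per_kw_year:.1f} years")
```

Even with those generous assumptions, the plant sits idle ~85% of the time, so the capex per kW has to be very low for the payback period to beat the plant's lifetime. That's the bet.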

I drove a Corolla until it started giving me trouble (around 300,000 km), followed by a Prius until that started giving me trouble (around 400,000 km); both were IMO quite good cars. I think you should be able to get a lightly used one that is <10 years old within your budget in Scotland, and that should have all the creature comforts you want.

That said, for Bluetooth specifically, for $20 you can get a thing which plugs into the cigarette lighter of a car, does Bluetooth pairing, and then broadcasts on an FM frequency (choose a dead channel), which you can then tune your car radio to. In my experience they work well enough that you never think about them once you've done the initial 2 minutes of setup - your phone just automatically pairs when you get in the car, and the car speakers play what your phone is playing.

This line of argument reminds me of the "to get people to ride public transit, you don’t have to fix the issues with public transit, you just have to make the experience of traveling by car much much worse" argument I see sometimes.

The Against Malaria Foundation is a pretty solid choice, and is the one that makes up most of my charitable contributions. If you care more about quality than about quantity of life, you might also consider Deworm the World. Their pitch is also refreshingly concrete and not "woke" at all:

More than 913 million children are at risk for parasitic worm infections like soil-transmitted helminths and schistosomiasis.

These infections mainly occur in areas with inadequate sanitation, disproportionately affecting poor communities. Children infected with worms are often too sick or weak to attend school because their body can’t properly absorb nutrients. If left untreated, worm infections lead to anemia, malnourishment, impaired mental and physical development, and severe chronic illnesses.

A safe, effective, and low-cost solution does exist — in the form of a simple pill taken once or twice a year. Regular treatment reduces the spread of the disease and helps children stay in school and live healthier and more productive lives.

Since 2014, Deworm the World has helped deliver over 1.8 billion deworming treatments to children across several geographies – for less than 50 cents per treatment. We work closely with governments to implement high-quality and cost-effective mass deworming programs which are resulting in dramatic reductions in worm prevalence.

Every year, GiveWell publishes a detailed analysis of the cost effectiveness of each charity in a spreadsheet that documents their assumptions and their model. If you care to do so, you can also make a copy of the spreadsheet and plug in your own numbers, though I basically never do that.

But yeah, no reason to give money to a global health charity that has politics you hate. The impact per dollar between the listed global health charities just doesn't vary by all that much.